Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry
2017-05-01
The United States' Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies for which total maximum daily loads (TMDLs) of pollution inputs into water bodies are developed. Decision-making procedures about how to list, or delist, water bodies as impaired, or not, per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to achieve Type I and Type II error rates comparable to those of the current fixed-sample binomial test. Policymakers might consider efficient alternatives to the current procedure, such as the SPRT.
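For concreteness, here is a minimal sketch of the kind of SPRT contrasted with the fixed-sample binomial test, written for Bernoulli (exceed/comply) monitoring outcomes. The exceedance rates p0 and p1 and the error rates below are illustrative placeholders, not California's regulatory values:

    import math

    def sprt(samples, p0, p1, alpha=0.05, beta=0.05):
        """Wald's SPRT for a Bernoulli exceedance probability.

        samples: iterable of 0/1 indicators (1 = sample exceeds the water
        quality standard). p0/p1 are the hypothesized acceptable and
        impaired exceedance rates (illustrative, not regulatory values).
        """
        lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
        upper = math.log((1 - beta) / alpha)   # reject H0 at or above this
        llr, n = 0.0, 0
        for n, x in enumerate(samples, start=1):
            # log-likelihood-ratio increment for one observation
            llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "list as impaired", n
            if llr <= lower:
                return "do not list", n
        return "keep sampling", n

    print(sprt([0, 1, 0, 0, 1, 1, 1, 1], p0=0.10, p1=0.25))

The Wald boundaries hold both error rates approximately at their nominal levels while typically stopping well before a fixed-sample design would.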
Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O
2004-07-30
The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and estimating prevalence based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real data sets and one simulated data set, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation.
Harold R. Offord
1966-01-01
Sequential sampling based on a negative binomial distribution of Ribes populations required less than half the time taken by regular systematic line-transect sampling in a comparison test. It gave the same control decision as the regular method in 9 of 13 field trials. A computer program that permits sequential plans to be built readily for other white pine regions is...
Lara, Jesus R; Hoddle, Mark S
2015-08-01
Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making for O. perseae in California. An initial set of sequential binomial sampling models were developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with leaf-infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed.
McGraw, Benjamin A; Koppenhöfer, Albrecht M
2009-06-01
Binomial sequential sampling plans were developed to forecast larval damage to golf course turfgrass by the weevil Listronotus maculicollis Kirby (Coleoptera: Curculionidae) and to aid in the development of integrated pest management programs for the weevil. Populations of emerging overwintered adults were sampled over a 2-yr period to determine the relationship between adult counts, larval density, and turfgrass damage. Larval density and composition of preferred host plants (Poa annua L.) significantly affected the expression of turfgrass damage. Multiple regression indicates that damage may occur in moderately mixed P. annua stands with as few as 10 larvae per 0.09 m2. However, > 150 larvae were required before damage became apparent in pure Agrostis stolonifera L. plots. Adult counts during peaks in emergence as well as cumulative counts across the emergence period were significantly correlated to future densities of larvae. Eight binomial sequential sampling plans based on two tally thresholds for classifying infestation (T = 1 and T = 2 adults) and four adult density thresholds (0.5, 0.85, 1.15, and 1.35 per 3.34 m2) were developed to forecast the likelihood of turfgrass damage by using adult counts during peak emergence. Resampling for Validation of Sampling Plans (RVSP) software was used to validate sampling plans with field-collected data sets. All sampling plans delivered accurate classifications (correct decisions were made between 84.4 and 96.8% of the time) in a practical timeframe (average sampling cost < 22.7 min).
Galvan, T L; Burkness, E C; Hutchison, W D
2007-06-01
To develop a practical integrated pest management (IPM) system for the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in wine grapes, we assessed the spatial distribution of H. axyridis and developed eight sampling plans to estimate adult density or infestation level in grape clusters. We used 49 data sets collected from commercial vineyards in 2004 and 2005, in Minnesota and Wisconsin. Enumerative plans were developed using two precision levels (0.10 and 0.25); the six binomial plans reflected six unique action thresholds (3, 7, 12, 18, 22, and 31% of cluster samples infested with at least one H. axyridis). The spatial distribution of H. axyridis in wine grapes was aggregated, independent of cultivar and year, but it was more randomly distributed as mean density declined. The average sample number (ASN) for each sampling plan was determined using resampling software. For research purposes, an enumerative plan with a precision level of 0.10 (SE/mean) resulted in a mean ASN of 546 clusters. For IPM applications, the enumerative plan with a precision level of 0.25 resulted in a mean ASN of 180 clusters. In contrast, the binomial plans resulted in much lower ASNs and provided high probabilities of arriving at correct "treat or no-treat" decisions, making these plans more efficient for IPM applications. For a tally threshold of one adult per cluster, the operating characteristic curves for the six action thresholds provided binomial sequential sampling plans with mean ASNs of only 19-26 clusters, and probabilities of making correct decisions between 83 and 96%. The benefits of the binomial sampling plans are discussed within the context of improving IPM programs for wine grapes.
Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino
2015-09-01
The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample to evaluate the pest density. Action thresholds of 36 and 48%, 43 and 56% and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits respectively. Green's method was a more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with probabilities of error of <0.10. The enumerative sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits respectively. © 2014 Society of Chemical Industry.
Burkness, Eric C; Hutchison, W D
2009-10-01
Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
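The operating characteristic (OC) and average sample number (ASN) evaluations described above can be approximated with plain Monte Carlo, without specialized resampling software. A sketch, assuming a simple Wald SPRT on the proportion of infested plants with the boundary values quoted in the abstract (0.05, 0.15, alpha = beta = 0.1); the remaining settings are illustrative:

    import math, random

    def simulate_oc_asn(p_true, p0=0.05, p1=0.15, alpha=0.1, beta=0.1,
                        max_n=500, reps=2000, seed=1):
        """Monte Carlo OC and ASN for a Wald SPRT on proportion infested."""
        rng = random.Random(seed)
        lo = math.log(beta / (1 - alpha))
        hi = math.log((1 - beta) / alpha)
        treat, n_used = 0, 0
        for _ in range(reps):
            llr, n = 0.0, 0
            while lo < llr < hi and n < max_n:
                x = rng.random() < p_true   # is this plant infested?
                llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
                n += 1
            treat += llr >= hi
            n_used += n
        return 1 - treat / reps, n_used / reps   # (OC = P(no-treat), ASN)

    for p in (0.02, 0.05, 0.10, 0.15, 0.25):
        oc, asn = simulate_oc_asn(p)
        print(f"p={p:.2f}  OC={oc:.2f}  ASN={asn:.1f}")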
Silva, Ivair R
2018-01-15
Type I error probability spending functions are commonly used for designing sequential analysis of binomial data in clinical trials, and their use is also quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, it is still important to minimize the sample size when the null hypothesis is not rejected. In post-market drug and vaccine safety surveillance, by contrast, that is not important: especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more indicated for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis.
Simplified pupal surveys of Aedes aegypti (L.) for entomologic surveillance and dengue control.
Barrera, Roberto
2009-07-01
Pupal surveys of Aedes aegypti (L.) are useful indicators of risk for dengue transmission, although sample sizes for reliable estimations can be large. This study explored two methods for making pupal surveys more practical yet reliable, using data from 10 pupal surveys conducted in Puerto Rico during 2004-2008. The number of pupae per person for each sampling followed a negative binomial distribution, thus showing aggregation. One method found a common aggregation parameter (k) for the negative binomial distribution, a finding that enabled the application of a sequential sampling method requiring few samples to determine whether the number of pupae/person was above a vector density threshold for dengue transmission. A second approach used the finding that the mean number of pupae/person is correlated with the proportion of pupa-infested households and calculated equivalent threshold proportions of pupa-positive households. A sequential sampling program was also developed for this method to determine whether observed proportions of infested households were above threshold levels. These methods can be used to validate entomological thresholds for dengue transmission.
Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T
2007-01-01
Sequential sampling uses samples of variable size and has the advantage of reducing sampling time and costs compared with fixed-size sampling. To support adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio' at five, nine and 15 years of age. Twenty samplings were performed in the whole area of each stand by observing the presence or absence of scales on plants, with plots comprising ten plants. After observing that in all three stands the scale population was distributed according to the contagious model, fitting the negative binomial distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans, an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to reach a control decision were 172 and 76 for stands with low and high infestation, respectively.
Paula-Moraes, S; Burkness, E C; Hunt, T E; Wright, R J; Hein, G L; Hutchison, W D
2011-12-01
Striacosta albicosta (Smith) (Lepidoptera: Noctuidae) is a native pest of dry beans (Phaseolus vulgaris L.) and corn (Zea mays L.). As a result of larval feeding damage on corn ears, S. albicosta has a narrow treatment window; thus, early detection of the pest in the field is essential, and egg mass sampling has become a popular monitoring tool. Three action thresholds for field and sweet corn currently are used by crop consultants, including 4% of plants infested with egg masses on sweet corn in the silking-tasseling stage, 8% of plants infested with egg masses on field corn with approximately 95% tasseled, and 20% of plants infested with egg masses on field corn during mid-milk-stage corn. The current monitoring recommendation is to sample 20 plants at each of five locations per field (100 plants total). In an effort to develop a more cost-effective sampling plan for S. albicosta egg masses, several alternative binomial sampling plans were developed using Wald's sequential probability ratio test, and validated using Resampling for Validation of Sampling Plans (RVSP) software. The benefit-cost ratio also was calculated and used to determine the final selection of sampling plans. Based on final sampling plans selected for each action threshold, the average sample number required to reach a treat or no-treat decision ranged from 38 to 41 plants per field. This represents a significant savings in sampling cost over the current recommendation of 100 plants.
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods.
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.
Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C
2017-04-01
The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, in two seasons with cotton cultivar FM 993. A 10,000-m2 area of cotton was subdivided into 100 10- by 10-m plots, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of squares attacked. A 5% security level was adopted for the elaboration of the sequential sampling plan. The type I and type II error rates were set at 0.05, as recommended for studies with insects. The fitting of frequency distributions was divided into two phases: the model that best fit the data was the negative binomial distribution up to 85 DAE (Phase I), and from there on the best fit was the Poisson distribution (Phase II). The equations that define decision-making for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II they are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicates that the maximum numbers of sample units expected for decision-making are ∼39 and ∼31 samples for Phases I and II, respectively.
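The published decision lines translate directly into a field decision rule. A small helper, taking the lines S0 and S1 quoted above at face value:

    def boll_weevil_decision(cum_attacked, n, phase="I"):
        """Classify sampling status from the abstract's decision lines.

        cum_attacked: cumulative number of squares with punctures after
        n sample units. Phase I applies up to 85 DAE, Phase II after.
        """
        a, b = (5.1743, 0.5730) if phase == "I" else (4.2479, 0.5771)
        s0, s1 = -a + b * n, a + b * n
        if cum_attacked <= s0:
            return "no control needed"
        if cum_attacked >= s1:
            return "apply control"
        return "keep sampling"

    print(boll_weevil_decision(cum_attacked=18, n=20, phase="I"))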
Clarke-Harris, Dionne; Fleischer, Shelby J
2003-06-01
Although the production and economic importance of vegetable amaranth, Amaranthus viridis L. and A. dubius Mart. ex Thell., are increasing in diversified peri-urban farms in Jamaica, lepidopteran herbivory is common even during weekly pyrethroid applications. We developed and validated a sampling plan, and investigated insecticides with new modes of action, for a complex of five species (Pyralidae: Spoladea recurvalis (F.), Herpetogramma bipunctalis (F.); Noctuidae: Spodoptera exigua (Hubner), S. frugiperda (J. E. Smith), and S. eridania Stoll). Significant within-plant variation occurred with H. bipunctalis, and a six-leaf sample unit including leaves from the inner and outer whorl was selected to sample all species. Larval counts best fit a negative binomial distribution. We developed a sequential sampling plan using a threshold of one larva per sample unit and the fitted distribution with a common k (k_c) of 0.645. When compared with a fixed plan of 25 plants, sequential sampling recommended the same management decision on 87.5%, additional samples on 9.4%, and gave inaccurate recommendations on 3.1% of 32 farms, while reducing sample size by 46%. Insecticide frequency was reduced 33-60% when management decisions were based on sampled data compared with grower standards, with no effect on crop damage. Damage remained high or variable (10-46%) with pyrethroid applications. Lepidopteran control was dramatically improved with ecdysone agonists (tebufenozide) or microbial metabolites (spinosyns and emamectin benzoate). This work facilitates resistance management efforts concurrent with the introduction of newer modes of action for lepidopteran control in leafy vegetable production in the Caribbean.
Grigolli, J F J; Souza, L A; Fernandes, M G; Busoli, A C
2017-08-01
The cotton boll weevil Anthonomus grandis Boheman (Coleoptera: Curculionidae) is the main pest in cotton crop around the world, directly affecting cotton production. In order to establish a sequential sampling plan, it is crucial to understand the spatial distribution of the pest population and the damage it causes to the crop through the different developmental stages of cotton plants. Therefore, this study aimed to investigate the spatial distribution of adults in the cultivation area and their oviposition and feeding behavior throughout the development of the cotton plants. The experiment was conducted in Maracaju, Mato Grosso do Sul, Brazil, in the 2012/2013 and 2013/2014 growing seasons, in an area of 10,000 m2, planted with the cotton cultivar FM 993. The experimental area was divided into 100 plots of 100 m2 (10 × 10 m) each, and five plants per plot were sampled weekly throughout the crop cycle. The number of flower buds with feeding and oviposition punctures and of adult A. grandis was recorded throughout the crop cycle in five plants per plot. After determining the aggregation indices (variance/mean ratio, Morisita's index, exponent k of the negative binomial distribution, and Green's coefficient) and adjusting the frequencies observed in the field to the distribution of frequencies (Poisson, negative binomial, and positive binomial) using the chi-squared test, it was observed that flower buds with punctures derived from feeding, oviposition, and feeding + oviposition showed an aggregated distribution in the cultivation area until 85 days after emergence and a random distribution after this stage. The adults of A. grandis presented a random distribution in the cultivation area.
Aly, Sharif S; Zhao, Jianyang; Li, Ben; Jiang, Jiming
2014-01-01
The Intraclass Correlation Coefficient (ICC) is commonly used to estimate the similarity between quantitative measures obtained from different sources. Overdispersed data are traditionally transformed so that a linear mixed model (LMM)-based ICC can be estimated. A common transformation used is the natural logarithm. The reliability of environmental sampling of fecal slurry on freestall pens has been estimated for Mycobacterium avium subsp. paratuberculosis using natural-logarithm-transformed culture results. Recently, the negative binomial ICC was defined based on a generalized linear mixed model for negative binomial distributed data. The current study reports on the negative binomial ICC estimate, which includes fixed effects, using culture results of environmental samples. Simulations using a wide variety of inputs and negative binomial distribution parameters (r; p) showed better performance of the new negative binomial ICC compared to the ICC based on the LMM, even when the negative binomial data were logarithm- or square-root-transformed. A second comparison that targeted a wider range of ICC values showed that the mean of the estimated ICC closely approximated the true ICC.
Adjusted Wald Confidence Interval for a Difference of Binomial Proportions Based on Paired Data
Bonett, Douglas G.; Price, Robert M.
2012-01-01
Adjusted Wald intervals for binomial proportions in one-sample and two-sample designs have been shown to perform about as well as the best available methods. The adjusted Wald intervals are easy to compute and have been incorporated into introductory statistics courses. An adjusted Wald interval for paired binomial proportions is proposed here and…
A Monte Carlo Risk Analysis of Life Cycle Cost Prediction.
1975-09-01
… process which occurs with each FLU failure. With this in mind there is no alternative other than the binomial distribution. … Weibull distribution of failures as selected by user. For each failure of the ith FLU, the model then samples from the binomial distribution to determine … which is sampled from the binomial. Neither of the two conditions for normality are met, i.e., that RTS be close to .5 and the number of samples close …
Simulation on Poisson and negative binomial models of count road accident modeling
Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.
2016-11-01
Accident count data have often been shown to have overdispersion. On the other hand, the data might contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, with the assumption that the dependent variable of the generated data follows a certain distribution, namely the Poisson or negative binomial distribution, with sample sizes ranging from n=30 to n=500. The study objective was accomplished by fitting Poisson regression, negative binomial regression and a hurdle negative binomial model to the simulated data. Model validity was compared, and the simulation results show that, for each sample size, not all models fit the data well even when the data were generated from the model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes produce more zero accident counts in the dataset.
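A comparable experiment is easy to reproduce. A sketch, assuming statsmodels is available, that generates negative binomial counts and compares Poisson and negative binomial GLM fits by AIC (the hurdle component is omitted for brevity; all values are illustrative):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n_obs, disp = 200, 1.0                      # disp: NB2 overdispersion
    X = sm.add_constant(rng.normal(size=n_obs))
    mu = np.exp(0.5 + 0.8 * X[:, 1])            # true mean accident count
    size = 1 / disp                             # NB2: var = mu + disp*mu^2
    y = rng.negative_binomial(size, size / (size + mu))

    pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=disp)).fit()
    print(f"Poisson AIC: {pois.aic:.1f}   NB AIC: {nb.aic:.1f}")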
Martínez-Ferrer, María Teresa; Ripollés, José Luís; Garcia-Marí, Ferran
2006-06-01
The spatial distribution of the citrus mealybug, Planococcus citri (Risso) (Homoptera: Pseudococcidae), was studied in citrus groves in northeastern Spain. Constant precision sampling plans were designed for all developmental stages of citrus mealybug under the fruit calyx, for late stages on fruit, and for females on trunks and main branches; more than 66, 286, and 101 data sets, respectively, were collected from nine commercial fields during 1992-1998. Dispersion parameters were determined using Taylor's power law, giving aggregated spatial patterns for citrus mealybug populations in the three locations of the tree sampled. A significant relationship between the number of insects per organ and the percentage of occupied organs was established using either Wilson and Room's binomial model or Kono and Sugino's empirical formula. Constant precision (E = 0.25) sampling plans (i.e., enumerative plans) for estimating mean densities were developed using Green's equation and the two binomial models. For making management decisions, enumerative counts may be less labor-intensive than binomial sampling. Therefore, we recommend enumerative sampling plans for use in an integrated pest management program in citrus. Required sample sizes for the range of population densities near current management thresholds in the three plant locations (calyx, fruit, and trunk) were 50, 110-330, and 30, respectively. Binomial sampling, especially the empirical model, required a higher sample size to achieve equivalent levels of precision.
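Constant-precision plans of this kind typically derive the required sample size from Taylor's power law, s^2 = a * m^b, with precision D defined as SE/mean, which gives n = a * m^(b-2) / D^2. A minimal sketch; a and b must come from a locally fitted power law, and the values passed in are illustrative:

    def taylor_sample_size(mean_density, a, b, precision=0.25):
        """Fixed-precision sample size n = a * m**(b - 2) / D**2, where
        s^2 = a * m**b is Taylor's power law and D = SE/mean."""
        return a * mean_density ** (b - 2) / precision ** 2

    # e.g., a = 3.0, b = 1.5 (hypothetical fit), mean density 2 per organ
    print(round(taylor_sample_size(2.0, a=3.0, b=1.5)))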
Generazio, Edward R.
2011-01-01
The capability of an inspection system is established by applications of various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that for a minimum flaw size and all greater flaw sizes, there is 0.90 probability of detection with 95% confidence (90/95 POD). Directed design of experiments for probability of detection (DOEPOD) has been developed to provide an efficient and accurate methodology that yields estimates of POD and confidence bounds for both hit-miss and signal amplitude testing, where signal amplitudes are reduced to hit-miss by using a signal threshold. Directed DOEPOD uses a nonparametric approach for the analysis of inspection data that does not require any assumptions about the particular functional form of a POD function. The DOEPOD procedure identifies, for a given sample set, whether or not the minimum requirement of 0.90 probability of detection with 95% confidence is demonstrated for a minimum flaw size and for all greater flaw sizes (90/95 POD). The DOEPOD procedures are sequentially executed in order to minimize the number of samples needed to demonstrate that there is a 90/95 POD lower confidence bound at a given flaw size and that the POD is monotonic for flaw sizes exceeding that 90/95 POD flaw size. The conservativeness of the DOEPOD methodology results is discussed. Validated guidelines for binomial estimation of POD for fracture critical inspection are established.
Aitken, C G
1999-07-01
It is thought that, in a consignment of discrete units, a certain proportion of the units contain illegal material. A sample of the consignment is to be inspected. Various methods for the determination of the sample size are compared. The consignment will be considered as a random sample from some super-population of units, a certain proportion of which contain drugs. For large consignments, a probability distribution, known as the beta distribution, for the proportion of the consignment which contains illegal material is obtained. This distribution is based on prior beliefs about the proportion. Under certain specific conditions the beta distribution gives the same numerical results as an approach based on the binomial distribution. The binomial distribution provides a probability for the number of units in a sample which contain illegal material, conditional on knowing the proportion of the consignment which contains illegal material. This is in contrast to the beta distribution which provides probabilities for the proportion of a consignment which contains illegal material, conditional on knowing the number of units in the sample which contain illegal material. The interpretation when the beta distribution is used is much more intuitively satisfactory. It is also much more flexible in its ability to cater for prior beliefs which may vary given the different circumstances of different crimes. For small consignments, a distribution, known as the beta-binomial distribution, for the number of units in the consignment which are found to contain illegal material, is obtained, based on prior beliefs about the number of units in the consignment which are thought to contain illegal material. As with the beta and binomial distributions for large samples, it is shown that, in certain specific conditions, the beta-binomial and hypergeometric distributions give the same numerical results. However, the beta-binomial distribution, as with the beta distribution, has a more intuitively satisfactory interpretation and greater flexibility. The beta and the beta-binomial distributions provide methods for the determination of the minimum sample size to be taken from a consignment in order to satisfy a certain criterion. The criterion requires the specification of a proportion and a probability.
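The large-consignment criterion can be made concrete in a few lines. A sketch, assuming a Beta(a, b) prior and the standard special case in which every inspected unit turns out to contain drugs; scipy is assumed available, and all parameter values are illustrative:

    from scipy import stats

    def min_sample_size(theta0=0.5, prob=0.95, a=1.0, b=1.0, n_max=1000):
        """Smallest n such that, if all n inspected units contain drugs,
        the Beta(a + n, b) posterior gives P(proportion > theta0) >= prob.
        The Beta(a, b) prior encodes prior beliefs; Beta(1, 1) is uniform.
        """
        for n in range(1, n_max + 1):
            if stats.beta.sf(theta0, a + n, b) >= prob:
                return n
        return None

    # Uniform prior: smallest n with P(theta > 0.5 | n drug-positive) >= 0.95
    print(min_sample_size())   # prints 4

With a uniform prior the posterior is Beta(1 + n, 1), so the criterion reduces to 0.5**(n + 1) <= 0.05, giving n = 4.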
On the p, q-binomial distribution and the Ising model
Lundow, P. H.; Rosengren, A.
2010-08-01
We employ p, q-binomial coefficients, a generalisation of the binomial coefficients, to describe the magnetisation distributions of the Ising model. For the complete graph this distribution corresponds exactly to the limit case p = q. We apply our investigation to the simple d-dimensional lattices for d = 1, 2, 3, 4, 5 and fit p, q-binomial distributions to our data, some of which are exact but most are sampled. For d = 1 and d = 5, the magnetisation distributions are remarkably well fitted by p, q-binomial distributions. For d = 4 we are only slightly less successful, while for d = 2, 3 we see some deviations (with exceptions!) between the p, q-binomial and the Ising distribution. However, at certain temperatures near T_c the statistical moments of the fitted distribution agree with the moments of the sampled data within the precision of sampling. We begin the paper by giving results on the behaviour of the p, q-distribution and its moment growth exponents given a certain parameterisation of p, q. Since the moment exponents are known for the Ising model (or at least approximately for d = 3) we can predict how p, q should behave and compare this to our measured p, q. The results speak in favour of the p, q-binomial distribution's correctness regarding its general behaviour in comparison to the Ising model. The full extent to which they correctly model the Ising distribution, however, is not settled.
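For reference, (p, q)-binomial coefficients are commonly defined through (p, q)-integers as below; the paper's exact parameterisation may differ in detail:

\[
\binom{n}{k}_{p,q} = \frac{[n]_{p,q}!}{[k]_{p,q}!\,[n-k]_{p,q}!},
\qquad
[n]_{p,q} = \frac{p^{n}-q^{n}}{p-q},
\qquad
[n]_{p,q}! = \prod_{i=1}^{n}[i]_{p,q},
\]

which reduces to the ordinary binomial coefficient at p = q = 1.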
Harrison, Xavier A
2015-01-01
Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed poorly when models contained <5 levels of the random intercept term, especially for estimating variance components, and this effect appeared independent of total sample size. These results suggest that OLRE are a useful tool for modelling overdispersion in Binomial data, but that they do not perform well in all circumstances and researchers should take care to verify the robustness of parameter estimates of OLRE models.
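The beta-binomial data-generating process at the heart of this comparison is easy to simulate. A sketch, assuming a Beta mixing distribution parameterised by mean p and intra-class correlation rho (illustrative values), showing how far the resulting variance exceeds the plain binomial variance:

    import numpy as np

    rng = np.random.default_rng(42)
    n_trials, n_obs, p, rho = 20, 1000, 0.3, 0.1   # rho: overdispersion

    # Beta-binomial: per-observation success probabilities drawn from a
    # Beta with mean p and intra-class correlation rho, then a binomial.
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    probs = rng.beta(a, b, size=n_obs)
    y = rng.binomial(n_trials, probs)

    binom_var = n_trials * p * (1 - p)   # variance if data were binomial
    print(f"sample variance: {y.var():.1f}  vs binomial: {binom_var:.1f}")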
Binomial leap methods for simulating stochastic chemical kinetics.
Tian, Tianhai; Burrage, Kevin
2004-12-01
This paper discusses efficient simulation methods for stochastic chemical kinetics. Based on the tau-leap and midpoint tau-leap methods of Gillespie [D. T. Gillespie, J. Chem. Phys. 115, 1716 (2001)], binomial random variables are used in these leap methods rather than Poisson random variables. The motivation for this approach is to improve the efficiency of the Poisson leap methods by using larger stepsizes. Unlike Poisson random variables whose range of sample values is from zero to infinity, binomial random variables have a finite range of sample values. This probabilistic property has been used to restrict possible reaction numbers and to avoid negative molecular numbers in stochastic simulations when larger stepsize is used. In this approach a binomial random variable is defined for a single reaction channel in order to keep the reaction number of this channel below the numbers of molecules that undergo this reaction channel. A sampling technique is also designed for the total reaction number of a reactant species that undergoes two or more reaction channels. Samples for the total reaction number are not greater than the molecular number of this species. In addition, probability properties of the binomial random variables provide stepsize conditions for restricting reaction numbers in a chosen time interval. These stepsize conditions are important properties of robust leap control strategies. Numerical results indicate that the proposed binomial leap methods can be applied to a wide range of chemical reaction systems with very good accuracy and significant improvement on efficiency over existing approaches.
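A much-simplified sketch of the core idea for a single unimolecular channel A -> B (not the paper's full multi-channel method): the firing count is drawn from a binomial rather than a Poisson, so it can never exceed the available molecules.

    import numpy as np

    def binomial_leap_step(n_a, c, tau, rng):
        """One binomial tau-leap step for the channel A -> B.

        Propensity a = c * n_a, so expected firings are a * tau. Drawing
        from Binomial(n_a, min(1, a*tau/n_a)) keeps the count <= n_a,
        so the molecule number can never go negative (unlike Poisson).
        """
        if n_a == 0:
            return 0
        p_fire = min(1.0, c * tau)     # a*tau / n_a = c*tau for this channel
        k = rng.binomial(n_a, p_fire)  # firings of A -> B in [t, t + tau)
        return n_a - k

    rng = np.random.default_rng(0)
    n_a = 1000
    for _ in range(10):
        n_a = binomial_leap_step(n_a, c=0.5, tau=0.1, rng=rng)
    print("A remaining:", n_a)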
Dispersion and sampling of adult Dermacentor andersoni in rangeland in Western North America.
Rochon, K; Scoles, G A; Lysyk, T J
2012-03-01
A fixed precision sampling plan was developed for off-host populations of adult Rocky Mountain wood tick, Dermacentor andersoni (Stiles), based on data collected by dragging at 13 locations in Alberta, Canada; Washington; and Oregon. In total, 222 site-date combinations were sampled. Each site-date combination was considered a sample, and each sample ranged in size from 86 to 250 quadrats of 10 m2. Analysis of simulated quadrats ranging in size from 10 to 50 m2 indicated that the most precise sample unit was the 10 m2 quadrat. Samples taken when abundance was < 0.04 ticks per 10 m2 were more likely not to depart significantly from statistical randomness than samples taken when abundance was greater. Data were grouped into ten abundance classes and assessed for fit to the Poisson and negative binomial distributions. The Poisson distribution fit only data in abundance classes < 0.02 ticks per 10 m2, while the negative binomial distribution fit data from all abundance classes. A negative binomial distribution with common k = 0.3742 fit data in eight of the 10 abundance classes. Both the Taylor and Iwao mean-variance relationships were fitted and used to predict sample sizes for a fixed level of precision. Sample sizes predicted using the Taylor model tended to underestimate actual sample sizes, while sample sizes estimated using the Iwao model tended to overestimate actual sample sizes. Using a negative binomial with common k provided estimates of required sample sizes closest to empirically calculated sample sizes.
Choosing a Transformation in Analyses of Insect Counts from Contagious Distributions with Low Means
W.D. Pepper; S.J. Zarnoch; G.L. DeBarr; P. de Groot; C.D. Tangren
1997-01-01
Guidelines based on computer simulation are suggested for choosing a transformation of insect counts from negative binomial distributions with low mean counts and high levels of contagion. Typical values and ranges of negative binomial model parameters were determined by fitting the model to data from 19 entomological field studies. Random sampling of negative binomial...
Use of the binomial distribution to predict impairment: application in a nonclinical sample.
Axelrod, Bradley N; Wall, Jacqueline R; Estes, Bradley W
2008-01-01
A mathematical model based on the binomial theory was developed to illustrate when abnormal score variations occur by chance in a multitest battery (Ingraham & Aiken, 1996). It has been successfully used as a comparison for obtained test scores in clinical samples, but not in nonclinical samples. In the current study, this model has been applied to demographically corrected scores on the Halstead-Reitan Neuropsychological Test Battery, obtained from a sample of 94 nonclinical college students. Results found that 15% of the sample had impairments suggested by the Halstead Impairment Index, using criteria established by Reitan and Wolfson (1993). In addition, one-half of the sample obtained impaired scores on one or two tests. These results were compared to that predicted by the binomial model and found to be consistent. The model therefore serves as a useful resource for clinicians considering the probability of impaired test performance.
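The underlying binomial computation is short. A sketch, assuming n independent tests and a common per-test probability of an "impaired" score (both values illustrative); scipy is assumed available:

    from scipy import stats

    def p_at_least(j, n_tests=10, p_impair=0.15):
        """P(j or more 'impaired' scores among n_tests independent tests),
        each flagged with probability p_impair (illustrative values)."""
        return stats.binom.sf(j - 1, n_tests, p_impair)

    for j in (1, 2, 3):
        print(j, round(p_at_least(j), 3))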
Pedroza, Claudia; Truong, Van Thi Thanh
2017-11-02
Analyses of multicenter studies often need to account for center clustering to ensure valid inference. For binary outcomes, it is particularly challenging to properly adjust for center when the number of centers or total sample size is small, or when there are few events per center. Our objective was to evaluate the performance of generalized estimating equation (GEE) log-binomial and Poisson models, generalized linear mixed models (GLMMs) assuming binomial and Poisson distributions, and a Bayesian binomial GLMM to account for center effect in these scenarios. We conducted a simulation study with few centers (≤30) and 50 or fewer subjects per center, using both a randomized controlled trial and an observational study design to estimate relative risk. We compared the GEE and GLMM models with a log-binomial model without adjustment for clustering in terms of bias, root mean square error (RMSE), and coverage. For the Bayesian GLMM, we used informative neutral priors that are skeptical of large treatment effects that are almost never observed in studies of medical interventions. All frequentist methods exhibited little bias, and the RMSE was very similar across the models. The binomial GLMM had poor convergence rates, ranging from 27% to 85%, but performed well otherwise. The results show that both GEE models need to use small sample corrections for robust SEs to achieve proper coverage of 95% CIs. The Bayesian GLMM had similar convergence rates but resulted in slightly more biased estimates for the smallest sample sizes. However, it had the smallest RMSE and good coverage across all scenarios. These results were very similar for both study designs. For the analyses of multicenter studies with a binary outcome and few centers, we recommend adjustment for center with either a GEE log-binomial or Poisson model with appropriate small sample corrections or a Bayesian binomial GLMM with informative priors.
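A sketch of the recommended GEE-Poisson (modified Poisson) analysis on simulated multicenter binary data, assuming statsmodels; the bias_reduced covariance option, one of the small-sample corrections discussed above, may depend on the statsmodels version, and all simulation settings are illustrative:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    n_centers, n_per = 15, 30
    center = np.repeat(np.arange(n_centers), n_per)
    treat = rng.integers(0, 2, size=n_centers * n_per)
    center_eff = rng.normal(0, 0.3, size=n_centers)[center]
    p = np.exp(np.log(0.2) + np.log(1.5) * treat + center_eff)  # true RR 1.5
    y = rng.binomial(1, np.clip(p, 0, 1))

    X = sm.add_constant(treat)
    model = sm.GEE(y, X, groups=center, family=sm.families.Poisson(),
                   cov_struct=sm.cov_struct.Exchangeable())
    res = model.fit(cov_type="bias_reduced")  # small-sample robust SEs
    print("estimated RR:", np.exp(res.params[1]))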
C-5A Cargo Deck Low-Frequency Vibration Environment
1975-02-01
[Front-matter residue: table of contents listing sample vibration calculations for the normal and binomial distributions, conclusions, references, and a calculation for the binomial distribution (vertical acceleration, right rear cargo deck).] The availability of large transport … the end of taxi. These peaks could then be used directly to compile the probability of occurrence of specific values of acceleration using the binomial …
Application of binomial-edited CPMG to shale characterization
Washburn, Kathryn E.; Birdwell, Justin E.
2014-01-01
Unconventional shale resources may contain a significant amount of hydrogen in organic solids such as kerogen, but it is not possible to directly detect these solids with many NMR systems. Binomial-edited pulse sequences capitalize on magnetization transfer between solids, semi-solids, and liquids to provide an indirect method of detecting solid organic materials in shales. When the organic solids can be directly measured, binomial-editing helps distinguish between different phases. We applied a binomial-edited CPMG pulse sequence to a range of natural and experimentally-altered shale samples. The most substantial signal loss is seen in shales rich in organic solids while fluids associated with inorganic pores seem essentially unaffected. This suggests that binomial-editing is a potential method for determining fluid locations, solid organic content, and kerogen–bitumen discrimination.
Kabaluk, J Todd; Binns, Michael R; Vernon, Robert S
2006-06-01
Counts of green peach aphid, Myzus persicae (Sulzer) (Hemiptera: Aphididae), in potato, Solanum tuberosum L., fields were used to evaluate the performance of the sampling plan from a pest management company. The counts were further used to develop a binomial sampling method, and both full count and binomial plans were evaluated using operating characteristic curves. Taylor's power law provided a good fit of the data (r2 = 0.95), with the relationship between the variance (s2) and mean (m) as ln(s2) = 1.81(+/- 0.02) + 1.55(+/- 0.01) ln(m). A binomial sampling method was developed using the empirical model ln(m) = c + d ln(-ln(1 - P(T))), to which the data fit well for tally numbers (T) of 0, 1, 3, 5, 7, and 10. Although T = 3 was considered the most reasonable given its operating characteristics and presumed ease of classification above or below critical densities (i.e., action thresholds) of one and 10 M. persicae per leaf, the full count method is shown to be superior. The mean number of sample sites per field visit by the pest management company was 42 +/- 19, with more than one-half (54%) of the field visits involving sampling 31-50 sample sites, which was acceptable in the context of operating characteristic curves for a critical density of 10 M. persicae per leaf. Based on operating characteristics, actual sample sizes used by the pest management company can be reduced by at least 50%, on average, for a critical density of 10 M. persicae per leaf. For a critical density of one M. persicae per leaf used to avert the spread of potato leaf roll virus, sample sizes from 50 to 100 were considered more suitable.
Testing the non-unity of rate ratio under inverse sampling.
Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing
2007-08-01
Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio and conditional score statistics. Three methods (the asymptotic, conditional exact, and Mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is expected and fluctuation of sizes around the pre-chosen nominal level is allowed, then the Mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study.
HYPERSAMP - HYPERGEOMETRIC ATTRIBUTE SAMPLING SYSTEM BASED ON RISK AND FRACTION DEFECTIVE
DeSalvo, L. J.
1994-01-01
HYPERSAMP is a demonstration of an attribute sampling system developed to determine the minimum sample size required for any preselected value for consumer's risk and fraction of nonconforming. This statistical method can be used in place of MIL-STD-105E sampling plans when a minimum sample size is desirable, such as when tests are destructive or expensive. HYPERSAMP utilizes the Hypergeometric Distribution and can be used for any fraction nonconforming. The program employs an iterative technique that circumvents the obstacle presented by the factorial of a non-whole number. HYPERSAMP provides the required Hypergeometric sample size for any equivalent real number of nonconformances in the lot or batch under evaluation. Many currently used sampling systems, such as the MIL-STD-105E, utilize the Binomial or the Poisson equations as an estimate of the Hypergeometric when performing inspection by attributes. However, this is primarily because of the difficulty in calculation of the factorials required by the Hypergeometric. Sampling plans based on the Binomial or Poisson equations will result in the maximum sample size possible with the Hypergeometric. The difference in the sample sizes between the Poisson or Binomial and the Hypergeometric can be significant. For example, a lot size of 400 devices with an error rate of 1.0% and a confidence of 99% would require a sample size of 400 (all units would need to be inspected) for the Binomial sampling plan and only 273 for a Hypergeometric sampling plan. The Hypergeometric results in a savings of 127 units, a significant reduction in the required sample size. HYPERSAMP is a demonstration program and is limited to sampling plans with zero defectives in the sample (acceptance number of zero). Since it is only a demonstration program, the sample size determination is limited to sample sizes of 1500 or less. The Hypergeometric Attribute Sampling System demonstration code is a spreadsheet program written for IBM PC compatible computers running DOS and Lotus 1-2-3 or Quattro Pro. This program is distributed on a 5.25 inch 360K MS-DOS format diskette, and the program price includes documentation. This statistical method was developed in 1992.
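The abstract's 400-unit example can be checked directly with the hypergeometric distribution. A sketch assuming scipy; it searches for the smallest zero-acceptance sample size meeting the confidence requirement:

    from scipy.stats import hypergeom

    def min_sample_size(lot, frac_bad, confidence):
        """Smallest n with P(0 nonconforming in sample) <= 1 - confidence,
        i.e. an acceptance number of zero, under the hypergeometric."""
        d = round(lot * frac_bad)           # nonconforming units in the lot
        for n in range(1, lot + 1):
            if hypergeom.pmf(0, lot, d, n) <= 1 - confidence:
                return n
        return lot

    print(min_sample_size(400, 0.01, 0.99))   # 273, matching the abstract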
Marginalized zero-inflated negative binomial regression with application to dental caries
Preisser, John S.; Das, Kalyan; Long, D. Leann; Divaris, Kimon
2015-01-01
The zero-inflated negative binomial regression model (ZINB) is often employed in diverse fields such as dentistry, health care utilization, highway safety, and medicine to examine relationships between exposures of interest and overdispersed count outcomes exhibiting many zeros. The regression coefficients of ZINB have latent class interpretations for a susceptible subpopulation at risk for the disease/condition under study with counts generated from a negative binomial distribution and for a non-susceptible subpopulation that provides only zero counts. The ZINB parameters, however, are not well-suited for estimating overall exposure effects, specifically, in quantifying the effect of an explanatory variable in the overall mixture population. In this paper, a marginalized zero-inflated negative binomial regression (MZINB) model for independent responses is proposed to model the population marginal mean count directly, providing straightforward inference for overall exposure effects based on maximum likelihood estimation. Through simulation studies, the finite sample performance of MZINB is compared to marginalized zero-inflated Poisson, Poisson, and negative binomial regression. The MZINB model is applied in the evaluation of a school-based fluoride mouthrinse program on dental caries in 677 children.
A fast least-squares algorithm for population inference.
Parry, R Mitchell; Wang, May D
2013-01-23
Population inference is an important problem in genetics used to remove population stratification in genome-wide association studies and to detect migration patterns or shared ancestry. An individual's genotype can be modeled as a probabilistic function of ancestral population memberships, Q, and the allele frequencies in those populations, P. The parameters, P and Q, of this binomial likelihood model can be inferred using slow sampling methods such as Markov Chain Monte Carlo methods or faster gradient based approaches such as sequential quadratic programming. This paper proposes a least-squares simplification of the binomial likelihood model motivated by a Euclidean interpretation of the genotype feature space. This results in a faster algorithm that easily incorporates the degree of admixture within the sample of individuals and improves estimates without requiring trial-and-error tuning. We show that the expected value of the least-squares solution across all possible genotype datasets is equal to the true solution when part of the problem has been solved, and that the variance of the solution approaches zero as its size increases. The Least-squares algorithm performs nearly as well as Admixture for these theoretical scenarios. We compare least-squares, Admixture, and FRAPPE for a variety of problem sizes and difficulties. For particularly hard problems with a large number of populations, small number of samples, or greater degree of admixture, least-squares performs better than the other methods. On simulated mixtures of real population allele frequencies from the HapMap project, Admixture estimates sparsely mixed individuals better than Least-squares. The least-squares approach, however, performs within 1.5% of the Admixture error. On individual genotypes from the HapMap project, Admixture and least-squares perform qualitatively similarly and within 1.2% of each other. Significantly, the least-squares approach nearly always converges 1.5- to 6-times faster. The computational advantage of the least-squares approach along with its good estimation performance warrants further research, especially for very large datasets. As problem sizes increase, the difference in estimation performance between all algorithms decreases. In addition, when prior information is known, the least-squares approach easily incorporates the expected degree of admixture to improve the estimate.
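A minimal sketch of the general idea, not the authors' algorithm: treat the scaled genotype matrix G/2 as approximately Q P and alternate nonnegative least-squares updates, renormalizing rows of Q onto the simplex (a crude stand-in for a properly constrained solver). All function and variable names are hypothetical.

```python
import numpy as np
from scipy.optimize import nnls

def least_squares_admixture(G, K, n_iter=50, seed=0):
    """Alternating NNLS for G/2 ~ Q P; G holds diploid genotypes coded 0/1/2."""
    rng = np.random.default_rng(seed)
    n, m = G.shape
    F = G / 2.0                                   # observed allele dosages in [0, 1]
    Q = rng.dirichlet(np.ones(K), size=n)         # admixture proportions (rows sum to 1)
    P = rng.uniform(0.05, 0.95, size=(K, m))      # ancestral allele frequencies
    for _ in range(n_iter):
        for j in range(m):                        # update allele frequencies
            P[:, j] = np.clip(nnls(Q, F[:, j])[0], 0.0, 1.0)
        for i in range(n):                        # update admixture proportions
            q, _ = nnls(P.T, F[i, :])
            s = q.sum()
            Q[i] = q / s if s > 0 else np.full(K, 1.0 / K)
    return Q, P

# Smoke test with K = 2 synthetic ancestral populations
rng = np.random.default_rng(1)
P_true = rng.uniform(0.05, 0.95, (2, 300))
Q_true = rng.dirichlet(np.ones(2), 40)
G = rng.binomial(2, Q_true @ P_true)              # simulated genotypes
Q_hat, P_hat = least_squares_admixture(G, K=2)
```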
Identifiability in N-mixture models: a large-scale screening test with bird data.
Kéry, Marc
2018-02-01
Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
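For readers unfamiliar with the model class under scrutiny, here is a minimal maximum-likelihood sketch of a Poisson binomial N-mixture model (constant λ and p, no covariates, latent abundance truncated at an assumed bound K). It is a generic textbook construction, not the paper's screening procedure.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def nmix_negloglik(params, Y, K=150):
    """Poisson binomial N-mixture: Y is (sites, visits); K truncates latent N."""
    lam = np.exp(params[0])
    p = 1.0 / (1.0 + np.exp(-params[1]))
    N = np.arange(K + 1)
    prior = stats.poisson.pmf(N, lam)             # abundance distribution
    ll = 0.0
    for y in Y:
        lik = prior.copy()
        for yt in y:
            lik *= stats.binom.pmf(yt, N, p)      # per-visit detection model
        ll += np.log(lik.sum() + 1e-300)
    return -ll

# Simulated check: 100 sites, 3 visits, true lambda = 5, true p = 0.4
rng = np.random.default_rng(1)
N_true = rng.poisson(5.0, size=100)
Y = rng.binomial(N_true[:, None], 0.4, size=(100, 3))
fit = minimize(nmix_negloglik, x0=[np.log(3.0), 0.0], args=(Y,), method="Nelder-Mead")
print(np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1])))  # lambda-hat, p-hat
```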
Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen
2017-01-01
Assessing equivalence or similarity has drawn much attention recently as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss sample size requirements for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size heavily depends on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data can easily cost up to 20% of power, depending on the value of the dispersion parameter.
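The paper's three-arm power function is not reproduced here; the sketch below makes the narrower point by Monte Carlo in a two-arm setting: with negative binomial outcomes, power at a fixed sample size degrades as the dispersion grows. The means, dispersion, and sample sizes are placeholders.

```python
import numpy as np
import statsmodels.api as sm

def nb_power(n, mu0, mu1, disp, n_sim=1000, seed=0):
    """Monte Carlo power of a Wald test for a rate ratio with NB outcomes.
    disp is the NB dispersion: Var = mu + disp * mu^2 (treated as known here)."""
    rng = np.random.default_rng(seed)
    r = 1.0 / disp                                # numpy's size parameter
    x = np.r_[np.zeros(n), np.ones(n)]
    X = sm.add_constant(x)
    rejections = 0
    for _ in range(n_sim):
        y = np.r_[rng.negative_binomial(r, r / (r + mu0), n),
                  rng.negative_binomial(r, r / (r + mu1), n)]
        res = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=disp)).fit()
        rejections += res.pvalues[1] < 0.05
    return rejections / n_sim

for disp in [0.1, 0.5, 1.0]:                      # power drops as dispersion grows
    print(disp, nb_power(n=60, mu0=2.0, mu1=3.0, disp=disp))
```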
The gambler's fallacy in penalty shootouts.
Braun, Sebastian; Schmidt, Ulrich
2015-07-20
A well-known bias in subjective perceptions of chance is the gambler's fallacy: people typically believe that a streak generated by a series of independent random draws, such as a coin toss, becomes increasingly more likely to break when the streak becomes longer. In a fascinating study, Misirlisoy and Haggard analysed sequential behavior of kickers and goalkeepers in penalty shootouts. They report that goalkeepers are prone to the gambler's fallacy: after a series of three kicks in the same direction, goalkeepers are more likely to dive in the opposite direction at the next kick. Here we argue, first, that a binomial test is more appropriate for testing gambler's fallacy than the tests employed by Misirlisoy and Haggard, and second, that penalty shootouts may not be well-suited to analyze the gambler's fallacy. Using a binomial test, we neither find statistically significant evidence for gambler's fallacy in Misirlisoy and Haggard's original data, nor in extended data, nor in data from an idealised laboratory experiment that we ran to address the second point. In line with Misirlisoy and Haggard's original result, we do, however, find evidence for a systematic pattern of goalkeeper's behavior that kickers could exploit. Copyright © 2015 Elsevier Ltd. All rights reserved.
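The binomial test the authors advocate is one line in scipy; the counts below are invented purely to show the call. Under the gambler's fallacy one would expect an excess of direction switches after a streak, hence the one-sided alternative.

```python
from scipy.stats import binomtest

# e.g., suppose the goalkeeper dived opposite to the streak on 20 of 30
# post-streak kicks (hypothetical numbers); test against chance (p = 0.5)
result = binomtest(20, n=30, p=0.5, alternative="greater")
print(result.pvalue)
```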
Exact tests using two correlated binomial variables in contemporary cancer clinical trials.
Yu, Jihnhee; Kepner, James L; Iyer, Renuka
2009-12-01
New therapy strategies for the treatment of cancer are rapidly emerging because of recent technology advances in genetics and molecular biology. Although newer targeted therapies can improve survival without measurable changes in tumor size, clinical trial conduct has remained nearly unchanged. When potentially efficacious therapies are tested, current clinical trial design and analysis methods may not be suitable for detecting therapeutic effects. We propose an exact method with respect to testing cytostatic cancer treatment using correlated bivariate binomial random variables to simultaneously assess two primary outcomes. The method is easy to implement. It does not increase the sample size over that of the univariate exact test and in most cases reduces the sample size required. Sample size calculations are provided for selected designs.
Bayesian inference for disease prevalence using negative binomial group testing
Pritchard, Nicholas A.; Tebbs, Joshua M.
2011-01-01
Group testing, also known as pooled testing, and inverse sampling are both widely used methods of data collection when the goal is to estimate a small proportion. Taking a Bayesian approach, we consider the new problem of estimating disease prevalence from group testing when inverse (negative binomial) sampling is used. Using different distributions to incorporate prior knowledge of disease incidence and different loss functions, we derive closed form expressions for posterior distributions and resulting point and credible interval estimators. We then evaluate our new estimators, on Bayesian and classical grounds, and apply our methods to a West Nile Virus data set. PMID:21259308
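The paper's closed-form posteriors are not reproduced here. As a generic stand-in, the sketch below computes a grid posterior for prevalence θ from pooled tests, assuming a perfect assay and a Beta prior; because the θ-dependent likelihood kernel is the same under binomial and inverse (negative binomial) sampling, the same code applies to both designs. Pool counts and prior parameters are hypothetical.

```python
import numpy as np
from scipy import stats

def pooled_prevalence_posterior(n_pools, n_pos, pool_size, a=1.0, b=10.0, m=2000):
    """Grid posterior for prevalence theta from pooled (group) tests.
    Assumes a perfect assay: a pool is positive iff it holds >= 1 positive unit."""
    theta = np.linspace(1e-6, 1 - 1e-6, m)
    p_pos = 1.0 - (1.0 - theta) ** pool_size           # P(pool tests positive)
    loglik = n_pos * np.log(p_pos) + (n_pools - n_pos) * np.log1p(-p_pos)
    logpost = loglik + stats.beta.logpdf(theta, a, b)  # Beta(a, b) prior
    post = np.exp(logpost - logpost.max())
    dt = theta[1] - theta[0]
    post /= post.sum() * dt
    return theta, post, (theta * post).sum() * dt      # grid, density, posterior mean

# e.g., pools of 10 units were tested until 5 positives were seen, 42 pools in total
theta, post, mean = pooled_prevalence_posterior(n_pools=42, n_pos=5, pool_size=10)
print(mean)
```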
O’Donnell, Katherine M.; Thompson, Frank R.; Semlitsch, Raymond D.
2015-01-01
Detectability of individual animals is highly variable and nearly always < 1; imperfect detection must be accounted for to reliably estimate population sizes and trends. Hierarchical models can simultaneously estimate abundance and effective detection probability, but there are several different mechanisms that cause variation in detectability. Neglecting temporary emigration can lead to biased population estimates because availability and conditional detection probability are confounded. In this study, we extend previous hierarchical binomial mixture models to account for multiple sources of variation in detectability. The state process of the hierarchical model describes ecological mechanisms that generate spatial and temporal patterns in abundance, while the observation model accounts for the imperfect nature of counting individuals due to temporary emigration and false absences. We illustrate our model’s potential advantages, including the allowance of temporary emigration between sampling periods, with a case study of southern red-backed salamanders Plethodon serratus. We fit our model and a standard binomial mixture model to counts of terrestrial salamanders surveyed at 40 sites during 3–5 surveys each spring and fall 2010–2012. Our models generated similar parameter estimates to standard binomial mixture models. Aspect was the best predictor of salamander abundance in our case study; abundance increased as aspect became more northeasterly. Increased time-since-rainfall strongly decreased salamander surface activity (i.e. availability for sampling), while higher amounts of woody cover objects and rocks increased conditional detection probability (i.e. probability of capture, given an animal is exposed to sampling). By explicitly accounting for both components of detectability, we increased congruence between our statistical modeling and our ecological understanding of the system. We stress the importance of choosing survey locations and protocols that maximize species availability and conditional detection probability to increase population parameter estimate reliability. PMID:25775182
van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels
2012-01-01
This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use RACE/A to model data from two variants of a picture-word interference task in a psychological refractory period design. These models will demonstrate how RACE/A enables interactions between sequential sampling and long-term declarative learning, and between sequential sampling and task control. In a traditional sequential sampling model, the onset of the process within the task is unclear, as is the number of sampling processes. RACE/A provides a theoretical basis for estimating the onset of sequential sampling processes during task execution and allows for easy modeling of multiple sequential sampling processes within a task. Copyright © 2011 Cognitive Science Society, Inc.
Cao, Qingqing; Wu, Zhenqiang; Sun, Ying; Wang, Tiezhu; Han, Tengwei; Gu, Chaomei; Sun, Yehuan
2011-11-01
The aim was to explore the application of negative binomial regression and modified Poisson regression analysis in identifying influential factors for injury frequency and risk factors leading to increased injury frequency. A total of 2917 primary and secondary school students were selected from Hefei by a cluster random sampling method and surveyed by questionnaire. The count data on injury events were used to fit modified Poisson regression and negative binomial regression models, and the risk factors increasing the frequency of unintentional injury among juvenile students were explored, so as to probe the efficiency of these two models in studying influential factors for injury frequency. A Lagrange multiplier test showed that the Poisson model suffered from over-dispersion (P < 0.0001); the over-dispersed data were therefore better fitted by the modified Poisson regression and negative binomial regression models. Both showed that male gender, younger age, a father working outside the hometown, a guardian educated above junior high school level, and smoking might be associated with higher injury frequencies. For clustered injury frequency data, both modified Poisson regression analysis and negative binomial regression analysis can be used; however, based on our data, the modified Poisson regression fitted better and could give a more accurate interpretation of the relevant factors affecting injury frequency.
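Both models in this abstract are available in statsmodels: the "modified Poisson" approach amounts to a Poisson GLM with robust (sandwich) standard errors, and the negative binomial is a GLM with a dispersion parameter. A hedged sketch on simulated overdispersed counts follows; the covariate and coefficients are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.binomial(1, 0.5, n)                       # hypothetical risk factor
mu = np.exp(0.2 + 0.5 * x)
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))  # overdispersed counts

X = sm.add_constant(x)
# Modified Poisson: Poisson GLM with robust HC0 covariance
poisson_robust = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
# Negative binomial GLM (alpha fixed here; sm.NegativeBinomial estimates it by ML)
negbin = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(poisson_robust.params, poisson_robust.bse)
print(negbin.params, negbin.bse)
```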
Liu, Lian; Zhang, Shao-Wu; Huang, Yufei; Meng, Jia
2017-08-31
As a newly emerged research area, RNA epigenetics has drawn increasing attention recently for the participation of RNA methylation and other modifications in a number of crucial biological processes. Thanks to high-throughput sequencing techniques such as MeRIP-Seq, transcriptome-wide RNA methylation profiles are now available in the form of count-based data, with which it is often of interest to study the dynamics of the epitranscriptomic layer. However, the sample size of RNA methylation experiments is usually very small due to their cost; additionally, there usually exist a large number of genes whose methylation level cannot be accurately estimated due to their low expression level, making differential RNA methylation analysis a difficult task. We present QNB, a statistical approach for differential RNA methylation analysis with count-based small-sample sequencing data. Compared with previous approaches such as the DRME model, which is based on a statistical test covering only the IP samples with two negative binomial distributions, QNB is based on four independent negative binomial distributions whose variances and means are linked by local regressions, so that the input control samples are also properly taken into account. In addition, unlike the DRME approach, which relies on the input control sample only for estimating the background, QNB uses a more robust estimator of gene expression that combines information from both input and IP samples, which can largely improve testing performance for very lowly expressed genes. QNB showed improved performance on both simulated and real MeRIP-Seq datasets when compared with competing algorithms, and the QNB model is also applicable to other datasets related to RNA modifications, including but not limited to RNA bisulfite sequencing, m1A-Seq, Par-CLIP, RIP-Seq, etc.
Phase II design with sequential testing of hypotheses within each stage.
Poulopoulou, Stavroula; Karlis, Dimitris; Yiannoutsos, Constantin T; Dafni, Urania
2014-01-01
The main goal of a Phase II clinical trial is to decide whether a particular therapeutic regimen is effective enough to warrant further study. The hypothesis tested by Fleming's Phase II design (Fleming, 1982) is H0: p ≤ p0 versus H1: p ≥ p1, with level α and with power 1 - β at p = p1, where p0 is chosen to represent the response probability achievable with standard treatment and p1 is chosen such that the difference p1 - p0 represents a targeted improvement with the new treatment. This hypothesis creates a misinterpretation, mainly among clinicians, that rejection of the null hypothesis is tantamount to accepting the alternative, and vice versa. As mentioned by Storer (1992), this introduces ambiguity in the evaluation of type I and II errors and the choice of the appropriate decision at the end of the study. Instead of testing this hypothesis, an alternative class of designs is proposed in which two hypotheses are tested sequentially. The hypothesis H0: p ≤ p0 versus H1: p > p0 is tested first. If this null hypothesis is rejected, the hypothesis H0: p < p1 versus H1: p ≥ p1 is tested next, in order to examine whether the therapy is effective enough to consider further testing in a Phase III study. For the derivation of the proposed design the exact binomial distribution is used to calculate the decision cut-points. The optimal design parameters are chosen so as to minimize the average sample number (ASN) under specific upper bounds for the error levels. The optimal values for the design were found using a simulated annealing method.
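The exact binomial machinery behind such cut-points is straightforward; the sketch below finds, for a single-stage test of H0: p ≤ p0, the smallest number of responses that rejects at level α. The n, p0, and α values are placeholders, and the paper's ASN-optimal multi-stage search is not attempted here.

```python
from scipy.stats import binom

def min_responses_to_reject(n, p0, alpha=0.05):
    """Smallest r with P(X >= r | n, p0) <= alpha, X ~ Binomial(n, p0)."""
    for r in range(n + 1):
        if binom.sf(r - 1, n, p0) <= alpha:
            return r
    return None

# e.g., n = 25 patients, standard-treatment response rate p0 = 0.20
print(min_responses_to_reject(25, 0.20))
```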
Performance review using sequential sampling and a practice computer.
Difford, F
1988-06-01
The use of sequential sample analysis for repeated performance review is described with examples from several areas of practice. The value of a practice computer in providing a random sample from a complete population, evaluating the parameters of a sequential procedure, and producing a structured worksheet is discussed. It is suggested that sequential analysis has advantages over conventional sampling in the area of performance review in general practice.
Crans, Gerald G; Shuster, Jonathan J
2008-08-15
The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level α = 0.05. Additionally, this numerical method is used to define new significance levels α* = α + ε, where ε is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified α (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using α* instead of α) in the two-sample comparative binomial trial. Copyright © 2008 John Wiley & Sons, Ltd.
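The size computation described can be reproduced in miniature: enumerate all 2×2 tables for two groups of n, mark which are rejected by FET at nominal α, and maximize the rejection probability over the common null response rate. This brute-force sketch (small n only) follows the same logic as, but is not, the authors' algorithm.

```python
import numpy as np
from scipy.stats import binom, fisher_exact

def fet_size(n, alpha=0.05, p_grid=np.linspace(0.01, 0.99, 99)):
    """Exact size of two-sided Fisher's exact test with two groups of n each."""
    reject = np.zeros((n + 1, n + 1))
    for x1 in range(n + 1):
        for x2 in range(n + 1):
            _, p = fisher_exact([[x1, n - x1], [x2, n - x2]])
            reject[x1, x2] = p <= alpha
    size = 0.0
    for p0 in p_grid:                              # maximize over the common null rate
        f = binom.pmf(np.arange(n + 1), n, p0)
        size = max(size, f @ reject @ f)
    return size

print(fet_size(20))                                # typically well below nominal 0.05
```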
A comparison of methods for the analysis of binomial clustered outcomes in behavioral research.
Ferrari, Alberto; Comelli, Mario
2016-12-01
In behavioral research, data consisting of a per-subject proportion of "successes" and "failures" over a finite number of trials often arise. This clustered binary data are usually non-normally distributed, which can distort inference if the usual general linear model is applied and sample size is small. A number of more advanced methods is available, but they are often technically challenging and a comparative assessment of their performances in behavioral setups has not been performed. We studied the performances of some methods applicable to the analysis of proportions; namely linear regression, Poisson regression, beta-binomial regression and Generalized Linear Mixed Models (GLMMs). We report on a simulation study evaluating power and Type I error rate of these models in hypothetical scenarios met by behavioral researchers; plus, we describe results from the application of these methods on data from real experiments. Our results show that, while GLMMs are powerful instruments for the analysis of clustered binary outcomes, beta-binomial regression can outperform them in a range of scenarios. Linear regression gave results consistent with the nominal level of significance, but was overall less powerful. Poisson regression, instead, mostly led to anticonservative inference. GLMMs and beta-binomial regression are generally more powerful than linear regression; yet linear regression is robust to model misspecification in some conditions, whereas Poisson regression suffers heavily from violations of the assumptions when used to model proportion data. We conclude providing directions to behavioral scientists dealing with clustered binary data and small sample sizes. Copyright © 2016 Elsevier B.V. All rights reserved.
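Beta-binomial modelling without covariates reduces to a two-parameter MLE that scipy handles directly; the sketch below fits (a, b) to per-subject success counts. It is a minimal illustration of the distributional machinery, not the regression models compared in the paper.

```python
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

def fit_betabinom(k, n):
    """MLE of beta-binomial (a, b) from per-subject successes k out of n trials."""
    def nll(log_ab):
        a, b = np.exp(log_ab)
        return -betabinom.logpmf(k, n, a, b).sum()
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
    return np.exp(res.x)

rng = np.random.default_rng(2)
p = rng.beta(2.0, 5.0, size=60)        # subject-level success probabilities
k = rng.binomial(20, p)                # 20 trials per subject
print(fit_betabinom(k, 20))            # estimates should be near (2, 5)
```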
Empirical Tests of Acceptance Sampling Plans
NASA Technical Reports Server (NTRS)
White, K. Preston, Jr.; Johnson, Kenneth L.
2012-01-01
Acceptance sampling is a quality control procedure applied as an alternative to 100% inspection. A random sample of items is drawn from a lot to determine the fraction of items which have a required quality characteristic. Both the number of items to be inspected and the criterion for determining conformance of the lot to the requirement are given by an appropriate sampling plan with specified risks of Type I and Type II sampling errors. In this paper, we present the results of empirical tests of the accuracy of selected sampling plans reported in the literature. These plans are for measurable quality characteristics which are known to have binomial, exponential, normal, gamma, Weibull, inverse Gaussian, or Poisson distributions. In the main, results support the accepted wisdom that variables acceptance plans are superior to attributes (binomial) acceptance plans, in the sense that they provide comparable protection against risks at reduced sampling cost. For the Gaussian and Weibull plans, however, there are ranges of the shape parameters for which the required sample sizes are in fact larger than those of the corresponding attributes plans, dramatically so for instances of large skew. Tests further confirm that the published inverse-Gaussian (IG) plan is flawed, as reported by White and Johnson (2011).
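For the attributes (binomial) side of such comparisons, a single-sampling plan (n, c) is conventionally chosen to meet producer's and consumer's risk points; a small search sketch follows, with all risk points invented for illustration.

```python
from scipy.stats import binom

def single_sampling_plan(p1, alpha, p2, beta, n_max=1000):
    """Smallest attributes plan (n, c): inspect n items, accept if defects <= c.
    Accept lots at quality p1 w.p. >= 1 - alpha; at quality p2 w.p. <= beta."""
    for n in range(1, n_max + 1):
        for c in range(n + 1):
            if binom.cdf(c, n, p1) >= 1 - alpha and binom.cdf(c, n, p2) <= beta:
                return n, c
    return None

# Hypothetical risk points: AQL 1%, LTPD 6%, alpha = 0.05, beta = 0.10
print(single_sampling_plan(0.01, 0.05, 0.06, 0.10))
```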
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
USDA-ARS?s Scientific Manuscript database
Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...
Zero-truncated negative binomial - Erlang distribution
NASA Astrophysics Data System (ADS)
Bodhisuwan, Winai; Pudprommarat, Chookait; Bodhisuwan, Rujira; Saothayanun, Luckhana
2017-11-01
The zero-truncated negative binomial-Erlang distribution is introduced. It is developed from the negative binomial-Erlang distribution. In this work, the probability mass function is derived and some properties are included. The parameters of the zero-truncated negative binomial-Erlang distribution are estimated by using maximum likelihood estimation. Finally, the proposed distribution is applied to real data on methamphetamine counts in Bangkok, Thailand. Based on the results, the zero-truncated negative binomial-Erlang distribution provides a better fit than the zero-truncated Poisson, zero-truncated negative binomial, zero-truncated generalized negative binomial and zero-truncated Poisson-Lindley distributions for these data.
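The NB-Erlang mixture itself is specific to the paper, but zero truncation is generic: the pmf is renormalized after removing the mass at zero. A sketch for the plain negative binomial, with placeholder parameters:

```python
import numpy as np
from scipy.stats import nbinom

def zt_nbinom_pmf(k, r, p):
    """Zero-truncated NB pmf: P(K = k | K >= 1) = pmf(k) / (1 - pmf(0))."""
    k = np.asarray(k)
    pmf = nbinom.pmf(k, r, p) / (1.0 - nbinom.pmf(0, r, p))
    return np.where(k >= 1, pmf, 0.0)

print(zt_nbinom_pmf(np.arange(0, 6), r=2.0, p=0.4))  # mass at 0 is exactly 0
```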
Belitz, Kenneth; Jurgens, Bryant C.; Landon, Matthew K.; Fram, Miranda S.; Johnson, Tyler D.
2010-01-01
The proportion of an aquifer with constituent concentrations above a specified threshold (high concentrations) is taken as a nondimensional measure of regional scale water quality. If computed on the basis of area, it can be referred to as the aquifer scale proportion. A spatially unbiased estimate of aquifer scale proportion and a confidence interval for that estimate are obtained through the use of equal area grids and the binomial distribution. Traditionally, the confidence interval for a binomial proportion is computed using either the standard interval or the exact interval. Research from the statistics literature has shown that the standard interval should not be used and that the exact interval is overly conservative. On the basis of coverage probability and interval width, the Jeffreys interval is preferred. If more than one sample per cell is available, cell declustering is used to estimate the aquifer scale proportion, and Kish's design effect may be useful for estimating an effective number of samples. The binomial distribution is also used to quantify the adequacy of a grid with a given number of cells for identifying a small target, defined as a constituent that is present at high concentrations in a small proportion of the aquifer. Case studies illustrate a consistency between approaches that use one well per grid cell and many wells per cell. The methods presented in this paper provide a quantitative basis for designing a sampling program and for utilizing existing data.
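The Jeffreys interval recommended above is just a pair of Beta quantiles with a half-unit added to the success and failure counts (plus the conventional endpoint adjustment at x = 0 or x = n). A sketch, with invented cell counts:

```python
from scipy.stats import beta

def jeffreys_interval(x, n, conf=0.90):
    """Jeffreys interval: quantiles of Beta(x + 1/2, n - x + 1/2),
    with the usual endpoint adjustment at x = 0 and x = n."""
    lo = beta.ppf((1 - conf) / 2, x + 0.5, n - x + 0.5) if x > 0 else 0.0
    hi = beta.ppf(1 - (1 - conf) / 2, x + 0.5, n - x + 0.5) if x < n else 1.0
    return lo, hi

# e.g., 3 of 30 equal-area cells with concentrations above the threshold
print(jeffreys_interval(3, 30))
```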
A Bayesian Framework for Reliability Analysis of Spacecraft Deployments
NASA Technical Reports Server (NTRS)
Evans, John W.; Gallo, Luis; Kaminsky, Mark
2012-01-01
Deployable subsystems are essential to mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications for which various standardized data sources are not applicable for estimating reliability and for assessing risks. In this study, a two stage sequential Bayesian framework for reliability estimation of spacecraft deployment was developed for this purpose. This process was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the Optical Telescope Element. Initially, detailed studies of NASA deployment history, "heritage information", were conducted, extending over 45 years of spacecraft launches. This information was then coupled to a non-informative prior and a binomial likelihood function to create a posterior distribution for deployments of various subsystems using Markov chain Monte Carlo sampling. Select distributions were then coupled to a subsequent analysis, using test data and anomaly occurrences on successive ground test deployments of scale model test articles of JWST hardware, to update the NASA heritage data. This allowed for a realistic prediction of the reliability of the complex Sunshield deployment, with credibility limits, within this two stage Bayesian framework.
Modeling abundance using multinomial N-mixture models
Royle, Andy
2016-01-01
Multinomial N-mixture models are a generalization of the binomial N-mixture models described in Chapter 6 to allow for more complex and informative sampling protocols beyond simple counts. Many commonly used protocols such as multiple observer sampling, removal sampling, and capture-recapture produce a multivariate count frequency that has a multinomial distribution and for which multinomial N-mixture models can be developed. Such protocols typically result in more precise estimates than binomial mixture models because they provide direct information about parameters of the observation process. We demonstrate the analysis of these models in BUGS using several distinct formulations that afford great flexibility in the types of models that can be developed, and we demonstrate likelihood analysis using the unmarked package. Spatially stratified capture-recapture models are one class of models that fall into the multinomial N-mixture framework, and we discuss analysis of stratified versions of classical models such as model Mb, Mh and other classes of models that are only possible to describe within the multinomial N-mixture framework.
Ultrasound: a subexploited tool for sample preparation in metabolomics.
Luque de Castro, M D; Delgado-Povedano, M M
2014-01-02
Metabolomics, one of the most recently emerged "omics", has taken advantage of ultrasound (US) to improve sample preparation (SP) steps. The metabolomics-US-assisted SP binomial has developed unevenly, depending on the area (plant or animal) and on the SP step. Plant metabolomics and US-assisted leaching have received the greatest attention (encompassing subdisciplines such as metallomics, xenometabolomics and, mainly, lipidomics), but liquid-liquid extraction and (bio)chemical reactions in metabolomics have also taken advantage of US energy. Clinical and animal samples have likewise benefited from US-assisted SP in metabolomics studies, but to a lesser extent. The main effects of US have been a shortening of the time required for the given step and/or an increase in its efficiency or availability for automation; nevertheless, attention paid to potential degradation caused by US has been scant or nil. Achievements and weak points of the metabolomics-US-assisted SP binomial are discussed, and possible solutions to the present shortcomings are outlined. Copyright © 2013 Elsevier B.V. All rights reserved.
One-sided truncated sequential t-test: application to natural resource sampling
Gary W. Fowler; William G. O' Regan
1974-01-01
A new procedure for constructing one-sided truncated sequential t-tests and its application to natural resource sampling are described. Monte Carlo procedures were used to develop a series of one-sided truncated sequential t-tests and the associated approximations to the operating characteristic and average sample number functions. Different truncation points and...
Selecting a distributional assumption for modelling relative densities of benthic macroinvertebrates
Gray, B.R.
2005-01-01
The selection of a distributional assumption suitable for modelling macroinvertebrate density data is typically challenging. Macroinvertebrate data often exhibit substantially larger variances than expected under a standard count assumption, that of the Poisson distribution. Such overdispersion may derive from multiple sources, including heterogeneity of habitat (historically and spatially), differing life histories for organisms collected within a single collection in space and time, and autocorrelation. Taken to extreme, heterogeneity of habitat may be argued to explain the frequent large proportions of zero observations in macroinvertebrate data. Sampling locations may consist of habitats defined qualitatively as either suitable or unsuitable. The former category may yield random or stochastic zeroes and the latter structural zeroes. Heterogeneity among counts may be accommodated by treating the count mean itself as a random variable, while extra zeroes may be accommodated using zero-modified count assumptions, including zero-inflated and two-stage (or hurdle) approaches. These and linear assumptions (following log- and square root-transformations) were evaluated using 9 years of mayfly density data from a 52 km, ninth-order reach of the Upper Mississippi River (n = 959). The data exhibited substantial overdispersion relative to that expected under a Poisson assumption (i.e. variance:mean ratio = 23 ≫ 1), and 43% of the sampling locations yielded zero mayflies. Based on the Akaike Information Criterion (AIC), count models were improved most by treating the count mean as a random variable (via a Poisson-gamma distributional assumption) and secondarily by zero modification (i.e. improvements in AIC values = 9184 units and 47-48 units, respectively). Zeroes were underestimated by the Poisson, log-transform and square root-transform models, slightly by the standard negative binomial model but not by the zero-modified models (61%, 24%, 32%, 7%, and 0%, respectively). However, the zero-modified Poisson models underestimated small counts (1 ≤ y ≤ 4) and overestimated intermediate counts (7 ≤ y ≤ 23). Counts greater than zero were estimated well by zero-modified negative binomial models, while counts greater than one were also estimated well by the standard negative binomial model. Based on AIC and percent zero estimation criteria, the two-stage and zero-inflated models performed similarly. The above inferences were largely confirmed when the models were used to predict values from a separate, evaluation data set (n = 110). An exception was that, using the evaluation data set, the standard negative binomial model appeared superior to its zero-modified counterparts using the AIC (but not percent zero criteria). This and other evidence suggest that a negative binomial distributional assumption should be routinely considered when modelling benthic macroinvertebrate data from low flow environments. Whether negative binomial models should themselves be routinely examined for extra zeroes requires, from a statistical perspective, more investigation. However, this question may best be answered by ecological arguments that may be specific to the sampled species and locations. © 2004 Elsevier B.V. All rights reserved.
An adaptive two-stage sequential design for sampling rare and clustered populations
Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.
2008-01-01
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
Ashley, Kevin; Applegate, Gregory T; Marcy, A Dale; Drake, Pamela L; Pierce, Paul A; Carabin, Nathalie; Demange, Martine
2009-02-01
Because toxicities may differ for Cr(VI) compounds of varying solubility, some countries and organizations have promulgated different occupational exposure limits (OELs) for soluble and insoluble hexavalent chromium (Cr(VI)) compounds, and analytical methods are needed to determine these species in workplace air samples. To address this need, international standard methods ASTM D6832 and ISO 16740 have been published that describe sequential extraction techniques for soluble and insoluble Cr(VI) in samples collected from occupational settings. However, no published performance data were previously available for these Cr(VI) sequential extraction procedures. In this work, the sequential extraction methods outlined in the relevant international standards were investigated. The procedures tested involved the use of either deionized water or an ammonium sulfate/ammonium hydroxide buffer solution to target soluble Cr(VI) species. This was followed by extraction in a sodium carbonate/sodium hydroxide buffer solution to dissolve insoluble Cr(VI) compounds. Three-step sequential extraction with (1) water, (2) sulfate buffer and (3) carbonate buffer was also investigated. Sequential extractions were carried out on spiked samples of soluble, sparingly soluble and insoluble Cr(VI) compounds, and analyses were then generally carried out by using the diphenylcarbazide method. Similar experiments were performed on paint pigment samples and on airborne particulate filter samples collected from stainless steel welding. Potential interferences from soluble and insoluble Cr(III) compounds, as well as from Fe(II), were investigated. Interferences from Cr(III) species were generally absent, while the presence of Fe(II) resulted in low Cr(VI) recoveries. Two-step sequential extraction of spiked samples with (first) either water or sulfate buffer, and then carbonate buffer, yielded quantitative recoveries of soluble Cr(VI) and insoluble Cr(VI), respectively. Three-step sequential extraction gave excessively high recoveries of soluble Cr(VI), low recoveries of sparingly soluble Cr(VI), and quantitative recoveries of insoluble Cr(VI). Experiments on paint pigment samples using two-step extraction with water and carbonate buffer yielded varying percentages of relative fractions of soluble and insoluble Cr(VI). Sequential extractions of stainless steel welding fume air filter samples demonstrated the predominance of soluble Cr(VI) compounds in such samples. The performance data obtained in this work support the Cr(VI) sequential extraction procedures described in the international standards.
Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.
2013-01-01
Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples
ERIC Educational Resources Information Center
Avetisyan, Marianna; Fox, Jean-Paul
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
Assessing Trauma, Substance Abuse, and Mental Health in a Sample of Homeless Men
ERIC Educational Resources Information Center
Kim, Mimi M.; Ford, Julian D.; Howard, Daniel L.; Bradford, Daniel W.
2010-01-01
This study examined the impact of physical and sexual trauma on a sample of 239 homeless men. Study participants completed a self-administered survey that collected data on demographics, exposure to psychological trauma, physical health and mental health problems, and substance use or misuse. Binomial logistic regression analyses were used to…
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects from different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. When compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is least likely to be rejected at an early stage of the trial. Finally, we show that adding a stopping-for-futility step to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
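The O'Brien-Fleming-type spending function referred to is, in its standard Lan-DeMets form, α(t) = 2 - 2Φ(z_{α/2}/√t) at information fraction t; the sketch below tabulates it, showing how little α is spent early (the conservatism noted above). The paper's Bayesian critical-value algorithms are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def obf_spending(t, alpha=0.05):
    """O'Brien-Fleming-type spending: alpha(t) = 2 - 2 * Phi(z_{alpha/2} / sqrt(t))."""
    z = norm.ppf(1.0 - alpha / 2.0)
    return 2.0 * (1.0 - norm.cdf(z / np.sqrt(t)))

for t in [0.25, 0.50, 0.75, 1.00]:      # cumulative alpha spent by each look
    print(t, round(obf_spending(t), 5))
```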
Distinguishing between Binomial, Hypergeometric and Negative Binomial Distributions
ERIC Educational Resources Information Center
Wroughton, Jacqueline; Cole, Tarah
2013-01-01
Recognizing the differences between three discrete distributions (Binomial, Hypergeometric and Negative Binomial) can be challenging for students. We present an activity designed to help students differentiate among these distributions. In addition, we present assessment results in the form of pre- and post-tests that were designed to assess the…
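The contrast between the three distributions is easy to demonstrate numerically; note scipy's conventions, which students often trip over (hypergeom counts successes when sampling without replacement; nbinom counts failures before a fixed number of successes). All numbers are illustrative.

```python
from scipy.stats import binom, hypergeom, nbinom

k = 3
# Binomial: 3 successes in 10 independent trials, success probability 0.3
print(binom.pmf(k, n=10, p=0.3))
# Hypergeometric: 3 successes drawing 10 without replacement from 50 items, 15 marked
print(hypergeom.pmf(k, M=50, n=15, N=10))
# Negative binomial: 3 failures observed before the 5th success, p = 0.3
print(nbinom.pmf(k, n=5, p=0.3))
```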
Library Book Circulation and the Beta-Binomial Distribution.
ERIC Educational Resources Information Center
Gelman, E.; Sichel, H. S.
1987-01-01
Argues that library book circulation is a binomial rather than a Poisson process, and that individual book popularities are continuous beta distributions. Three examples demonstrate the superiority of beta over negative binomial distribution, and it is suggested that a bivariate-binomial process would be helpful in predicting future book…
Distribution pattern of phthirapterans infesting certain common Indian birds.
Saxena, A K; Kumar, Sandeep; Gupta, Nidhi; Mitra, J D; Ali, S A; Srivastava, Roshni
2007-08-01
The prevalence and frequency distribution patterns of 10 phthirapteran species infesting house sparrows, Indian parakeets, common mynas, and white breasted kingfishers were recorded in the district of Rampur, India, during 2004-05. The sample mean abundances, mean intensities, ranges of infestation, variance to mean ratios, values of the exponent of the negative binomial distribution, and the indices of discrepancy were also computed. Frequency distribution patterns of all phthirapteran species were skewed, but the observed frequencies did not correspond to the negative binomial distribution. The adult-nymph ratios varied among species from 1:0.53 to 1:1.25, and the sex ratios of the different phthirapteran species ranged from 1:1.10 to 1:1.65 and were female biased.
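The negative binomial exponent k reported in such studies is commonly obtained by the moment estimator k = m²/(s² - m), with small k indicating aggregation; a sketch with invented per-host counts follows.

```python
import numpy as np

def nb_k_moment(counts):
    """Moment estimator of the negative binomial exponent: k = m^2 / (s^2 - m).
    Requires overdispersion (s^2 > m); small k indicates aggregation."""
    c = np.asarray(counts, dtype=float)
    m, s2 = c.mean(), c.var(ddof=1)
    return m * m / (s2 - m) if s2 > m else np.inf

lice = np.array([0, 0, 1, 0, 4, 12, 0, 2, 0, 7, 0, 1])   # hypothetical per-host counts
print(lice.var(ddof=1) / lice.mean(), nb_k_moment(lice))  # variance:mean ratio and k
```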
Accident prediction model for public highway-rail grade crossings.
Lu, Pan; Tolliver, Denver
2016-05-01
Considerable research has focused on roadway accident frequency analysis, but relatively little research has examined safety evaluation at highway-rail grade crossings. Highway-rail grade crossings are critical spatial locations of utmost importance for transportation safety because traffic crashes at highway-rail grade crossings are often catastrophic with serious consequences. The Poisson regression model has been employed to analyze vehicle accident frequency as a good starting point for many years. The most commonly applied variations of the Poisson model include the negative binomial and the zero-inflated Poisson. These models are used to deal with common crash data issues such as over-dispersion (sample variance is larger than the sample mean) and preponderance of zeros (low sample mean and small sample size). On rare occasions traffic crash data have been shown to be under-dispersed (sample variance is smaller than the sample mean) and traditional distributions such as the Poisson or negative binomial cannot handle under-dispersion well. The objective of this study is to investigate and compare various alternative highway-rail grade crossing accident frequency models that can handle the under-dispersion issue. The contributions of the paper are two-fold: (1) application of probability models to deal with under-dispersion issues and (2) obtaining insights regarding vehicle crashes at public highway-rail grade crossings. Copyright © 2016 Elsevier Ltd. All rights reserved.
Estimating relative risks for common outcome using PROC NLP.
Yu, Binbing; Wang, Zhuoqiao
2008-05-01
In cross-sectional or cohort studies with binary outcomes, it is biologically interpretable and of interest to estimate the relative risk or prevalence ratio, especially when the response rates are not rare. Several methods have been used to estimate the relative risk, among which the log-binomial models yield the maximum likelihood estimate (MLE) of the parameters. Because of restrictions on the parameter space, the log-binomial models often run into convergence problems. Some remedies, e.g., the Poisson and Cox regressions, have been proposed. However, these methods may give out-of-bound predicted response probabilities. In this paper, a new computation method using the SAS Nonlinear Programming (NLP) procedure is proposed to find the MLEs. The proposed NLP method was compared to the COPY method, a modified method to fit the log-binomial model. Issues in the implementation are discussed. For illustration, both methods were applied to data on the prevalence of microalbuminuria (micro-protein leakage into urine) for kidney disease patients from the Diabetes Control and Complications Trial. The sample SAS macro for calculating relative risk is provided in the appendix.
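PROC NLP is a SAS procedure; an equivalent constrained MLE can be sketched in Python with scipy, imposing Xβ ≤ 0 so that every fitted risk exp(x'β) stays at most 1. This mirrors the idea, not the paper's macro; data and coefficients are simulated.

```python
import numpy as np
from scipy.optimize import minimize

def fit_log_binomial(X, y):
    """Constrained MLE for log-binomial regression (log link, binomial likelihood).
    The constraint X @ beta <= 0 keeps every fitted risk exp(x'beta) <= 1."""
    def nll(beta):
        p = np.clip(np.exp(X @ beta), 1e-10, 1.0 - 1e-10)
        return -(y * np.log(p) + (1 - y) * np.log(1.0 - p)).sum()
    cons = {"type": "ineq", "fun": lambda b: -(X @ b)}    # elementwise -X@b >= 0
    x0 = np.r_[np.log(y.mean()), np.zeros(X.shape[1] - 1)]
    return minimize(nll, x0=x0, constraints=[cons], method="SLSQP").x

rng = np.random.default_rng(3)
x = rng.binomial(1, 0.5, 400)
X = np.column_stack([np.ones(400), x])
y = rng.binomial(1, np.exp(-1.2 + 0.5 * x))               # true RR = exp(0.5)
print(np.exp(fit_log_binomial(X, y)[1]))                  # estimated relative risk
```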
Estimation of Multinomial Probabilities.
1978-11-01
Johnson (1971) and Alam (1978) have shown that the maximum likelihood estimator is admissible with respect to the quadratic loss.
Martin, Julien; Royle, J. Andrew; MacKenzie, Darryl I.; Edwards, Holly H.; Kery, Marc; Gardner, Beth
2011-01-01
Summary 1. Binomial mixture models use repeated count data to estimate abundance. They are becoming increasingly popular because they provide a simple and cost-effective way to account for imperfect detection. However, these models assume that individuals are detected independently of each other. This assumption may often be violated in the field. For instance, manatees (Trichechus manatus latirostris) may surface in turbid water (i.e. become available for detection during aerial surveys) in a correlated manner (i.e. in groups). However, correlated behaviour, affecting the non-independence of individual detections, may also be relevant in other systems (e.g. correlated patterns of singing in birds and amphibians). 2. We extend binomial mixture models to account for correlated behaviour and therefore to account for non-independent detection of individuals. We simulated correlated behaviour using beta-binomial random variables. Our approach can be used to simultaneously estimate abundance, detection probability and a correlation parameter. 3. Fitting binomial mixture models to data that followed a beta-binomial distribution resulted in an overestimation of abundance even for moderate levels of correlation. In contrast, the beta-binomial mixture model performed considerably better in our simulation scenarios. We also present a goodness-of-fit procedure to evaluate the fit of beta-binomial mixture models. 4. We illustrate our approach by fitting both binomial and beta-binomial mixture models to aerial survey data of manatees in Florida. We found that the binomial mixture model did not fit the data, whereas there was no evidence of lack of fit for the beta-binomial mixture model. This example helps illustrate the importance of using simulations and assessing goodness-of-fit when analysing ecological data with N-mixture models. Indeed, both the simulations and the goodness-of-fit procedure highlighted the limitations of the standard binomial mixture model for aerial manatee surveys. 5. Overestimation of abundance by binomial mixture models owing to non-independent detections is problematic for ecological studies, but also for conservation. For example, in the case of endangered species, it could lead to inappropriate management decisions, such as downlisting. These issues will be increasingly relevant as more ecologists apply flexible N-mixture models to ecological data.
NASA Astrophysics Data System (ADS)
Žukovič, Milan; Hristopulos, Dionissios T.
2009-02-01
A current problem of practical significance is how to analyze large, spatially distributed, environmental data sets. The problem is more challenging for variables that follow non-Gaussian distributions. We show by means of numerical simulations that the spatial correlations between variables can be captured by interactions between 'spins'. The spins represent multilevel discretizations of environmental variables with respect to a number of pre-defined thresholds. The spatial dependence between the 'spins' is imposed by means of short-range interactions. We present two approaches, inspired by the Ising and Potts models, that generate conditional simulations of spatially distributed variables from samples with missing data. Currently, the sampling and simulation points are assumed to be at the nodes of a regular grid. The conditional simulations of the 'spin system' are forced to respect locally the sample values and the system statistics globally. The second constraint is enforced by minimizing a cost function representing the deviation between normalized correlation energies of the simulated and the sample distributions. In the approach based on the Nc-state Potts model, each point is assigned to one of Nc classes. The interactions involve all the points simultaneously. In the Ising model approach, a sequential simulation scheme is used: the discretization at each simulation level is binomial (i.e., ± 1). Information propagates from lower to higher levels as the simulation proceeds. We compare the two approaches in terms of their ability to reproduce the target statistics (e.g., the histogram and the variogram of the sample distribution), to predict data at unsampled locations, as well as in terms of their computational complexity. The comparison is based on a non-Gaussian data set (derived from a digital elevation model of the Walker Lake area, Nevada, USA). We discuss the impact of relevant simulation parameters, such as the domain size, the number of discretization levels, and the initial conditions.
Modeling avian abundance from replicated counts using binomial mixture models
Kery, Marc; Royle, J. Andrew; Schmid, Hans
2005-01-01
Abundance estimation in ecology is usually accomplished by capture–recapture, removal, or distance sampling methods. These may be hard to implement at large spatial scales. In contrast, binomial mixture models enable abundance estimation without individual identification, based simply on temporally and spatially replicated counts. Here, we evaluate mixture models using data from the national breeding bird monitoring program in Switzerland, where some 250 1-km2 quadrats are surveyed using the territory mapping method three times during each breeding season. We chose eight species with contrasting distribution (wide–narrow), abundance (high–low), and detectability (easy–difficult). Abundance was modeled as a random effect with a Poisson or negative binomial distribution, with mean affected by forest cover, elevation, and route length. Detectability was a logit-linear function of survey date, survey date-by-elevation, and sampling effort (time per transect unit). Resulting covariate effects and parameter estimates were consistent with expectations. Detectability per territory (for three surveys) ranged from 0.66 to 0.94 (mean 0.84) for easy species, and from 0.16 to 0.83 (mean 0.53) for difficult species, depended on survey effort for two easy and all four difficult species, and changed seasonally for three easy and three difficult species. Abundance was positively related to route length in three high-abundance and one low-abundance (one easy and three difficult) species, and increased with forest cover in five forest species, decreased for two nonforest species, and was unaffected for a generalist species. Abundance estimates under the most parsimonious mixture models were between 1.1 and 8.9 (median 1.8) times greater than estimates based on territory mapping; hence, three surveys were insufficient to detect all territories for each species. We conclude that binomial mixture models are an important new approach for estimating abundance corrected for detectability when only repeated-count data are available. Future developments envisioned include estimation of trend, occupancy, and total regional abundance.
Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Diego, Miotto; Seccia, Teresa M; Rossi, Gian Paolo
2018-02-01
The pulsatile secretion of adrenocortical hormones and a stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity and also the assessment of lateralization when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of the lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously bilaterally when starting AVS (t-15) and 15 min after (t0), with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L) created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with a sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time points. However, overall the lateralization index simultaneously determined at t0 provided a more accurate identification of APA than the simulated sequential lateralization index (R ⇒ L) (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of the lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15 and between the simultaneous t0 and sequential techniques was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-sides gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.
Meta-analysis of studies with bivariate binary outcomes: a marginal beta-binomial model approach
Chen, Yong; Hong, Chuan; Ning, Yang; Su, Xiao
2018-01-01
When conducting a meta-analysis of studies with bivariate binary outcomes, challenges arise when the within-study correlation and between-study heterogeneity should be taken into account. In this paper, we propose a marginal beta-binomial model for the meta-analysis of studies with binary outcomes. This model is based on the composite likelihood approach, and has several attractive features compared to the existing models such as bivariate generalized linear mixed model (Chu and Cole, 2006) and Sarmanov beta-binomial model (Chen et al., 2012). The advantages of the proposed marginal model include modeling the probabilities in the original scale, not requiring any transformation of probabilities or any link function, having closed-form expression of likelihood function, and no constraints on the correlation parameter. More importantly, since the marginal beta-binomial model is only based on the marginal distributions, it does not suffer from potential misspecification of the joint distribution of bivariate study-specific probabilities. Such misspecification is difficult to detect and can lead to biased inference using current methods. We compare the performance of the marginal beta-binomial model with the bivariate generalized linear mixed model and the Sarmanov beta-binomial model by simulation studies. Interestingly, the results show that the marginal beta-binomial model performs better than the Sarmanov beta-binomial model, whether or not the true model is Sarmanov beta-binomial, and the marginal beta-binomial model is more robust than the bivariate generalized linear mixed model under model misspecifications. Two meta-analyses of diagnostic accuracy studies and a meta-analysis of case-control studies are conducted for illustration. PMID:26303591
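As a concrete illustration of the marginal modelling idea, here is a toy composite likelihood in which each margin of the bivariate outcome is beta-binomial and the two marginal log-likelihoods are simply added. The data and the mean/overdispersion parameterisation are assumptions for the sketch, not the authors' implementation.

```python
# Sketch of the marginal (composite-likelihood) idea: model each margin
# of a bivariate binary outcome as beta-binomial and sum the marginal
# log-likelihoods. Hypothetical data; not the authors' code.
import numpy as np
from scipy.stats import betabinom
from scipy.optimize import minimize

# (events, n) per study for outcome 1 and outcome 2
y1 = np.array([12, 30, 7]);  n1 = np.array([40, 80, 25])
y2 = np.array([20, 50, 10]); n2 = np.array([40, 80, 25])

def nll(theta):
    # mean/overdispersion parameterisation: a = mu*phi, b = (1-mu)*phi
    mu1, mu2 = 1/(1+np.exp(-theta[0])), 1/(1+np.exp(-theta[1]))
    phi1, phi2 = np.exp(theta[2]), np.exp(theta[3])
    ll1 = betabinom.logpmf(y1, n1, mu1*phi1, (1-mu1)*phi1).sum()
    ll2 = betabinom.logpmf(y2, n2, mu2*phi2, (1-mu2)*phi2).sum()
    return -(ll1 + ll2)   # composite likelihood: marginals only

fit = minimize(nll, x0=np.zeros(4), method="Nelder-Mead")
print(fit.x)
```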
Van der Heyden, H; Dutilleul, P; Brodeur, L; Carisse, O
2014-06-01
Spatial distribution of single-nucleotide polymorphisms (SNPs) related to fungicide resistance was studied for Botrytis cinerea populations in vineyards and for B. squamosa populations in onion fields. Heterogeneity in this distribution was characterized by performing geostatistical analyses based on semivariograms and through the fitting of discrete probability distributions. Two SNPs known to be responsible for boscalid resistance (H272R and H272Y), both located on the B subunit of the succinate dehydrogenase gene, and one SNP known to be responsible for dicarboximide resistance (I365S) were chosen for B. cinerea in grape. For B. squamosa in onion, one SNP responsible for dicarboximide resistance (I365S homologous) was chosen. One onion field was sampled in 2009 and another one was sampled in 2010 for B. squamosa, and two vineyards were sampled in 2011 for B. cinerea, for a total of four sampled sites. Cluster sampling was carried out on a 10-by-10 grid, each of the 100 nodes being the center of a 10-by-10-m quadrat. In each quadrat, 10 samples were collected and analyzed by restriction fragment length polymorphism (RFLP) PCR or allele-specific PCR. Mean SNP incidence varied from 16 to 68%, with an overall mean incidence of 43%. In the geostatistical analyses, omnidirectional variograms showed spatial autocorrelation characterized by ranges of 1 to 21 m. Various levels of anisotropy were detected, however, with variograms computed in four directions (at 0°, 45°, 90°, and 135° from the within-row direction used as reference), indicating that spatial autocorrelation was prevalent or characterized by a longer range in one direction. For all eight data sets, the β-binomial distribution was found to fit the data better than the binomial distribution. This indicates local aggregation of fungicide resistance among sampling units, as supported by estimates of the parameter θ of the β-binomial distribution of 0.09 to 0.23 (overall median value = 0.20). On the basis of the observed spatial distribution patterns of SNP incidence, sampling curves were computed for different levels of reliability, emphasizing the importance of sample size for the detection of mutation incidence below the risk threshold for control failure.
Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates
Bartroff, Jay; Song, Jinlin
2014-01-01
This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and requires only arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm's (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed-sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
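To make the comparison baseline concrete, the sketch below implements the sequential Bonferroni comparator mentioned in the abstract (one Wald SPRT per stream at error rates α/m and β/m), not the sequential Holm procedure itself; the Bernoulli hypotheses and rates are illustrative.

```python
# Sequential Bonferroni baseline: each of m data streams gets its own
# Wald SPRT run at error rates alpha/m and beta/m, so the type I and
# type II FWERs are controlled by a union bound. (The paper's
# sequential Holm procedure is more efficient than this.)
import numpy as np

def sprt_bernoulli(x, p0, p1, alpha, beta):
    """Return 'H1', 'H0' or 'undecided' for a stream of 0/1 data."""
    upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))
    llr = 0.0
    for xi in x:
        llr += np.log((p1 if xi else 1 - p1) / (p0 if xi else 1 - p0))
        if llr >= upper:
            return "H1"
        if llr <= lower:
            return "H0"
    return "undecided"

rng = np.random.default_rng(0)
m, alpha, beta = 5, 0.05, 0.2
truths = [0.5, 0.5, 0.7, 0.7, 0.7]   # stream-wise true success rates
for k, p in enumerate(truths):
    data = rng.binomial(1, p, size=2000)
    print(k, sprt_bernoulli(data, p0=0.5, p1=0.7, alpha=alpha/m, beta=beta/m))
```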
Spatial Dependence and Sampling of Phytoseiid Populations on Hass Avocados in Southern California.
Lara, Jesús R; Amrich, Ruth; Saremi, Naseem T; Hoddle, Mark S
2016-04-22
Research on phytoseiid mites has been critical for developing an effective biocontrol strategy for suppressing Oligonychus perseae Tuttle, Baker, and Abatiello (Acari: Tetranychidae) in California avocado orchards. However, basic understanding of the spatial ecology of natural populations of phytoseiids in relation to O. perseae infestations and the validation of research-based strategies for assessing densities of these predators has been limited. To address these shortcomings, cross-sectional and longitudinal observations consisting of >3,000 phytoseiids and 500,000 O. perseae counted on 11,341 leaves were collected across 10 avocado orchards during a 10-yr period. Subsets of these data were analyzed statistically to characterize the spatial distribution of phytoseiids in avocado orchards and to evaluate the merits of developing binomial and enumerative sampling strategies for these predators. Spatial correlation of phytoseiids between trees was detected at one site, and a strong association of phytoseiids with elevated O. perseae densities was detected at four sites. Sampling simulations revealed that enumeration-based sampling performed better than binomial sampling for estimating phytoseiid densities. The ecological implications of these findings and potential for developing a custom sampling plan to estimate densities of phytoseiids inhabiting sampled trees in avocado orchards in California are discussed. © The Authors 2016. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Design and analysis of three-arm trials with negative binomially distributed endpoints.
Mütze, Tobias; Munk, Axel; Friede, Tim
2016-02-20
A three-arm clinical trial design with an experimental treatment, an active control, and a placebo control, commonly referred to as the gold standard design, enables testing of non-inferiority or superiority of the experimental treatment compared with the active control. In this paper, we propose methods for designing and analyzing three-arm trials with negative binomially distributed endpoints. In particular, we develop a Wald-type test with a restricted maximum-likelihood variance estimator for testing non-inferiority or superiority. For this test, sample size and power formulas as well as optimal sample size allocations are derived. The performance of the proposed test is assessed in an extensive simulation study with regard to type I error rate, power, sample size, and sample size allocation. For the purpose of comparison, Wald-type statistics with a sample variance estimator and an unrestricted maximum-likelihood estimator are included in the simulation study. We found that the proposed Wald-type test with a restricted variance estimator performed well across the considered scenarios and is therefore recommended for application in clinical trials. The methods proposed are motivated and illustrated by a recent clinical trial in multiple sclerosis. The R package ThreeArmedTrials, which implements the methods discussed in this paper, is available on CRAN. Copyright © 2015 John Wiley & Sons, Ltd.
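The following sketch shows the main ingredients of such a Wald-type test in a stripped-down two-arm version, with an unrestricted ML fit and a delta-method variance; the paper's actual proposal uses a restricted ML variance estimator within the three-arm gold standard design, so treat this only as orientation.

```python
# Simplified Wald-type non-inferiority test for negative binomial rates
# in two arms (experimental vs reference). Illustrative only: the
# paper's test uses a restricted ML variance estimator and a third
# placebo arm, both omitted here.
import numpy as np
from scipy.stats import nbinom, norm
from scipy.optimize import minimize

def fit_nb(y):
    """Unrestricted MLE of (mean mu, dispersion k) for NB counts."""
    def nll(theta):
        mu, k = np.exp(theta)
        return -nbinom.logpmf(y, k, k / (k + mu)).sum()
    res = minimize(nll, x0=[np.log(y.mean() + 0.1), 0.0], method="Nelder-Mead")
    return np.exp(res.x)

rng = np.random.default_rng(42)
k_true = 2.0
yE = rng.negative_binomial(k_true, k_true / (k_true + 0.8), 150)  # mean 0.8
yR = rng.negative_binomial(k_true, k_true / (k_true + 1.0), 150)  # mean 1.0

(muE, kE), (muR, kR) = fit_nb(yE), fit_nb(yR)
margin = 1.3   # non-inferiority margin on the rate ratio (assumption)
# delta-method variance of log(mu_hat): (1 + mu/k) / (n * mu)
se = np.sqrt((1 + muE / kE) / (len(yE) * muE) +
             (1 + muR / kR) / (len(yR) * muR))
z = (np.log(muE / muR) - np.log(margin)) / se
print(f"Wald z = {z:.2f}, one-sided p = {norm.cdf(z):.4f}")
```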
[Evaluation of estimation of prevalence ratio using Bayesian log-binomial regression model].
Gao, W L; Lin, H; Liu, X N; Ren, X W; Li, J S; Shen, X P; Zhu, S L
2017-03-10
To evaluate estimation of the prevalence ratio (PR) with a Bayesian log-binomial regression model, we estimated the PR of medical care-seeking prevalence relative to caregivers' recognition of risk signs of diarrhea in their infants, using a Bayesian log-binomial regression model in OpenBUGS. The results showed that caregivers' recognition of infants' risk signs of diarrhea was associated with a significant 13% increase in medical care-seeking. We also compared the point and interval estimates of the PR, and the convergence of three models (model 1: not adjusting for covariates; model 2: adjusting for duration of caregivers' education; model 3: adjusting additionally for distance between village and township and child age in months), between the Bayesian and the conventional log-binomial regression model. All three Bayesian log-binomial regression models converged, with estimated PRs of 1.130 (95% CI: 1.005-1.265), 1.128 (95% CI: 1.001-1.264) and 1.132 (95% CI: 1.004-1.267), respectively. Conventional log-binomial regression models 1 and 2 converged, with PRs of 1.130 (95% CI: 1.055-1.206) and 1.126 (95% CI: 1.051-1.203), respectively, but model 3 failed to converge, so the COPY method was used to estimate its PR, which was 1.125 (95% CI: 1.051-1.200). The point and interval estimates of the PRs from the three Bayesian log-binomial regression models differed only slightly from those of the conventional log-binomial regression models, showing good consistency in estimating the PR. Therefore, the Bayesian log-binomial regression model can estimate the PR effectively with fewer convergence problems, and offers advantages in application over the conventional log-binomial regression model.
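For orientation, the frequentist counterpart of this model is a binomial GLM with a log link, so that exponentiated coefficients are prevalence ratios; a sketch on simulated data (all numbers illustrative, not from the study) follows.

```python
# Frequentist log-binomial regression for a prevalence ratio (PR):
# a binomial GLM with a log link, so exp(coef) is the PR. The paper
# fits the Bayesian analogue in OpenBUGS; data here are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 500
recog = rng.binomial(1, 0.5, n)           # caregiver recognises risk signs
p = np.clip(0.40 * 1.13 ** recog, 0, 1)   # true PR = 1.13 (assumption)
care = rng.binomial(1, p)                 # medical care-seeking outcome

X = sm.add_constant(recog)
model = sm.GLM(care, X,
               family=sm.families.Binomial(link=sm.families.links.Log()))
res = model.fit()
pr = np.exp(res.params[1])
ci = np.exp(res.conf_int()[1])
print(f"PR = {pr:.3f}, 95% CI {ci[0]:.3f}-{ci[1]:.3f}")
```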
Dynamic equilibrium of reconstituting hematopoietic stem cell populations.
O'Quigley, John
2010-12-01
Clonal dominance in hematopoietic stem cell populations is an important question of interest but not one we can directly answer. Any estimates are based on indirect measurement. For marked populations, we can equate empirical and theoretical moments for binomial sampling; in particular, we can use the well-known formula for the sampling variation of a binomial proportion. The empirical variance itself cannot always be reliably estimated and some caution is needed. We describe the difficulties here and identify ready solutions which only require appropriate use of variance-stabilizing transformations. From these we obtain estimators for the steady state, or dynamic equilibrium, of the number of hematopoietic stem cells involved in repopulating the marrow. The calculations themselves are not too involved. We give the distribution theory for the estimator as well as simple approximations for practical application. As an illustration, we rework data recently gathered to address the question as to whether or not reconstitution of marrow grafts in the clinical setting might be considered to be oligoclonal.
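A minimal numerical sketch of the moment-equating idea, assuming repeated marked-cell proportions behave like binomial samples of size N: the arcsine square-root transformation stabilizes the variance at roughly 1/(4N), so the empirical variance of the transformed proportions yields an estimate of N. All numbers are illustrative.

```python
# Moment-equating sketch: repeated marked-cell proportions treated as
# binomial samples from N stem cells; Var(arcsin sqrt(p_hat)) ~ 1/(4N).
import numpy as np

rng = np.random.default_rng(3)
N_true, p_marked, n_obs = 200, 0.3, 25
p_hat = rng.binomial(N_true, p_marked, n_obs) / N_true  # repeated samples

# variance-stabilising (arcsine square-root) transformation
z = np.arcsin(np.sqrt(p_hat))
N_hat = 1.0 / (4.0 * z.var(ddof=1))
print(f"estimated repopulating cell number: {N_hat:.0f} (true {N_true})")
```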
Orphan therapies: making best use of postmarket data.
Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling
2014-08-01
Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
Koopmeiners, Joseph S; Feng, Ziding
2011-01-01
The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves.
Studying the Binomial Distribution Using LabVIEW
ERIC Educational Resources Information Center
George, Danielle J.; Hammer, Nathan I.
2015-01-01
This undergraduate physical chemistry laboratory exercise introduces students to the study of probability distributions both experimentally and using computer simulations. Students perform the classic coin toss experiment individually and then pool all of their data together to study the effect of experimental sample size on the binomial…
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently attribute newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on design by changing the prior distributions. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when total sample size is fixed, the proposed design can achieve greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
M-Bonomial Coefficients and Their Identities
ERIC Educational Resources Information Center
Asiru, Muniru A.
2010-01-01
In this note, we introduce M-bonomial coefficients (or M-bonacci binomial coefficients). These are similar to the binomial and the Fibonomial (or Fibonacci-binomial) coefficients and can be displayed in a triangle similar to Pascal's triangle, from which some identities become obvious.
Kingsley, I.S.
1987-01-06
A process and apparatus are disclosed for the separation of complex mixtures of carbonaceous material by sequential elution with successively stronger solvents. In the process, a column containing glass beads is maintained in a fluidized state by a rapidly flowing stream of a weak solvent, and the sample is injected into this flowing stream such that a portion of the sample is dissolved therein and the remainder of the sample is precipitated therein and collected as a uniform deposit on the glass beads. Successively stronger solvents are then passed through the column to sequentially elute less soluble materials. 1 fig.
Performance and structure of single-mode bosonic codes
NASA Astrophysics Data System (ADS)
Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang
2018-03-01
The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.
A boundary-optimized rejection region test for the two-sample binomial problem.
Gabriel, Erin E; Nason, Martha; Fay, Michael P; Follmann, Dean A
2018-03-30
Testing the equality of 2 proportions for a control group versus a treatment group is a well-researched statistical problem. In some settings, there may be strong historical data that allow one to reliably expect that the control proportion is one, or nearly so. While one-sample tests or comparisons to historical controls could be used, neither can rigorously control the type I error rate in the event the true control rate changes. In this work, we propose an unconditional exact test that exploits the historical information while controlling the type I error rate. We sequentially construct a rejection region by first maximizing the rejection region in the space where all controls have an event, subject to the constraint that our type I error rate does not exceed α for any true event rate; then with any remaining α we maximize the additional rejection region in the space where one control avoids the event, and so on. When the true control event rate is one, our test is the most powerful nonrandomized test for all points in the alternative space. When the true control event rate is nearly one, we demonstrate that our test has equal or higher mean power, averaging over the alternative space, than a variety of well-known tests. For the comparison of 4 controls and 4 treated subjects, our proposed test has higher power than all comparator tests. We demonstrate the properties of our proposed test by simulation and use our method to design a malaria vaccine trial. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
Serra, Gerardo V.; Porta, Norma C. La; Avalos, Susana; Mazzuferi, Vilma
2013-01-01
The alfalfa caterpillar, Colias lesbia (Fabricius) (Lepidoptera: Pieridae), is a major pest of alfalfa, Medicago sativa L. (Fabales: Fabaceae), crops in Argentina. Its management is based mainly on chemical control of larvae whenever the larvae exceed the action threshold. To develop and validate fixed-precision sequential sampling plans, an intensive sampling programme for C. lesbia eggs was carried out in two alfalfa plots located in the Province of Córdoba, Argentina, from 1999 to 2002. Using Resampling for Validation of Sampling Plans software, 12 additional independent data sets were used to validate the sequential sampling plans at precision levels of 0.10 and 0.25 (SE/mean). For a range of mean densities of 0.10 to 8.35 eggs/sample, average sample sizes of only 27 and 26 sample units were required to achieve a desired precision level of 0.25 for the sampling plans of Green and Kuno, respectively. As the precision level was increased to 0.10, average sample sizes increased to 161 and 157 sample units for the sampling plans of Green and Kuno, respectively. We recommend using Green's sequential sampling plan because it is less sensitive to changes in egg density. These sampling plans are a valuable tool for researchers to study population dynamics and to evaluate integrated pest management strategies. PMID:23909840
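The generic logic of a fixed-precision sequential plan can be sketched as below: sample until the running SE-to-mean ratio reaches the target precision D. Green's and Kuno's plans replace the running SE with stop lines derived from Taylor's power law and Iwao's regression, respectively, so this loop is only a schematic stand-in, not either published plan.

```python
# Generic fixed-precision sequential sampling loop: keep sampling until
# the running standard error divided by the running mean falls to the
# target precision D (SE/mean).
import numpy as np

def sequential_sample(draw, D=0.25, n_min=10, n_max=1000):
    xs = []
    while len(xs) < n_max:
        xs.append(draw())
        n, m = len(xs), np.mean(xs)
        if n >= n_min and m > 0:
            se = np.std(xs, ddof=1) / np.sqrt(n)
            if se / m <= D:
                break
    return np.mean(xs), len(xs)

rng = np.random.default_rng(11)
# aggregated egg counts per sample unit, negative binomial with mean 3
mean_eggs, n_used = sequential_sample(lambda: rng.negative_binomial(2, 0.4))
print(f"estimated density {mean_eggs:.2f} eggs/sample after {n_used} units")
```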
Problems on Divisibility of Binomial Coefficients
ERIC Educational Resources Information Center
Osler, Thomas J.; Smoak, James
2004-01-01
Twelve unusual problems involving divisibility of the binomial coefficients are presented in this article. The problems are listed in "The Problems" section. All twelve problems have short solutions which are listed in "The Solutions" section. These problems could be assigned to students in any course in which the binomial theorem and Pascal's…
Sequential air sampler system: its use by the Virginia Department of Highways & Transportation.
DOT National Transportation Integrated Search
1975-01-01
The Department of Highways & Transportation needs an economical and efficient air quality sampling system for meeting requirements on air monitoring for proposed projects located in critical areas. Two sequential air sampling systems, the ERAI and th...
Non-stochastic sampling error in quantal analyses for Campylobacter species on poultry products
USDA-ARS?s Scientific Manuscript database
Using primers and fluorescent probes specific for the most common foodborne Campylobacter species (C. jejuni = Cj and C. coli = Cc), we developed a multiplex, most probable number (MPN) assay using quantitative PCR (qPCR) as the determinant for binomial detection: the number p of positives out of n = 6 ...
CUMBIN - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, CUMBIN, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), can be used independently of one another. CUMBIN can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CUMBIN calculates the probability that a system of n components has at least k operating if the probability that any one is operating is p and the components are independent. Equivalently, this is the reliability of a k-out-of-n system having independent components with common reliability p. CUMBIN can evaluate the incomplete beta distribution for two positive integer arguments. CUMBIN can also evaluate the cumulative F distribution and the negative binomial distribution, and can determine the sample size in a test design. CUMBIN is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. The CUMBIN program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CUMBIN was developed in 1988.
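For comparison, the core quantity CUMBIN computes is available today as a one-liner; the sketch below evaluates the reliability of a k-out-of-n system of independent components with common reliability p.

```python
# k-out-of-n system reliability, the quantity CUMBIN computes:
# P(at least k of n independent components operate), each w.p. p.
from scipy.stats import binom

n, k, p = 10, 8, 0.95
reliability = binom.sf(k - 1, n, p)   # P(X >= k) for X ~ Binomial(n, p)
print(f"{reliability:.6f}")
```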
Koopmeiners, Joseph S.; Feng, Ziding
2015-01-01
Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value (NPV) curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence cannot be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known, in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curves. The small sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180
Kiermeier, Andreas; Mellor, Glen; Barlow, Robert; Jenson, Ian
2011-04-01
The aims of this work were to determine the distribution and concentration of Escherichia coli O157 in lots of beef destined for grinding (manufacturing beef) that failed to meet Australian requirements for export, to use these data to better understand the performance of sampling plans based on the binomial distribution, and to consider alternative approaches for evaluating sampling plans. For each of five lots from which E. coli O157 had been detected, 900 samples from the external carcass surface were tested. E. coli O157 was not detected in three lots, whereas in two lots E. coli O157 was detected in 2 and 74 samples. For lots in which E. coli O157 was not detected in the present study, the E. coli O157 level was estimated to be <12 cells per 27.2-kg carton. For the most contaminated carton, the total number of E. coli O157 cells was estimated to be 813. In the two lots in which E. coli O157 was detected, the pathogen was detected in 1 of 12 and 2 of 12 cartons. The use of acceptance sampling plans based on a binomial distribution can provide a falsely optimistic view of the value of sampling as a control measure when applied to assessment of E. coli O157 contamination in manufacturing beef. Alternative approaches to understanding sampling plans, which do not assume homogeneous contamination throughout the lot, appear more realistic. These results indicate that despite the application of stringent sampling plans, sampling and testing approaches are inefficient for controlling microbiological quality.
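A toy simulation of the paper's central point, under purely illustrative numbers: for a fixed number of test samples, clustering the contamination in one carton can make detection less likely than the homogeneous binomial calculation suggests.

```python
# Toy contrast between the binomial (homogeneous) detection probability
# and detection when the same lot's contamination sits in one carton.
# All probabilities are illustrative assumptions, not the study's data.
import numpy as np

rng = np.random.default_rng(5)
n_cartons, n_samples, sims = 12, 5, 100_000
p_homog = 0.10   # per-sample positive probability under homogeneity

detect_homog = 1 - (1 - p_homog) ** n_samples   # binomial assumption

hits = 0
for _ in range(sims):
    dirty = rng.integers(n_cartons)              # one contaminated carton
    sampled = rng.choice(n_cartons, n_samples, replace=False)
    # assume a sample from the dirty carton tests positive w.p. 0.9
    if dirty in sampled and rng.random() < 0.9:
        hits += 1
print(f"homogeneous: {detect_homog:.3f}, clustered: {hits/sims:.3f}")
```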
A Three-Parameter Generalisation of the Beta-Binomial Distribution with Applications
1987-07-01
Distribution-free Inference of Zero-inflated Binomial Data for Longitudinal Studies.
He, H; Wang, W J; Hu, J; Gallop, R; Crits-Christoph, P; Xia, Y L
2015-10-01
Count responses with structural zeros are very common in medical and psychosocial research, especially in alcohol and HIV research, and the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) models are widely used for modeling such outcomes. However, because alcohol drinking outcomes such as days of drinking are counts within a given period, their distributions are bounded above by an upper limit (the total days in the period) and thus inherently follow a binomial or zero-inflated binomial (ZIB) distribution, rather than a Poisson or zero-inflated Poisson distribution, in the presence of structural zeros. In this paper, we develop a new semiparametric approach for modeling zero-inflated binomial (ZIB)-like count responses for cross-sectional as well as longitudinal data. We illustrate this approach with both simulated and real study data.
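For concreteness, a fully parametric ZIB log-likelihood (not the distribution-free estimator proposed in the paper) can be written as follows, with a structural-zero probability rho and a binomial component for, e.g., drinking days out of m observation days; the data are simulated.

```python
# Parametric zero-inflated binomial (ZIB) log-likelihood sketch:
# P(y=0) = rho + (1-rho)*Binom(0; m, p), P(y=k>0) = (1-rho)*Binom(k; m, p).
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize

rng = np.random.default_rng(9)
m, n = 30, 400                           # days in window, subjects
structural_zero = rng.random(n) < 0.35   # e.g. abstainers
y = np.where(structural_zero, 0, rng.binomial(m, 0.25, n))

def nll(theta):
    rho = 1/(1+np.exp(-theta[0]))        # P(structural zero)
    p = 1/(1+np.exp(-theta[1]))          # binomial success probability
    pmf0 = rho + (1-rho) * binom.pmf(0, m, p)
    ll = np.where(y == 0, np.log(pmf0),
                  np.log1p(-rho) + binom.logpmf(y, m, p))
    return -ll.sum()

fit = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")
rho_hat, p_hat = 1/(1+np.exp(-fit.x[0])), 1/(1+np.exp(-fit.x[1]))
print(f"rho_hat={rho_hat:.2f}, p_hat={p_hat:.2f}")
```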
NASA Astrophysics Data System (ADS)
Brenner, Tom; Chen, Johnny; Stait-Gardner, Tim; Zheng, Gang; Matsukawa, Shingo; Price, William S.
2018-03-01
A new family of binomial-like inversion sequences, named jump-and-return sandwiches (JRS), has been developed by inserting a binomial-like sequence into a standard jump-and-return sequence, discovered through use of a stochastic Genetic Algorithm optimisation. Compared to currently used binomial-like inversion sequences (e.g., 3-9-19 and W5), the new sequences afford wider inversion bands and narrower non-inversion bands with an equal number of pulses. As an example, two jump-and-return sandwich 10-pulse sequences achieved 95% inversion at offsets corresponding to 9.4% and 10.3% of the non-inversion band spacing, compared to 14.7% for the binomial-like W5 inversion sequence, i.e., they afforded non-inversion bands about two thirds the width of the W5 non-inversion band.
de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M
2018-04-01
Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
Revealing Word Order: Using Serial Position in Binomials to Predict Properties of the Speaker
ERIC Educational Resources Information Center
Iliev, Rumen; Smirnova, Anastasia
2016-01-01
Three studies test the link between word order in binomials and psychological and demographic characteristics of a speaker. While linguists have already suggested that psychological, cultural and societal factors are important in choosing word order in binomials, the vast majority of relevant research was focused on general factors and on broadly…
DRME: Count-based differential RNA methylation analysis at small sample size scenario.
Liu, Lian; Zhang, Shao-Wu; Gao, Fan; Zhang, Yixin; Huang, Yufei; Chen, Runsheng; Meng, Jia
2016-04-15
Differential methylation, which concerns difference in the degree of epigenetic regulation via methylation between two conditions, has been formulated as a beta or beta-binomial distribution to address the within-group biological variability in sequencing data. However, a beta or beta-binomial model is usually difficult to infer in small-sample scenarios with discrete read counts in sequencing data. On the other hand, as an emerging research field, RNA methylation has drawn more and more attention recently, and the differential analysis of RNA methylation is significantly different from that of DNA methylation due to the impact of transcriptional regulation. We developed DRME to better address the differential RNA methylation problem. The proposed model can effectively describe within-group biological variability in small-sample scenarios and handles the impact of transcriptional regulation on RNA methylation. We tested the newly developed DRME algorithm on simulated data and 4 MeRIP-Seq case-control studies and compared it with Fisher's exact test. It is in principle widely applicable to several other RNA-related data types as well, including RNA bisulfite sequencing and PAR-CLIP. The code together with a MeRIP-Seq dataset is available online (https://github.com/lzcyzm/DRME) for evaluation and reproduction of the figures shown in this article. Copyright © 2016 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Sullins, Walter L.
Five-hundred dichotomously scored response patterns were generated with sequentially independent (SI) items and 500 with dependent (SD) items for each of thirty-six combinations of sampling parameters (i.e., three test lengths, three sample sizes, and four item difficulty distributions). KR-20, KR-21, and Split-Half (S-H) reliabilities were…
An analytical framework for estimating aquatic species density from environmental DNA
Chambert, Thierry; Pilliod, David S.; Goldberg, Caren S.; Doi, Hideyuki; Takahara, Teruhiko
2018-01-01
Environmental DNA (eDNA) analysis of water samples is on the brink of becoming a standard monitoring method for aquatic species. This method has improved detection rates over conventional survey methods and thus has demonstrated effectiveness for estimation of site occupancy and species distribution. The frontier of eDNA applications, however, is to infer species density. Building upon previous studies, we present and assess a modeling approach that aims at inferring animal density from eDNA. The modeling combines eDNA and animal count data from a subset of sites to estimate species density (and associated uncertainties) at other sites where only eDNA data are available. As a proof of concept, we first perform a cross-validation study using experimental data on carp in mesocosms. In these data, fish densities are known without error, which allows us to test the performance of the method with known data. We then evaluate the model using field data from a study on a stream salamander species to assess the potential of this method to work in natural settings, where density can never be known with absolute certainty. Two alternative distributions (Normal and Negative Binomial) to model variability in eDNA concentration data are assessed. Assessment based on the proof of concept data (carp) revealed that the Negative Binomial model provided much more accurate estimates than the model based on a Normal distribution, likely because eDNA data tend to be overdispersed. Greater imprecision was found when we applied the method to the field data, but the Negative Binomial model still provided useful density estimates. We call for further model development in this direction, as well as further research targeted at sampling design optimization. It will be important to assess these approaches on a broad range of study systems.
De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos
2014-06-01
Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
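The Poisson step can be illustrated in a few lines: if integrated templates are distributed across replicates as Poisson(lambda), a replicate is negative with probability exp(-lambda), so lambda is estimated from the negative fraction, and a binomial (Clopper-Pearson) confidence interval for that fraction propagates directly to lambda. The replicate counts below are hypothetical.

```python
# Poisson-based quantification from binomial positive/negative data:
# lambda_hat = -ln(negatives/total), CI via Clopper-Pearson on the
# negative fraction. Hypothetical counts, not patient data.
import numpy as np
from scipy.stats import beta

def poisson_quant(positives, total=42, conf=0.95):
    assert 0 < positives < total
    neg = total - positives
    lam = -np.log(neg / total)
    a = (1 - conf) / 2
    lo_frac = beta.ppf(a, neg, positives + 1)      # CP lower bound on neg fraction
    hi_frac = beta.ppf(1 - a, neg + 1, positives)  # CP upper bound
    return lam, -np.log(hi_frac), -np.log(lo_frac)

lam, lo, hi = poisson_quant(positives=17)
print(f"copies/replicate: {lam:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```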
Increasing efficiency of preclinical research by group sequential designs
Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich
2017-01-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulation of data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to the saving of resources of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness in this research domain. PMID:28282371
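A toy version of such a simulation is sketched below: a two-look design with a Pocock-type boundary (the textbook constant 2.178 for two looks at overall two-sided alpha = 0.05 is assumed), recording the average number of experimental units actually used when the alternative d = 1 is true.

```python
# Two-look group sequential simulation with a Pocock-type boundary
# (|z| > 2.178 at each look, an assumed textbook constant for K = 2).
import numpy as np

rng = np.random.default_rng(13)
d, n, c = 1.0, 9, 2.178          # effect size, per-group size per stage
used, rejected = [], 0
reps = 20_000
for _ in range(reps):
    xa, xb = rng.normal(d, 1, n), rng.normal(0, 1, n)
    z1 = (xa.mean() - xb.mean()) / np.sqrt(2 / n)
    if abs(z1) > c:
        used.append(2 * n); rejected += 1
        continue
    xa2, xb2 = rng.normal(d, 1, n), rng.normal(0, 1, n)
    z2 = ((np.r_[xa, xa2].mean() - np.r_[xb, xb2].mean())
          / np.sqrt(2 / (2 * n)))
    used.append(4 * n); rejected += abs(z2) > c
print(f"power={rejected/reps:.2f}, mean N={np.mean(used):.1f} of {4*n}")
```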
NASA Astrophysics Data System (ADS)
Arneodo, M.; Arvidson, A.; Aubert, J. J.; Badełek, B.; Beaufays, J.; Bee, C. P.; Benchouk, C.; Berghoff, G.; Bird, I.; Blum, D.; Böhm, E.; de Bouard, X.; Brasse, F. W.; Braun, H.; Broll, C.; Brown, S.; Brück, H.; Calen, H.; Chima, J. S.; Ciborowski, J.; Clifft, R.; Coignet, G.; Combley, F.; Coughlan, J.; D'Agostini, G.; Dahlgren, S.; Dengler, F.; Derado, I.; Dreyer, T.; Drees, J.; Düren, M.; Eckardt, V.; Edwards, A.; Edwards, M.; Ernst, T.; Eszes, G.; Favier, J.; Ferrero, M. I.; Figiel, J.; Flauger, W.; Foster, J.; Ftáčnik, J.; Gabathuler, E.; Gajewski, J.; Gamet, R.; Gayler, J.; Geddes, N.; Grafström, P.; Grard, F.; Haas, J.; Hagberg, E.; Hasert, F. J.; Hayman, P.; Heusse, P.; Jaffré, M.; Jachołkowska, A.; Janata, F.; Jancsó, G.; Johnson, A. S.; Kabuss, E. M.; Kellner, G.; Korbel, V.; Krüger, J.; Kullander, S.; Landgraf, U.; Lanske, D.; Loken, J.; Long, K.; Maire, M.; Malecki, P.; Manz, A.; Maselli, S.; Mohr, W.; Montanet, F.; Montgomery, H. E.; Nagy, E.; Nassalski, J.; Norton, P. R.; Oakham, F. G.; Osborne, A. M.; Pascaud, C.; Pawlik, B.; Payre, P.; Peroni, C.; Peschel, H.; Pessard, H.; Pettinghale, J.; Pietrzyk, B.; Pietrzyk, U.; Pönsgen, B.; Pötsch, M.; Renton, P.; Ribarics, P.; Rith, K.; Rondio, E.; Sandacz, A.; Scheer, M.; Schlagböhmer, A.; Schiemann, H.; Schmitz, N.; Schneegans, M.; Schneider, A.; Scholz, M.; Schröder, T.; Schultze, K.; Sloan, T.; Stier, H. E.; Studt, M.; Taylor, G. N.; Thénard, J. M.; Thompson, J. C.; de La Torre, A.; Toth, J.; Urban, L.; Urban, L.; Wallucks, W.; Whalley, M.; Wheeler, S.; Williams, W. S. C.; Wimpenny, S. J.; Windmolders, R.; Wolf, G.
1987-09-01
The multiplicity distributions of charged hadrons produced in deep inelastic muon-proton scattering at 280 GeV are analysed in various rapidity intervals, as a function of the total hadronic centre of mass energy W ranging from 4 to 20 GeV. Multiplicity distributions for the backward and forward hemispheres are also analysed separately. The data can be well parameterized by binomial distributions, extending their range of applicability to the case of lepton-proton scattering. The energy and the rapidity dependence of the parameters is presented and a smooth transition from the negative binomial distribution via Poissonian to the ordinary binomial is observed.
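For reference, the negative binomial multiplicity distribution in the (n̄, k) parameterisation used in this literature can be evaluated as below; it tends to a Poisson as k goes to infinity, and the ordinary binomial corresponds formally to negative integer k. The parameter values are illustrative.

```python
# Negative binomial multiplicity distribution:
# P(n) = C(n+k-1, n) * (nbar/(nbar+k))^n * (k/(nbar+k))^k
import numpy as np
from scipy.special import gammaln

def nbd_pmf(n, nbar, k):
    n = np.asarray(n, dtype=float)
    return np.exp(gammaln(n + k) - gammaln(k) - gammaln(n + 1)
                  + n * np.log(nbar / (nbar + k))
                  + k * np.log(k / (nbar + k)))

ns = np.arange(0, 61)
print(nbd_pmf(ns, nbar=8.0, k=3.5).sum())   # ~1.0 sanity check
```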
Censored Hurdle Negative Binomial Regression (Case Study: Neonatorum Tetanus Case in Indonesia)
NASA Astrophysics Data System (ADS)
Yuli Rusdiana, Riza; Zain, Ismaini; Wulan Purnami, Santi
2017-06-01
Hurdle negative binomial regression is a method for discrete dependent variables with excess zeros and under- or overdispersion. It uses a two-part approach: the first part, the zero hurdle model, models the zero outcomes, while the second part, a truncated negative binomial model, models the positive (non-negative integer) outcomes. In some applications the dependent variable is in addition censored for some values; the type of censoring studied in this research is right censoring. This study aims to obtain the parameter estimators of hurdle negative binomial regression for a right-censored dependent variable, using maximum likelihood estimation (MLE), together with the corresponding test statistics for the censored hurdle negative binomial model. The model is applied to the number of neonatorum tetanus cases in Indonesia, count data that contain zeros for some observations and varied values for others. Based on the regression results, the factors that influence neonatorum tetanus cases in Indonesia are the percentage of baby health care coverage and neonatal visits.
Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.
Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric
2017-02-01
Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min, in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by an adrenal-to-peripheral cortisol ratio ≥2, and lateralization was defined as an adrenal aldosterone-to-cortisol ratio ≥2 times that of the contralateral side. Sequential AVS was simulated using right sampling at -5 min (t = -5) and left sampling at 0 min (t = 0). There was no significant difference in mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides, respectively) or in mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between the 2 simultaneous AVS sets was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Kappa for lateralization between the 2 simultaneous AVS sets was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Concordance between simultaneous and sequential AVS was no different than that between 2 repeated simultaneous AVS in the same patient. Therefore, better diagnostic performance is not a good argument for selecting the AVS method. © 2017 European Society of Endocrinology.
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they are also incorporated into non-uniform sampling signal reconstruction to improve the efficiency, such as random equivalent sampling (RES). However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, and this results in more acquisition runs and longer sampling times. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC), whose ADC cores are time interleaved. A prototype realization of this proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while sampling physically at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
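A sketch of the measurement-matrix construction, with illustrative dimensions: each acquisition run contributes a block whose rows are Whittaker-Shannon (sinc) interpolation weights mapping the equivalent-time Nyquist grid to that run's physical sample times, and the blocks are stacked into one matrix.

```python
# Block measurement matrix for sequential RES: each run takes a short
# sequence of physical samples at times offset randomly from the
# equivalent-time grid; sinc interpolation maps grid unknowns x[j] to
# those samples. Dimensions are illustrative.
import numpy as np

N, T = 512, 1.0    # equivalent-time grid length and sample period
L, step = 8, 40    # physical samples per run, at 1/step of the grid rate
runs = 16

rng = np.random.default_rng(2)
blocks = []
for _ in range(runs):
    t0 = rng.uniform(0, (N - L * step) * T)   # random trigger offset
    t = t0 + np.arange(L) * step * T          # one run's sample times
    j = np.arange(N)
    blocks.append(np.sinc((t[:, None] - j * T) / T))  # sinc interpolation rows
Phi = np.vstack(blocks)                       # combined measurement matrix
print(Phi.shape)                              # (runs*L, N)
```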
Group-sequential three-arm noninferiority clinical trial designs
Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko
2016-01-01
We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
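A toy implementation of the 'basic' two-stage scheme conveys the idea: score half the sample, classify immediately if the estimate is far from the threshold, otherwise score the second half and decide on the pooled estimate. The threshold and band below are illustrative assumptions, not the published values.

```python
# Two-stage sequential pass/fail classification of herd lameness
# prevalence; fail_at and band are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)

def two_stage(true_prev, n_full=60, fail_at=0.15, band=0.05):
    half = n_full // 2
    x1 = rng.binomial(half, true_prev)
    p1 = x1 / half
    if p1 <= fail_at - band:
        return "pass", half            # stop early: clearly acceptable
    if p1 >= fail_at + band:
        return "fail", half            # stop early: clearly not
    x2 = rng.binomial(half, true_prev) # borderline: sample second half
    p = (x1 + x2) / n_full
    return ("fail" if p >= fail_at else "pass"), n_full

results = [two_stage(0.08) for _ in range(100_000)]
decisions, sizes = zip(*results)
print(f"fail rate {decisions.count('fail')/len(decisions):.3f}, "
      f"mean sample {np.mean(sizes):.1f}")
```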
Naik, Umesh Chandra; Das, Mihir Tanay; Sauran, Swati; Thakur, Indu Shekhar
2014-03-01
The present study compares in vitro toxicity of electroplating effluent after the batch treatment process with that obtained after the sequential treatment process. Activated charcoal prepared from sugarcane bagasse through chemical carbonization, and tolerant indigenous bacteria, Bacillus sp. strain IST105, were used individually and sequentially for the treatment of electroplating effluent. The sequential treatment involving activated charcoal followed by bacterial treatment removed 99% of Cr(VI) compared with the batch processes, which removed 40% (charcoal) and 75% (bacteria), respectively. Post-treatment in vitro cyto/genotoxicity was evaluated by the MTT test and the comet assay in human HuH-7 hepatocarcinoma cells. The sequentially treated sample showed an increase in LC50 value with a 6-fold decrease in comet-assay DNA migration compared with that of untreated samples. A significant decrease in DNA migration and an increase in LC50 value of treated effluent proved the higher effectiveness of the sequential treatment process over the individual batch processes. Copyright © 2014 Elsevier B.V. All rights reserved.
van Staden, J F; Mashamba, Mulalo G; Stefan, Raluca I
2002-09-01
An on-line potentiometric sequential injection titration process analyser for the determination of acetic acid is proposed. A solution of 0.1 mol L(-1) sodium chloride is used as carrier. Titration is achieved by aspirating acetic acid samples between two strong base-zone volumes into a holding coil and by channelling the stack of well-defined zones with flow reversal through a reaction coil to a potentiometric sensor, where the peak width is measured. A linear relationship between peak width and the logarithm of the acid concentration was obtained in the range 1-9 g/100 mL. Vinegar samples were analysed without any sample pre-treatment. The method has a relative standard deviation of 0.4% with a sample frequency of 28 samples per hour. The results revealed good agreement between the proposed sequential injection method and an automated batch titration method.
Deus, E. G.; Godoy, W. A. C.; Sousa, M. S. M.; Lopes, G. N.; Jesus-Barros, C. R.; Silva, J. G.; Adaime, R.
2016-01-01
Field infestation and spatial distribution of introduced Bactrocera carambolae Drew and Hancock and native species of Anastrepha in common guavas [Psidium guajava (L.)] were investigated in the eastern Amazon. Fruit sampling was carried out in the municipalities of Calçoene and Oiapoque in the state of Amapá, Brazil. The frequency distribution of larvae in fruit was fitted to the negative binomial distribution. Anastrepha striata Schiner was more abundant in both sampled areas than Anastrepha fraterculus (Wiedemann) and B. carambolae. The frequency distribution analysis of adults revealed an aggregated pattern for B. carambolae as well as for A. fraterculus and A. striata, described by the negative binomial distribution. Although the populations of Anastrepha spp. may have suffered some impact due to the presence of B. carambolae, the results are not yet robust enough to indicate an effective reduction in the abundance of Anastrepha spp. caused by B. carambolae. The high degree of aggregation observed for both species suggests interspecific co-occurrence, with the simultaneous presence of both species in the analysed fruit. Moreover, a significant fraction of uninfested guavas also indicated an absence of competitive displacement. PMID:27638949
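A sketch of fitting a negative binomial to larvae-per-fruit counts by the method of moments; the counts are invented for illustration:

```python
import numpy as np
from scipy import stats

counts = np.array([0, 0, 1, 0, 3, 7, 0, 2, 0, 11, 4, 0, 1, 6, 0, 0, 9, 2, 0, 5])

m, v = counts.mean(), counts.var(ddof=1)
k = m**2 / (v - m)        # moment estimate of the aggregation parameter k
p = k / (k + m)           # scipy's nbinom parameterisation

# Compare observed and expected frequencies for counts 0..11:
obs = np.bincount(counts, minlength=12)
exp = stats.nbinom.pmf(np.arange(12), k, p) * len(counts)
print(f"k = {k:.2f} (small k => aggregation)")
print(np.round(exp, 1))
```

A small k signals the aggregated pattern the authors report for both genera.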
Tran, Phoebe; Waller, Lance
2015-01-01
Lyme disease has been the subject of many studies due to increasing incidence rates year after year and the severe complications that can arise in later stages of the disease. Negative binomial models have been used to model Lyme disease in the past with some success. However, there has been little focus on the reliability and consistency of these models when they are used to study Lyme disease at multiple spatial scales. This study explores how sensitive and how consistent negative binomial models are when they are used to study Lyme disease at different spatial scales (the regional and sub-regional levels). The study area includes the thirteen states in the Northeastern United States with the highest Lyme disease incidence during the 2002-2006 period. Lyme disease incidence at the county level for the period of 2002-2006 was linked with several previously identified key landscape and climatic variables in a negative binomial regression model for the Northeastern region and two smaller sub-regions (the New England sub-region and the Mid-Atlantic sub-region). This study found that negative binomial models were indeed sensitive and inconsistent when used at different spatial scales. We discuss various plausible explanations for this behavior of negative binomial models. Further investigation of the inconsistency and sensitivity of negative binomial models when used at different spatial scales is important not only for future Lyme disease studies and Lyme disease risk assessment/management but for any study that requires use of this model type in a spatial context. Copyright © 2014 Elsevier Inc. All rights reserved.
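A hedged sketch of the kind of negative binomial regression described, refit on a subset to mimic a sub-regional analysis; the covariates and data are synthetic stand-ins, not the paper's landscape and climatic variables:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200                                     # hypothetical counties
forest = rng.uniform(0, 1, n)               # invented covariates
temp = rng.normal(10, 3, n)
mu = np.exp(0.5 + 1.2 * forest - 0.05 * temp)
y = rng.negative_binomial(2.0, 2.0 / (2.0 + mu))   # overdispersed counts

X = sm.add_constant(np.column_stack([forest, temp]))
full = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
sub = sm.GLM(y[:80], X[:80], family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(full.params)   # "regional" fit
print(sub.params)    # "sub-regional" fit: compare coefficient stability
```

Instability of the coefficients across such refits is the scale sensitivity the authors describe.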
Bisseleua, D H B; Vidal, Stefan
2011-02-01
The spatio-temporal distribution of Sahlbergella singularis Haglund, a major pest of cacao trees (Theobroma cacao) (Malvaceae), was studied for 2 yr in traditional cacao forest gardens in the humid forest area of southern Cameroon. The first objective was to analyze the dispersion of this insect on cacao trees. The second objective was to develop sampling plans based on fixed levels of precision for estimating S. singularis populations. The following models were used to analyze the data: Taylor's power law, Iwao's patchiness regression, the Nachman model, and the negative binomial distribution. Our results document that Taylor's power law was a better fit for the data than the Iwao and Nachman models. Taylor's b and Iwao's β were both significantly >1, indicating that S. singularis aggregated on specific trees. This result was further supported by the calculated common k of 1.75444. Iwao's α was significantly <0, indicating that the basic distribution component of S. singularis was the individual insect. Comparison of the negative binomial (NBD) and Nachman models indicated that the NBD model was appropriate for studying S. singularis distribution. Optimal sample sizes for fixed precision levels of 0.10, 0.15, and 0.25 were estimated with Taylor's regression coefficients. Required sample sizes increased dramatically with increasing levels of precision. This is the first study of S. singularis dispersion in cacao plantations. The sampling plans presented here should be a tool for research on population dynamics and pest management decisions for mirid bugs on cacao. © 2011 Entomological Society of America
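A sketch of the Taylor's power law step: regress log variance on log mean across sampling units, with b > 1 indicating aggregation. The counts are simulated, with the common k of 1.75 from the abstract reused for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
k = 1.75
plots = [rng.negative_binomial(k, k / (k + m), size=30)
         for m in np.linspace(0.5, 8, 12)]   # 12 hypothetical sampling dates

means = np.array([p.mean() for p in plots])
varis = np.array([p.var(ddof=1) for p in plots])

# Taylor's power law: s^2 = a * m^b, i.e. log s^2 = log a + b * log m
b, log_a = np.polyfit(np.log(means), np.log(varis), 1)
print(f"Taylor's b = {b:.2f} (b > 1 indicates aggregation)")
```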
ERIC Educational Resources Information Center
van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels
2012-01-01
This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use…
ERIC Educational Resources Information Center
Levin, Eugene M.
1981-01-01
Students' access to programmable calculators and computer terminals, coupled with their familiarity with baseball, provides opportunities to enhance their understanding of the binomial distribution and other aspects of analysis. (MP)
NASA Astrophysics Data System (ADS)
Rajakaruna, Harshana; VandenByllaardt, Julie; Kydd, Jocelyn; Bailey, Sarah
2018-03-01
The International Maritime Organization (IMO) has set limits on allowable plankton concentrations in ballast water discharge to minimize aquatic invasions globally. Previous guidance on ballast water sampling and compliance decision thresholds was based on the assumption that probability distributions of plankton are Poisson when spatially homogeneous, or negative binomial when heterogeneous. We propose a hierarchical probability model, which incorporates distributions at the level of particles (i.e., discrete individuals plus colonies per unit volume) and also within particles (i.e., individuals per particle), to estimate the average plankton concentration in ballast water. We examined the performance of the models using data for plankton in the size class ≥ 10 μm and < 50 μm, collected from five different depths of a ballast tank of a commercial ship in three independent surveys. We show that the data fit the negative binomial and the hierarchical probability models equally well, with both models performing better than the Poisson model at the scale of our sampling. The hierarchical probability model, which accounts for both the individuals and the colonies in a sample, reduces the uncertainty associated with the concentration estimate and improves the power to reject a ship's compliance when the ship does not truly comply with the standard. We show examples of how to test ballast water compliance using the above models.
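A sketch of the Poisson-versus-negative-binomial comparison via AIC on a single count sample; the counts are invented, and the hierarchical particle/within-particle layer is omitted:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

counts = np.array([0, 2, 1, 0, 5, 3, 0, 8, 1, 2, 0, 4, 9, 1, 0, 3])
m = counts.mean()

ll_pois = stats.poisson.logpmf(counts, m).sum()

def negll_nb(k):   # profile likelihood in the heterogeneity parameter k
    return -stats.nbinom.logpmf(counts, k, k / (k + m)).sum()

res = minimize_scalar(negll_nb, bounds=(1e-3, 100), method="bounded")
aic_pois = -2 * ll_pois + 2      # one parameter
aic_nb = 2 * res.fun + 4         # two parameters
print(f"AIC Poisson {aic_pois:.1f} vs negative binomial {aic_nb:.1f}")
```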
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
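The point about simulations giving a false impression of conditional bias can be seen in a small sketch: a one-interim-look trial with a deterministic stopping rule, invented boundary and effect size:

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, c, mu = 50, 100, 2.0, 0.2   # interim n, final n, boundary, true mean

means, sizes = [], []
for _ in range(100_000):
    x1 = rng.normal(mu, 1, n1)
    if abs(x1.mean() * np.sqrt(n1)) > c:      # stop at the interim look
        means.append(x1.mean()); sizes.append(n1)
    else:                                      # continue to the full sample
        x2 = rng.normal(mu, 1, n2 - n1)
        means.append((x1.sum() + x2.sum()) / n2); sizes.append(n2)

means, sizes = np.array(means), np.array(sizes)
print("marginal bias:       ", means.mean() - mu)                # near zero
print("bias | stopped early:", means[sizes == n1].mean() - mu)   # clearly non-zero
```

The sample average looks biased within each stratum of N even though it behaves well marginally, which is exactly the distinction the paper draws.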
New Class of Quantum Error-Correcting Codes for a Bosonic Mode
NASA Astrophysics Data System (ADS)
Michael, Marios H.; Silveri, Matti; Brierley, R. T.; Albert, Victor V.; Salmilehto, Juha; Jiang, Liang; Girvin, S. M.
2016-07-01
We construct a new class of quantum error-correcting codes for a bosonic mode, which are advantageous for applications in quantum memories, communication, and scalable computation. These "binomial quantum codes" are formed from a finite superposition of Fock states weighted with binomial coefficients. The binomial codes can exactly correct errors that are polynomial up to a specific degree in bosonic creation and annihilation operators, including amplitude damping and displacement noise as well as boson addition and dephasing errors. For realistic continuous-time dissipative evolution, the codes can perform approximate quantum error correction to any given order in the time step between error detection measurements. We present an explicit approximate quantum error recovery operation based on projective measurements and unitary operations. The binomial codes are tailored for detecting boson loss and gain errors by means of measurements of the generalized number parity. We discuss optimization of the binomial codes and demonstrate that by relaxing the parity structure, codes with even lower unrecoverable error rates can be achieved. The binomial codes are related to existing two-mode bosonic codes, but offer the advantage of requiring only a single bosonic mode to correct amplitude damping as well as the ability to correct other errors. Our codes are similar in spirit to "cat codes" based on superpositions of the coherent states but offer several advantages such as smaller mean boson number, exact rather than approximate orthonormality of the code words, and an explicit unitary operation for repumping energy into the bosonic mode. The binomial quantum codes are realizable with current superconducting circuit technology, and they should prove useful in other quantum technologies, including bosonic quantum memories, photonic quantum communication, and optical-to-microwave up- and down-conversion.
Effects of Test Level Discrimination and Difficulty on Answer-Copying Indices
ERIC Educational Resources Information Center
Sunbul, Onder; Yormaz, Seha
2018-01-01
In this study, Type I error and power rates of the omega (ω) and GBT (generalized binomial test) indices were investigated for several nominal alpha levels and for 40- and 80-item test lengths with a 10,000-examinee sample size under several test level restrictions. As a result, Type I error rates of both indices were found to be below the acceptable…
Contemporary multilevel analysis of the effectiveness of water fluoridation in Australia.
Do, Loc; Spencer, A John
2015-02-01
Water fluoridation was extended in Queensland, Australia, across 2009-2011. A research program was commenced to inform the rationale for and the outcome of this program, to estimate the effectiveness of water fluoridation in preventing caries and to predict changes in caries experience as a result of the extension of fluoridation. Queensland children were selected through a stratified random sample selection in 2010-2012. Oral epidemiological examinations provided individual-level outcomes for decayed, missing or filled primary or permanent tooth surfaces: dmfs (among 5-8-year-olds) and DMFS (9-14-year-olds). Explanatory factors at the individual-level, school-level and area-level fluoridation status were derived. Data were weighted to represent the population. Three-level multilevel multivariable models were sequentially specified for negative binomial distribution of dmfs/DMFS to estimate rate ratios (RR). The effectiveness of area-level water fluoridation was evaluated in the full models controlling for other factors. Data from 2,214 5-8 year-olds and 3,186 9-14 year-olds from 207 schools in 16 areas were analysed. Queensland's average dmfs was 4.23 and DMFS 1.47. The lowest levels of dental caries were observed in long-term fluoridated Townsville. In the full models, Townsville children had significantly lower caries experience (RR for dmfs: 0.61 (95%CI: 0.44-0.82); RR for DMFS 0.60 (95%CI: 0.42-0.88)) compared with children in non-fluoridated areas. Comparison of caries experience of children at the time of the extension of water fluoridation supported the rationale for this population health measure. © 2014 Public Health Association of Australia.
Chen, Wansu; Shi, Jiaxiao; Qian, Lei; Azen, Stanley P
2014-06-26
To estimate relative risks or risk ratios for common binary outcomes, the most popular model-based methods are the robust (also known as modified) Poisson and the log-binomial regression. Of the two methods, it is believed that the log-binomial regression yields more efficient estimators because it is maximum likelihood based, while the robust Poisson model may be less affected by outliers. Evidence to support the robustness of robust Poisson models in comparison with log-binomial models is very limited. In this study a simulation was conducted to evaluate the performance of the two methods in several scenarios where outliers existed. The findings indicate that for data coming from a population where the relationship between the outcome and the covariate was in a simple form (e.g. log-linear), the two models yielded comparable biases and mean square errors. However, if the true relationship contained a higher order term, the robust Poisson models consistently outperformed the log-binomial models even when the level of contamination is low. The robust Poisson models are more robust (or less sensitive) to outliers compared to the log-binomial models when estimating relative risks or risk ratios for common binary outcomes. Users should be aware of the limitations when choosing appropriate models to estimate relative risks or risk ratios.
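A sketch of the two estimators for a common binary outcome: a log-binomial GLM and a Poisson GLM with robust (HC0) standard errors, on synthetic data generated from a simple log-linear risk model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
x = rng.uniform(0, 1, n)
risk = np.exp(-1.5 + 0.8 * x)        # true log-linear risk
y = rng.binomial(1, risk)
X = sm.add_constant(x)

logbin = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log())).fit()
robpois = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print("RR per unit x, log-binomial:  ", np.exp(logbin.params[1]))
print("RR per unit x, robust Poisson:", np.exp(robpois.params[1]))
```

Note the log-binomial fit can fail to converge when fitted risks approach 1, one practical reason the robust Poisson model is often preferred in addition to the outlier robustness studied here.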
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
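For orientation, a minimal EnKF analysis step of the kind SEOD wraps its design loop around; the toy forward map and all settings are invented:

```python
import numpy as np

rng = np.random.default_rng(5)

def enkf_update(ensemble, forward, y_obs, r_var):
    """One EnKF analysis: ensemble is (n_ens, n_param); forward maps a
    parameter vector to predicted observations; r_var is the obs variance."""
    Y = np.array([forward(m) for m in ensemble])       # (n_ens, n_obs)
    A = ensemble - ensemble.mean(0)
    D = Y - Y.mean(0)
    n_ens = len(ensemble)
    C_md = A.T @ D / (n_ens - 1)                       # parameter-obs covariance
    C_dd = D.T @ D / (n_ens - 1) + r_var * np.eye(Y.shape[1])
    K = C_md @ np.linalg.inv(C_dd)                     # Kalman gain
    perturbed = y_obs + rng.normal(0, np.sqrt(r_var), Y.shape)
    return ensemble + (perturbed - Y) @ K.T

# Toy problem: estimate k in y = k * x, observed at x = 0.5 and x = 1.5
forward = lambda m: np.array([0.5 * m[0], 1.5 * m[0]])
ens = rng.normal(2.0, 1.0, (100, 1))                   # prior ensemble
print(enkf_update(ens, forward, np.array([1.5, 4.5]), 0.01).mean())  # ~3.0
```

An optimal-design layer would then choose which observations (here, the x locations) to collect next, scoring candidates by an information metric such as SD, DFS or RE.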
ERIC Educational Resources Information Center
Jacobson, Peggy F.; Walden, Patrick R.
2013-01-01
Purpose: This study explored the utility of language sample analysis for evaluating language ability in school-age Spanish-English sequential bilingual children. Specifically, the relative potential of lexical diversity and word/morpheme omission as predictors of typical or atypical language status was evaluated. Method: Narrative samples were…
ERIC Educational Resources Information Center
Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo
2015-01-01
Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…
40 CFR 53.34 - Test procedure for methods for PM10 and Class I methods for PM2.5.
Code of Federal Regulations, 2010 CFR
2010-07-01
... simultaneous PM10 or PM2.5 measurements as necessary (see table C-4 of this subpart), each set consisting of...) in appendix A to this subpart). (f) Sequential samplers. For sequential samplers, the sampler shall be configured for the maximum number of sequential samples and shall be set for automatic collection...
Reduction of display artifacts by random sampling
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.
1983-01-01
The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.
Speech-discrimination scores modeled as a binomial variable.
Thornton, A R; Raffin, M J
1978-09-01
Many studies have reported variability data for tests of speech discrimination, and the disparate results of these studies have not been given a simple explanation. Arguments over the relative merits of 25- vs 50-word tests have ignored the basic mathematical properties inherent in the use of percentage scores. The present study models performance on clinical tests of speech discrimination as a binomial variable. A binomial model was developed, and some of its characteristics were tested against data from 4120 scores obtained on the CID Auditory Test W-22. A table for determining significant deviations between scores was generated and compared to observed differences in half-list scores for the W-22 tests. Good agreement was found between predicted and observed values. Implications of the binomial characteristics of speech-discrimination scores are discussed.
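An illustrative check in the spirit of the binomial model: whether two half-list scores differ significantly, here via a pooled two-proportion z-test rather than the paper's exact table:

```python
import math
from scipy import stats

# Do 76% (38/50) and 60% (30/50) on two 50-word lists really differ?
k1, k2, n = 38, 30, 50
p_pool = (k1 + k2) / (2 * n)
z = (k1 / n - k2 / n) / math.sqrt(2 * p_pool * (1 - p_pool) / n)
print(f"z = {z:.2f}, two-sided p = {2 * stats.norm.sf(abs(z)):.3f}")
```

A 16-point difference on 50-word lists is not significant at the 0.05 level, the kind of counterintuitive result the binomial treatment makes explicit.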
Possibility and Challenges of Conversion of Current Virus Species Names to Linnaean Binomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Postler, Thomas S.; Clawson, Anna N.; Amarasinghe, Gaya K.
Botanical, mycological, zoological, and prokaryotic species names follow the Linnaean format, consisting of an italicized Latinized binomen with a capitalized genus name and a lower case species epithet (e.g., Homo sapiens). Virus species names, however, do not follow a uniform format, and, even when binomial, are not Linnaean in style. In this thought exercise, we attempted to convert all currently official names of species included in the virus family Arenaviridae and the virus order Mononegavirales to Linnaean binomials, and to identify and address associated challenges and concerns. Surprisingly, this endeavor was not as complicated or time-consuming as even the authors of this article expected when conceiving the experiment. [Arenaviridae; binomials; ICTV; International Committee on Taxonomy of Viruses; Mononegavirales; virus nomenclature; virus taxonomy.]
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method used to build them. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels are constructed repeatedly through the addition of sampling points, namely the extrema points of the current metamodel and the minimum points of a density function, yielding progressively more accurate metamodels. The validity and effectiveness of the proposed sampling method are examined through typical numerical examples. PMID:25133206
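A hedged one-dimensional sketch of such a sequential loop, alternating between the metamodel's minimiser and a space-filling (low sampling density) point; the test function and all settings are invented:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import minimize_scalar

f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in for an expensive simulation
X = np.linspace(0, 4, 5)[:, None]              # initial design
y = f(X.ravel())

for i in range(6):
    model = RBFInterpolator(X, y)
    if i % 2 == 0:   # criterion 1: the current metamodel's minimiser
        res = minimize_scalar(lambda x: model([[x]])[0], bounds=(0, 4), method="bounded")
        x_new = res.x
    else:            # criterion 2: the point farthest from existing samples
        cand = np.linspace(0, 4, 401)
        x_new = cand[np.argmax(np.min(np.abs(cand[:, None] - X.ravel()), axis=1))]
    if np.min(np.abs(X.ravel() - x_new)) < 1e-6:
        x_new += 1e-3                          # avoid duplicate sites
    X = np.vstack([X, [[x_new]]])
    y = np.append(y, f(x_new))

print("best sampled point:", X[np.argmin(y)].item(), y.min())
```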
Somnam, Sarawut; Jakmunee, Jaroon; Grudpan, Kate; Lenghor, Narong; Motomizu, Shoji
2008-12-01
An automated hydrodynamic sequential injection (HSI) system with spectrophotometric detection was developed. Thanks to the hydrodynamic injection principle, simple devices can be used for introducing reproducible microliter volumes of both sample and reagent into the flow channel to form stacked zones in a similar fashion to those in a sequential injection system. The zones were then pushed to the detector and a peak profile was recorded. The determination of nitrite and nitrate in water samples by employing the Griess reaction was chosen as a model. Calibration graphs with linearity in the range of 0.7-40 μM were obtained for both nitrite and nitrate. Detection limits were found to be 0.3 μM NO(2)(-) and 0.4 μM NO(3)(-), respectively, with a sample throughput of 20 h(-1) for consecutive determination of both species. The developed system was successfully applied to the analysis of water samples, employing simple and cost-effective instrumentation and offering higher degrees of automation and low chemical consumption.
The coverage of a random sample from a biological community.
Engen, S
1975-03-01
A taxonomic group will frequently have a large number of species with small abundances. When a sample is drawn at random from this group, one is therefore faced with the problem that a large proportion of the species will not be discovered. A general definition of quantitative measures of "sample coverage" is proposed, and the problem of statistical inference is considered for two special cases, (1) the actual total relative abundance of those species that are represented in the sample, and (2) their relative contribution to the information index of diversity. The analysis is based on an extended version of the negative binomial species frequency model. The results are tabulated.
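Case (1), the total relative abundance of the observed species, has a classical nonparametric estimator (Turing's), shown here as a point of reference rather than the paper's negative binomial treatment:

```python
import numpy as np

# Hypothetical species counts in a random sample:
counts = np.array([42, 17, 9, 5, 3, 2, 2, 1, 1, 1, 1, 1])
n = counts.sum()
f1 = (counts == 1).sum()        # number of singleton species
coverage = 1 - f1 / n           # Turing's estimate of the sample coverage
print(f"estimated coverage: {coverage:.3f}")
```

Many singletons imply many undiscovered species, hence low coverage.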
Statistical methods for the beta-binomial model in teratology.
Yamamoto, E; Yanagimoto, T
1994-01-01
The beta-binomial model is widely used for analyzing teratological data involving littermates. Recent developments in statistical analyses of teratological data are briefly reviewed, with emphasis on the model. For statistical inference of the parameters in the beta-binomial distribution, separation of the likelihood introduces a likelihood-based inference. This reduces the biases of estimators and improves the accuracy of the empirical significance levels of tests. Separate inference of the parameters can be conducted in a unified way. PMID:8187716
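A sketch of maximum likelihood fitting of the beta-binomial to litter data; the litters are invented, and this plain joint likelihood stands in for the separation-based inference discussed:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

x = np.array([0, 1, 0, 3, 2, 0, 5, 1, 0, 2])        # affected pups per litter
m = np.array([8, 10, 7, 12, 9, 11, 13, 8, 10, 9])   # litter sizes

def negll(theta):
    a, b = np.exp(theta)                             # keep both parameters positive
    return -stats.betabinom.logpmf(x, m, a, b).sum()

res = minimize(negll, x0=[0.0, 1.0], method="Nelder-Mead")
a, b = np.exp(res.x)
print(f"mean risk = {a / (a + b):.3f}, intra-litter correlation = {1 / (a + b + 1):.3f}")
```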
Near real-time vaccine safety surveillance with partially accrued data.
Greene, Sharon K; Kulldorff, Martin; Yin, Ruihua; Yih, W Katherine; Lieu, Tracy A; Weintraub, Eric S; Lee, Grace M
2011-06-01
The Vaccine Safety Datalink (VSD) Project conducts near real-time vaccine safety surveillance using sequential analytic methods. Timely surveillance is critical in identifying potential safety problems and preventing additional exposure before most vaccines are administered. For vaccines that are administered during a short period, such as influenza vaccines, timeliness can be improved by undertaking analyses while risk windows following vaccination are ongoing and by accommodating predictable and unpredictable data accrual delays. We describe practical solutions to these challenges, which were adopted by the VSD Project during pandemic and seasonal influenza vaccine safety surveillance in 2009/2010. Adjustments were made to two sequential analytic approaches. The Poisson-based approach compared the number of pre-defined adverse events observed following vaccination with the number expected using historical data. The expected number was adjusted for the proportion of the risk window elapsed and the proportion of inpatient data estimated to have accrued. The binomial-based approach used a self-controlled design, comparing the observed numbers of events in risk versus comparison windows. Events were included in analysis only if they occurred during a week that had already passed for both windows. Analyzing data before risk windows fully elapsed improved the timeliness of safety surveillance. Adjustments for data accrual lags were tailored to each data source and avoided biasing analyses away from detecting a potential safety problem, particularly early during surveillance. The timeliness of vaccine and drug safety surveillance can be improved by properly accounting for partially elapsed windows and data accrual delays. Copyright © 2011 John Wiley & Sons, Ltd.
Decision making and sequential sampling from memory
Shadlen, Michael N.; Shohamy, Daphna
2016-01-01
Decisions take time, and as a rule more difficult decisions take more time. But this only raises the question of what consumes the time. For decisions informed by a sequence of samples of evidence, the answer is straightforward: more samples are available with more time. Indeed the speed and accuracy of such decisions are explained by the accumulation of evidence to a threshold or bound. However, the same framework seems to apply to decisions that are not obviously informed by sequences of evidence samples. Here we proffer the hypothesis that the sequential character of such tasks involves retrieval of evidence from memory. We explore this hypothesis by focusing on value-based decisions and argue that mnemonic processes can account for regularities in choice and decision time. We speculate on the neural mechanisms that link sampling of evidence from memory to circuits that represent the accumulated evidence bearing on a choice. We propose that memory processes may contribute to a wider class of decisions that conform to the regularities of choice-reaction time predicted by the sequential sampling framework. PMID:27253447
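A minimal accumulation-to-bound simulation, the mechanism this framework shares across perceptual and value-based decisions; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def diffusion_trial(drift, bound=1.0, dt=0.01, noise=1.0):
    """Accumulate noisy evidence until a bound is crossed; returns (choice, RT).
    Smaller drift = harder decision = longer times, on average."""
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return x > 0, t

hard = [diffusion_trial(0.5)[1] for _ in range(500)]
easy = [diffusion_trial(2.0)[1] for _ in range(500)]
print(f"mean RT: hard {np.mean(hard):.2f}s vs easy {np.mean(easy):.2f}s")
```

Under the memory hypothesis, each increment would correspond to evidence retrieved from memory rather than from the stimulus.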
NASA Astrophysics Data System (ADS)
Amaliana, Luthfatul; Sa'adah, Umu; Wayan Surya Wardhani, Ni
2017-12-01
Tetanus Neonatorum is an infectious disease that can be prevented by immunization. The number of Tetanus Neonatorum cases in East Java Province was the highest in Indonesia until 2015. The Tetanus Neonatorum data exhibit overdispersion and a substantial proportion of zero counts. Negative Binomial (NB) regression is an alternative method when overdispersion occurs in Poisson regression. However, data containing both overdispersion and zero-inflation are more appropriately analyzed by Zero-Inflated Negative Binomial (ZINB) regression. The purposes of this study are: (1) to model Tetanus Neonatorum cases in East Java Province, with a 71.05 percent proportion of zeros, by using NB and ZINB regression, and (2) to obtain the best model. The results of this study indicate that ZINB is better than NB regression, with a smaller AIC.
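A sketch of the NB-versus-ZINB comparison on synthetic zero-inflated counts, using statsmodels' ZeroInflatedNegativeBinomialP; the covariate and inflation rate are invented:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(7)
n = 300                                   # hypothetical districts
x = rng.uniform(0, 1, n)                  # e.g. immunization coverage (invented)
mu = np.exp(1.0 - 1.5 * x)
y = rng.negative_binomial(1.0, 1.0 / (1.0 + mu))
y[rng.uniform(size=n) < 0.5] = 0          # inject structural zeros

X = sm.add_constant(x)
zinb = ZeroInflatedNegativeBinomialP(y, X, exog_infl=np.ones((n, 1))).fit(disp=0)
nb = sm.NegativeBinomial(y, X).fit(disp=0)
print("AIC ZINB vs NB:", zinb.aic, nb.aic)
```

With heavy zero-inflation the ZINB's AIC should come out smaller, mirroring the study's conclusion.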
Analysis of generalized negative binomial distributions attached to hyperbolic Landau levels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chhaiba, Hassan, E-mail: chhaiba.hassan@gmail.com; Demni, Nizar, E-mail: nizar.demni@univ-rennes1.fr; Mouayn, Zouhair, E-mail: mouayn@fstbm.ac.ma
2016-07-15
To each hyperbolic Landau level of the Poincaré disc is attached a generalized negative binomial distribution. In this paper, we compute the moment generating function of this distribution and supply its atomic decomposition as a perturbation of the negative binomial distribution by a finitely supported measure. Using the Mandel parameter, we also discuss the nonclassical nature of the associated coherent states. Next, we derive a Lévy-Khintchine-type representation of its characteristic function when the latter does not vanish and deduce that it is quasi-infinitely divisible except for the lowest hyperbolic Landau level, corresponding to the negative binomial distribution. By considering the total variation of the obtained quasi-Lévy measure, we introduce a new infinitely divisible distribution for which we derive the characteristic function.
A powerful and flexible approach to the analysis of RNA sequence count data.
Zhou, Yi-Hui; Xia, Kai; Wright, Fred A
2011-10-01
A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean-variance relationships provides a flexible testing regimen that 'borrows' information across genes, while easily incorporating design effects and additional covariates. We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean-variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq (contact: yzhou@bios.unc.edu; fwright@bios.unc.edu). Supplementary data are available at Bioinformatics online.
A Statistical Treatment of Bioassay Pour Fractions
NASA Technical Reports Server (NTRS)
Barengoltz, Jack; Hughes, David W.
2014-01-01
The binomial probability distribution is used to treat the statistics of a microbiological sample that is split into two parts, with only one part evaluated for spore count. One wishes to estimate the total number of spores in the sample based on the counts obtained from the part that is evaluated (pour fraction). Formally, the binomial distribution is recharacterized as a function of the observed counts (successes), with the total number (trials) an unknown. The pour fraction is the probability of success per spore (trial). This distribution must be renormalized in terms of the total number. Finally, the new renormalized distribution is integrated and mathematically inverted to yield the maximum estimate of the total number as a function of a desired level of confidence.
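One way to carry out the inversion numerically: find the largest total count still consistent, at the chosen confidence, with what was observed in the evaluated fraction (a sketch; the paper derives the estimate analytically):

```python
from scipy import stats

def max_total_spores(k_observed, pour_fraction=0.5, confidence=0.95):
    """Largest N with P(X <= k_observed | N, pour_fraction) >= 1 - confidence,
    i.e. an upper estimate of the total count given the observed part."""
    N = k_observed
    while stats.binom.cdf(k_observed, N + 1, pour_fraction) >= 1 - confidence:
        N += 1
    return N

print(max_total_spores(3))   # upper estimate when 3 spores are counted at f = 0.5
```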
Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia
2018-01-01
Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617), two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design. No differences in health indicators were found between the two designs. Modelling these results for higher response rates and larger net sample sizes indicated that the sequential design was more cost- and time-effective. This study contributes to the research available on implementing mixed-mode designs as part of public health surveys. Our findings show that SAQ-Paper and SAQ-Web questionnaires can be combined effectively. Sequential mixed-mode designs with higher rates of online respondents may be of greater benefit to studies with larger net sample sizes than concurrent mixed-mode designs.
Bakuza, Jared S.; Denwood, Matthew J.; Nkwengulila, Gamba
2017-01-01
Background Schistosoma mansoni is a parasite of major public health importance in developing countries, where it causes a neglected tropical disease known as intestinal schistosomiasis. However, the distribution of the parasite within many endemic regions is currently unknown, which hinders effective control. The purpose of this study was to characterize the prevalence and intensity of infection of S. mansoni in a remote area of western Tanzania. Methodology/Principal findings Stool samples were collected from 192 children and 147 adults residing in Gombe National Park and four nearby villages. Children were actively sampled in local schools, and adults were sampled passively by voluntary presentation at the local health clinics. The two datasets were therefore analysed separately. Faecal worm egg count (FWEC) data were analysed using negative binomial and zero-inflated negative binomial (ZINB) models with explanatory variables of site, sex, and age. The ZINB models indicated that a substantial proportion of the observed zero FWEC reflected a failure to detect eggs in truly infected individuals, meaning that the estimated true prevalence was much higher than the apparent prevalence as calculated based on the simple proportion of non-zero FWEC. For the passively sampled data from adults, the data were consistent with close to 100% true prevalence of infection. Both the prevalence and intensity of infection differed significantly between sites, but there were no significant associations with sex or age. Conclusions/Significance Overall, our data suggest a more widespread distribution of S. mansoni in this part of Tanzania than was previously thought. The apparent prevalence estimates substantially under-estimated the true prevalence as determined by the ZINB models, and the two types of sampling strategies also resulted in differing conclusions regarding prevalence of infection. We therefore recommend that future surveillance programmes designed to assess risk factors should use active sampling whenever possible, in order to avoid the self-selection bias associated with passive sampling. PMID:28934206
The Binomial Distribution in Shooting
ERIC Educational Resources Information Center
Chalikias, Miltiadis S.
2009-01-01
The binomial distribution is used to predict the winner of the 49th International Shooting Sport Federation World Championship in double trap shooting held in 2006 in Zagreb, Croatia. The outcome of the competition was definitely unexpected.
Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.
2014-01-01
Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm3 FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations. PMID:25186406
Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew
2014-03-01
Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association
The magnetisation distribution of the Ising model - a new approach
NASA Astrophysics Data System (ADS)
Hakan Lundow, Per; Rosengren, Anders
2010-03-01
A completely new approach to the Ising model in 1 to 5 dimensions is developed. We employ a generalisation of the binomial coefficients to describe the magnetisation distributions of the Ising model. For the complete graph this distribution is exact. For simple lattices of dimensions d=1 and d=5 the magnetisation distributions are remarkably well fitted by the generalised binomial distributions. For d=4 we are only slightly less successful, while for d=2,3 we see some deviations (with exceptions!) between the generalised binomial and the Ising distributions. The results speak in favour of the generalised binomial distributions' correctness regarding their general behaviour in comparison to the Ising model. A theoretical analysis of the distributions' moments also lends support to their being asymptotically correct, including the logarithmic corrections in d=4. The full extent to which they correctly model the Ising distribution, and for which graph families, is not settled though.
Phase transition and information cascade in a voting model
NASA Astrophysics Data System (ADS)
Hisakado, M.; Mori, S.
2010-08-01
In this paper, we introduce a voting model that is similar to a Keynesian beauty contest and analyse it from a mathematical point of view. There are two types of voters—copycat and independent—and two candidates. Our voting model is a binomial distribution (independent voters) doped in a beta binomial distribution (copycat voters). We find that the phase transition in this system is at the upper limit of t, where t is the time (or the number of votes). Our model contains three phases. If copycats constitute a majority or even half of the total voters, the voting rate converges more slowly than it would in a binomial distribution. If independents constitute the majority of voters, the voting rate converges at the same rate as it would in a binomial distribution. We also study why it is difficult to estimate the outcome of a Keynesian beauty contest when there is an information cascade.
Abstract knowledge versus direct experience in processing of binomial expressions
Morgan, Emily; Levy, Roger
2016-01-01
We ask whether word order preferences for binomial expressions of the form A and B (e.g. bread and butter) are driven by abstract linguistic knowledge of ordering constraints referencing the semantic, phonological, and lexical properties of the constituent words, or by prior direct experience with the specific items in questions. Using forced-choice and self-paced reading tasks, we demonstrate that online processing of never-before-seen binomials is influenced by abstract knowledge of ordering constraints, which we estimate with a probabilistic model. In contrast, online processing of highly frequent binomials is primarily driven by direct experience, which we estimate from corpus frequency counts. We propose a trade-off wherein processing of novel expressions relies upon abstract knowledge, while reliance upon direct experience increases with increased exposure to an expression. Our findings support theories of language processing in which both compositional generation and direct, holistic reuse of multi-word expressions play crucial roles. PMID:27776281
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves the rendering speed by over three times compared with the conventional algorithm, while image quality is well maintained. PMID:23424608
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
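A rough simulation of the two-stage drop-the-losers idea with a fixed per-stage sample size; the naive final z-test below ignores the selection at the interim, which real designs correct for with adjusted critical values:

```python
import numpy as np

rng = np.random.default_rng(8)

def drop_the_losers(effects, n_stage=50, n_sims=20_000, crit=1.96):
    """Stage 1: all experimental arms plus control; the best arm and control
    continue to stage 2; the final test pools both stages for that arm."""
    wins = 0
    for _ in range(n_sims):
        stage1 = [rng.normal(e, 1, n_stage) for e in effects]
        ctrl = rng.normal(0, 1, 2 * n_stage)           # control over both stages
        best = int(np.argmax([s.mean() for s in stage1]))
        arm = np.concatenate([stage1[best], rng.normal(effects[best], 1, n_stage)])
        z = (arm.mean() - ctrl.mean()) / np.sqrt(1 / n_stage)
        wins += z > crit
    return wins / n_sims

print("rejection rate:", drop_the_losers([0.0, 0.2, 0.4]))
```

The total sample size is identical in every run, unlike in a group-sequential trial, which is the budgeting advantage the authors emphasise.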
Predicting data saturation in qualitative surveys with mathematical models from ecological research.
Tran, Viet-Thi; Porcher, Raphael; Tran, Viet-Chi; Ravaud, Philippe
2017-02-01
Sample size in surveys with open-ended questions relies on the principle of data saturation. Determining the point of data saturation is complex because researchers have information on only what they have found. The decision to stop data collection is solely dictated by the judgment and experience of researchers. In this article, we present how mathematical modeling may be used to describe and extrapolate the accumulation of themes during a study to help researchers determine the point of data saturation. The model considers a latent distribution of the probability of elicitation of all themes and infers the accumulation of themes as arising from a mixture of zero-truncated binomial distributions. We illustrate how the model could be used with data from a survey with open-ended questions on the burden of treatment involving 1,053 participants from 34 different countries and with various conditions. The performance of the model in predicting the number of themes to be found with the inclusion of new participants was investigated by Monte Carlo simulations. Then, we tested how the slope of the expected theme accumulation curve could be used as a stopping criterion for data collection in surveys with open-ended questions. By doubling the sample size after the inclusion of initial samples of 25 to 200 participants, the model reliably predicted the number of themes to be found. Mean estimation error ranged from 3% to 1% with simulated data and was <2% with data from the study of the burden of treatment. Sequentially calculating the slope of the expected theme accumulation curve for every five new participants included was a feasible approach to balance the benefits of including these new participants in the study. In our simulations, a stopping criterion based on a value of 0.05 for this slope allowed for identifying 97.5% of the themes while limiting the inclusion of participants eliciting nothing new in the study. Mathematical models adapted from ecological research can accurately predict the point of data saturation in surveys with open-ended questions. Copyright © 2016 Elsevier Inc. All rights reserved.
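A sketch of the accumulation-curve logic with the latent elicitation probabilities taken as known (the paper instead infers them from the data via zero-truncated binomial mixtures); all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(9)
probs = rng.beta(0.3, 3.0, 60)    # latent per-theme elicitation probabilities

def expected_themes(n):
    """Expected number of distinct themes elicited after n participants."""
    return np.sum(1 - (1 - probs) ** n)

n = 25
while (expected_themes(n + 5) - expected_themes(n)) / 5 >= 0.05:
    n += 5                        # the slope-based stopping criterion
print(f"stop at n = {n}; themes found ≈ {expected_themes(n):.1f} of {len(probs)}")
```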
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray-Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
On extinction time of a generalized endemic chain-binomial model.
Aydogmus, Ozgur
2016-09-01
We consider a chain-binomial epidemic model that does not confer immunity after infection. The mean field dynamics of the model are analyzed, and conditions for the existence of a stable endemic equilibrium are determined. The behavior of the chain-binomial process is probabilistically linked to the mean field equation. As a result of this link, we are able to show that the mean extinction time of the epidemic increases at least exponentially as the population size grows. We also present simulation results for the process to validate our analytical findings. Copyright © 2016 Elsevier Inc. All rights reserved.
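For intuition, here is a minimal Monte Carlo sketch of a chain-binomial epidemic without immunity (an SIS-type formulation chosen for illustration; the paper's exact transition probabilities and parameters may differ). The small population size keeps the run quick; the paper's point is that the mean extinction time grows at least exponentially with population size.

```python
import numpy as np

rng = np.random.default_rng(1)

def extinction_time(N=30, I0=5, beta=0.4, gamma=0.3, t_max=10_000):
    """One realization of a chain-binomial epidemic without immunity:
    each susceptible is infected with probability 1 - (1 - beta/N)**I per
    step, and each infective recovers back to susceptible with probability
    gamma. Returns the first step at which no infectives remain."""
    I = I0
    for t in range(1, t_max + 1):
        new_infections = rng.binomial(N - I, 1.0 - (1.0 - beta / N) ** I)
        recoveries = rng.binomial(I, gamma)
        I += new_infections - recoveries
        if I == 0:
            return t
    return t_max

times = [extinction_time() for _ in range(100)]
print("mean extinction time:", np.mean(times))
```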
Solar San Diego: The Impact of Binomial Rate Structures on Real PV Systems; Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
VanGeet, O.; Brown, E.; Blair, T.
2008-05-01
There is confusion in the marketplace regarding the impact of solar photovoltaics (PV) on the user's actual electricity bill under California Net Energy Metering, particularly with binomial tariffs (those that include both demand and energy charges) and time-of-use (TOU) rate structures. The City of San Diego has extensive real-time electrical metering on most of its buildings and PV systems, with interval data for overall consumption and PV electrical production available for multiple years. This paper uses 2007 PV-system data from two city facilities to illustrate the impacts of binomial rate designs. The analysis determines the energy and demand savings that the PV systems are achieving relative to the absence of systems. A financial analysis of PV-system performance under various rate structures is presented. The data revealed that the actual demand and energy use benefits of binomial tariffs increase in summer months, when solar resources allow for maximized electricity production. In a binomial tariff system, varying on- and semi-peak times can result in approximately $1,100 change in demand charges per month over not having a PV system in place, an approximate 30% cost savings. The PV systems are also shown to have a 30%-50% reduction in facility energy charges in 2007.
Davis, Joshua S; Sud, Archana; O'Sullivan, Matthew V N; Robinson, James O; Ferguson, Patricia E; Foo, Hong; van Hal, Sebastiaan J; Ralph, Anna P; Howden, Benjamin P; Binks, Paula M; Kirby, Adrienne; Tong, Steven Y C; Tong, Steven; Davis, Joshua; Binks, Paula; Majumdar, Suman; Ralph, Anna; Baird, Rob; Gordon, Claire; Jeremiah, Cameron; Leung, Grace; Brischetto, Anna; Crowe, Amy; Dakh, Farshid; Whykes, Kelly; Kirkwood, Maria; Sud, Archana; Menon, Mahesh; Somerville, Lucy; Subedi, Shrada; Owen, Shirley; O'Sullivan, Matthew; Liu, Eunice; Zhou, Fei; Robinson, Owen; Coombs, Geoffrey; Ferguson, Patrician; Ralph, Anna; Liu, Eunice; Pollet, Simon; Van Hal, Sebastian; Foo, Hong; Van Hal, Sebastian; Davis, Rebecca
2016-01-15
In vitro laboratory and animal studies demonstrate a synergistic role for the combination of vancomycin and antistaphylococcal β-lactams for methicillin-resistant Staphylococcus aureus (MRSA) bacteremia. Prospective clinical data are lacking. In this open-label, multicenter, clinical trial, adults with MRSA bacteremia received vancomycin 1.5 g intravenously twice daily and were randomly assigned (1:1) to receive intravenous flucloxacillin 2 g every 6 hours for 7 days (combination group) or no additional therapy (standard therapy group). Participants were stratified by hospital and randomized in permuted blocks of variable size. Randomization codes were kept in sealed, sequentially numbered, opaque envelopes. The primary outcome was the duration of MRSA bacteremia in days. We randomly assigned 60 patients to receive vancomycin (n = 29) or vancomycin plus flucloxacillin (n = 31). The mean duration of bacteremia was 3.00 days in the standard therapy group and 1.94 days in the combination group. According to a negative binomial model, the mean time to resolution of bacteremia in the combination group was 65% (95% confidence interval, 41%-102%; P = .06) of that in the standard therapy group. There was no difference in the secondary end points of 28- and 90-day mortality, metastatic infection, nephrotoxicity, or hepatotoxicity. Combining an antistaphylococcal β-lactam with vancomycin may shorten the duration of MRSA bacteremia. Further trials with a larger sample size and objective clinically relevant end points are warranted. Australian New Zealand Clinical Trials Registry: ACTRN12610000940077 (www.anzctr.org.au). © The Author 2015. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.
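A negative binomial model of this kind can be fit with standard tools. The sketch below uses synthetic counts whose arm sizes and mean durations loosely mimic those reported; the dispersion value and the simulated data are assumptions, not trial data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical data: duration of bacteremia in days for standard therapy
# (group=0, n=29) and combination therapy (group=1, n=31), with means
# near the reported 3.00 and 1.94 days.
group = np.repeat([0, 1], [29, 31])
days = np.concatenate([rng.negative_binomial(5, 5 / 8.0, 29),    # mean ~3
                       rng.negative_binomial(5, 5 / 7.0, 31)])   # mean ~2

X = sm.add_constant(group)
fit = sm.GLM(days, X, family=sm.families.NegativeBinomial(alpha=0.2)).fit()
# exp(slope) estimates the ratio of mean durations (combination/standard),
# the quantity reported as 65% in the abstract.
print(np.exp(fit.params[1]))
```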
Clinical and MRI activity as determinants of sample size for pediatric multiple sclerosis trials
Verhey, Leonard H.; Signori, Alessio; Arnold, Douglas L.; Bar-Or, Amit; Sadovnick, A. Dessa; Marrie, Ruth Ann; Banwell, Brenda
2013-01-01
Objective: To estimate sample sizes for pediatric multiple sclerosis (MS) trials using new T2 lesion count, annualized relapse rate (ARR), and time to first relapse (TTFR) endpoints. Methods: Poisson and negative binomial models were fit to new T2 lesion and relapse count data, and negative binomial time-to-event and exponential models were fit to TTFR data of 42 children with MS enrolled in a national prospective cohort study. Simulations were performed by resampling from the best-fitting model of new T2 lesion count, number of relapses, or TTFR, under various assumptions of the effect size, trial duration, and model parameters. Results: Assuming a 50% reduction in new T2 lesions over 6 months, 90 patients/arm are required, whereas 165 patients/arm are required for a 40% treatment effect. Sample sizes for 2-year trials using relapse-related endpoints are lower than that for 1-year trials. For 2-year trials and a conservative assumption of overdispersion (ϑ), sample sizes range from 70 patients/arm (using ARR) to 105 patients/arm (TTFR) for a 50% reduction in relapses, and 230 patients/arm (ARR) to 365 patients/arm (TTFR) for a 30% relapse reduction. Assuming a less conservative ϑ, 2-year trials using ARR require 45 patients/arm (60 patients/arm for TTFR) for a 50% reduction in relapses and 145 patients/arm (200 patients/arm for TTFR) for a 30% reduction. Conclusion: Six-month phase II trials using new T2 lesion count as an endpoint are feasible in the pediatric MS population; however, trials powered on ARR or TTFR will need to be 2 years in duration and will require multicentered collaboration. PMID:23966255
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odéen, Henrik, E-mail: h.odeen@gmail.com; Diakite, Mahamadou; Todd, Nick
2014-09-15
Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm³ FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations.
Phosphorus Concentrations in Sequentially Fractionated Soil Samples as Affected by Digestion Methods
do Nascimento, Carlos A. C.; Pagliari, Paulo H.; Schmitt, Djalma; He, Zhongqi; Waldrip, Heidi
2015-01-01
Sequential fractionation has helped improve our understanding of the lability and bioavailability of P in soil. Nevertheless, there have been no reports on how manipulation of the different fractions prior to analysis affects the total P (TP) concentrations measured. This study investigated the effects of sample digestion, filtration, and acidification on the TP concentrations determined by ICP-OES in 20 soil samples. Total P in extracts was determined by ICP-OES either without digestion, following block digestion, or following autoclave digestion. The effects of sample filtration and acidification on undigested alkaline extracts prior to ICP-OES were also evaluated. Results showed that TP concentrations were greatest in the block-digested extracts, though the variability introduced by the block digestion was the highest. Acidification of NaHCO3 extracts resulted in lower TP concentrations, while acidification of NaOH extracts randomly increased or decreased TP concentrations. The precision observed with ICP-OES of undigested extracts suggests this should be the preferred method for TP determination in sequentially extracted samples. Thus, the observations reported in this work should be helpful for appropriate sample handling in P determination, thereby improving the precision of P measurements. The results are also useful for literature data comparison and discussion when there are differences in sample treatments. PMID:26647644
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.
Using the Binomial Series to Prove the Arithmetic Mean-Geometric Mean Inequality
ERIC Educational Resources Information Center
Persky, Ronald L.
2003-01-01
In 1968, Leon Gerber compared (1 + x)^a to the kth partial sum of its binomial series. His result is stated and, as an application of this result, a proof of the arithmetic mean-geometric mean inequality is presented.
Four Bootstrap Confidence Intervals for the Binomial-Error Model.
ERIC Educational Resources Information Center
Lin, Miao-Hsiang; Hsiung, Chao A.
1992-01-01
Four bootstrap methods are identified for constructing confidence intervals for the binomial-error model. The extent to which similar results are obtained and the theoretical foundation of each method and its relevance and ranges of modeling the true score uncertainty are discussed. (SLD)
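One of the simplest such constructions, a parametric bootstrap percentile interval under the binomial-error model, can be sketched as follows; the test length and observed score are hypothetical, and the article compares several variants beyond this one.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical setting: a 40-item test with an observed score of 28. Under
# the binomial-error model, the observed score is Binomial(n_items, true p).
n_items, observed = 40, 28
p_hat = observed / n_items

# Parametric bootstrap: resample scores from the fitted binomial, then take
# percentiles of the resampled proportions as the interval endpoints.
boot = rng.binomial(n_items, p_hat, size=10_000) / n_items
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% percentile interval: ({lo:.3f}, {hi:.3f})")
```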
Phosphorus concentrations in sequentially fractionated soil samples as affected by digestion methods
USDA-ARS?s Scientific Manuscript database
Sequential fractionation has been used for several decades for improving our understanding on the effects of agricultural practices and management on the lability and bioavailability of phosphorus in soil, manure, and other soil amendments. Nevertheless, there have been no reports on how manipulatio...
Possibility and Challenges of Conversion of Current Virus Species Names to Linnaean Binomials.
Postler, Thomas S; Clawson, Anna N; Amarasinghe, Gaya K; Basler, Christopher F; Bavari, Sbina; Benko, Mária; Blasdell, Kim R; Briese, Thomas; Buchmeier, Michael J; Bukreyev, Alexander; Calisher, Charles H; Chandran, Kartik; Charrel, Rémi; Clegg, Christopher S; Collins, Peter L; Juan Carlos, De La Torre; Derisi, Joseph L; Dietzgen, Ralf G; Dolnik, Olga; Dürrwald, Ralf; Dye, John M; Easton, Andrew J; Emonet, Sébastian; Formenty, Pierre; Fouchier, Ron A M; Ghedin, Elodie; Gonzalez, Jean-Paul; Harrach, Balázs; Hewson, Roger; Horie, Masayuki; Jiang, Dàohóng; Kobinger, Gary; Kondo, Hideki; Kropinski, Andrew M; Krupovic, Mart; Kurath, Gael; Lamb, Robert A; Leroy, Eric M; Lukashevich, Igor S; Maisner, Andrea; Mushegian, Arcady R; Netesov, Sergey V; Nowotny, Norbert; Patterson, Jean L; Payne, Susan L; PaWeska, Janusz T; Peters, Clarence J; Radoshitzky, Sheli R; Rima, Bertus K; Romanowski, Victor; Rubbenstroth, Dennis; Sabanadzovic, Sead; Sanfaçon, Hélène; Salvato, Maria S; Schwemmle, Martin; Smither, Sophie J; Stenglein, Mark D; Stone, David M; Takada, Ayato; Tesh, Robert B; Tomonaga, Keizo; Tordo, Noël; Towner, Jonathan S; Vasilakis, Nikos; Volchkov, Viktor E; Wahl-Jensen, Victoria; Walker, Peter J; Wang, Lin-Fa; Varsani, Arvind; Whitfield, Anna E; Zerbini, F Murilo; Kuhn, Jens H
2017-05-01
Botanical, mycological, zoological, and prokaryotic species names follow the Linnaean format, consisting of an italicized Latinized binomen with a capitalized genus name and a lower case species epithet (e.g., Homo sapiens). Virus species names, however, do not follow a uniform format, and, even when binomial, are not Linnaean in style. In this thought exercise, we attempted to convert all currently official names of species included in the virus family Arenaviridae and the virus order Mononegavirales to Linnaean binomials, and to identify and address associated challenges and concerns. Surprisingly, this endeavor was not as complicated or time-consuming as even the authors of this article expected when conceiving the experiment. [Arenaviridae; binomials; ICTV; International Committee on Taxonomy of Viruses; Mononegavirales; virus nomenclature; virus taxonomy.]. Published by Oxford University Press on behalf of Society of Systematic Biologists 2016. This work is written by a US Government employee and is in the public domain in the US.
The arcsine is asinine: the analysis of proportions in ecology.
Warton, David I; Hui, Francis K C
2011-01-01
The arcsine square root transformation has long been standard procedure when analyzing proportional data in ecology, with applications in data sets containing binomial and non-binomial response variables. Here, we argue that the arcsine transform should not be used in either circumstance. For binomial data, logistic regression has greater interpretability and higher power than analyses of transformed data. However, it is important to check the data for additional unexplained variation, i.e., overdispersion, and to account for it via the inclusion of random effects in the model if found. For non-binomial data, the arcsine transform is undesirable on the grounds of interpretability, and because it can produce nonsensical predictions. The logit transformation is proposed as an alternative approach to address these issues. Examples are presented in both cases to illustrate these advantages, comparing various methods of analyzing proportions including untransformed, arcsine- and logit-transformed linear models and logistic regression (with or without random effects). Simulations demonstrate that logistic regression usually provides a gain in power over other methods.
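The contrast the authors draw can be reproduced in a few lines. This sketch fits both an arcsine-transformed linear model and a binomial GLM (logistic regression) to the same simulated proportions; the coefficients and data are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Simulated binomial proportions with one continuous predictor.
n = 50                               # trials per observation
x = np.linspace(-2, 2, 80)
p_true = 1 / (1 + np.exp(-(0.3 + 0.8 * x)))
successes = rng.binomial(n, p_true)

# (a) Classical approach: linear model on the arcsine-square-root scale.
y_arcsine = np.arcsin(np.sqrt(successes / n))
ols = sm.OLS(y_arcsine, sm.add_constant(x)).fit()

# (b) Recommended approach: binomial GLM on the raw (success, failure) counts.
glm = sm.GLM(np.column_stack([successes, n - successes]),
             sm.add_constant(x), family=sm.families.Binomial()).fit()

print(ols.params, glm.params)  # the GLM slope is directly a log-odds ratio
```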
Alekseeva, N P; Alekseev, A O; Vakhtin, Iu B; Kravtsov, V Iu; Kuzovatov, S N; Skorikova, T I
2008-01-01
Distributions of nuclear morphology anomalies in transplantable rhabdomyosarcoma RA-23 cell populations were investigated under the effect of ionizing radiation from 0 to 45 Gy. Internuclear bridges, nuclear protrusions and dumbbell-shaped nuclei were counted as morphological anomalies, and empirical distributions of the number of anomalies per 100 nuclei were used. A reentrant binomial distribution, that is, the distribution of a sum of binomial random variables whose number of summands is itself binomial, was found to be an adequate model. The means of these random variables were named, accordingly, the internal and external average reentrant components, and their maximum likelihood estimates were obtained. The statistical properties of these estimates were investigated by means of statistical modeling. Although the radiation dose correlated equally significantly with the average number of nuclear anomalies, in cell populations examined two to three cell cycles after irradiation in vivo the dose correlated significantly with the internal average reentrant component, whereas in remote descendants of cell transplants irradiated in vitro it correlated with the external one.
On Models for Binomial Data with Random Numbers of Trials
Comulada, W. Scott; Weiss, Robert E.
2010-01-01
Summary: A binomial outcome is a count s of the number of successes out of the total number of independent trials n = s + f, where f is a count of the failures. In many studies, n is a random variable rather than being fixed by design. Joint modeling of (s, f) can provide additional insight into the science and into the probability π of success that cannot be directly incorporated by the logistic regression model. Observations where n = 0 are excluded from the binomial analysis yet may be important to understanding how π is influenced by covariates. Correlation between s and f may exist and be of direct interest. We propose Bayesian multivariate Poisson models for the bivariate response (s, f), correlated through random effects. We extend our models to the analysis of longitudinal and multivariate longitudinal binomial outcomes. Our methodology was motivated by two disparate examples, one from teratology and one from an HIV tertiary intervention study. PMID:17688514
The Difference Calculus and the Negative Binomial Distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Kimiko o; Shenton, LR
2007-01-01
In a previous paper we state the dominant term in the third central moment of the maximum likelihood estimator k̂ of the parameter k in the negative binomial probability function whose probability generating function is (p + 1 - pt)^(-k). A partial sum of the series Σ 1/(k + x)^3 is involved, where x is a negative binomial random variate; in expectation this sum could previously be found only numerically using the computer. Here we give a simple definite integral over (0, 1) for the generalized case. This means that we now have valid expressions for √β₁₁(k) and √β₁₁(p). In addition we use the finite difference operator Δ, and E = 1 + Δ, to set up formulas for low-order moments. Other examples of the operators are quoted relating to the orthogonal set of polynomials associated with the negative binomial probability function used as a weight function.
Turi, Christina E; Murch, Susan J
2013-07-09
Ethnobotanical research and the study of plants used for rituals, ceremonies and to connect with the spirit world have led to the discovery of many novel psychoactive compounds such as nicotine, caffeine, and cocaine. In North America, spiritual and ceremonial uses of plants are well documented and can be accessed online via the University of Michigan's Native American Ethnobotany Database. The objective of the study was to compare Residual, Bayesian, Binomial and Imprecise Dirichlet Model (IDM) analyses of ritual, ceremonial and spiritual plants in Moerman's ethnobotanical database and to identify genera that may be good candidates for the discovery of novel psychoactive compounds. The database was queried with the following format "Family Name AND Ceremonial OR Spiritual" for 263 North American botanical families. Spiritual and ceremonial flora consisted of 86 families with 517 species belonging to 292 genera. Spiritual taxa were then grouped further into ceremonial medicines and items categories. Residual, Bayesian, Binomial and IDM analyses were performed to identify over- and under-utilized families. The 4 statistical approaches were in good agreement when identifying under-utilized families, but large families (>393 species) were underemphasized by the Binomial, Bayesian and IDM approaches for over-utilization. Residual, Binomial, and IDM analyses identified similar families as over-utilized in the medium (92-392 species) and small (<92 species) classes. The families Apiaceae, Asteraceae, Ericaceae, Pinaceae and Salicaceae were identified as significantly over-utilized as ceremonial medicines in medium and large sized families. Analysis of genera within the Apiaceae and Asteraceae suggests that the genera Ligusticum and Artemisia are good candidates for facilitating the discovery of novel psychoactive compounds. The 4 statistical approaches were not consistent in their selection of over-utilized flora. Residual analysis revealed overall trends that were supported by Binomial analysis when families were separated into small, medium and large classes. The Bayesian, Binomial and IDM approaches identified different genera as potentially important. Species belonging to the genera Artemisia and Ligusticum were most consistently identified and may be valuable in future ethnopharmacological studies. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
A powerful and flexible approach to the analysis of RNA sequence count data
Zhou, Yi-Hui; Xia, Kai; Wright, Fred A.
2011-01-01
Motivation: A number of penalization and shrinkage approaches have been proposed for the analysis of microarray gene expression data. Similar techniques are now routinely applied to RNA sequence transcriptional count data, although the value of such shrinkage has not been conclusively established. If penalization is desired, the explicit modeling of mean–variance relationships provides a flexible testing regimen that ‘borrows’ information across genes, while easily incorporating design effects and additional covariates. Results: We describe BBSeq, which incorporates two approaches: (i) a simple beta-binomial generalized linear model, which has not been extensively tested for RNA-Seq data and (ii) an extension of an expression mean–variance modeling approach to RNA-Seq data, involving modeling of the overdispersion as a function of the mean. Our approaches are flexible, allowing for general handling of discrete experimental factors and continuous covariates. We report comparisons with other alternate methods to handle RNA-Seq data. Although penalized methods have advantages for very small sample sizes, the beta-binomial generalized linear model, combined with simple outlier detection and testing approaches, appears to have favorable characteristics in power and flexibility. Availability: An R package containing examples and sample datasets is available at http://www.bios.unc.edu/research/genomic_software/BBSeq Contact: yzhou@bios.unc.edu; fwright@bios.unc.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21810900
Bakbergenuly, Ilyas; Morgenthaler, Stephan
2016-01-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose a bias correction for the arcsine transformation. Our simulations demonstrate that this bias correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. PMID:27192062
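A quick simulation makes the transformation bias visible. The sketch below induces intracluster correlation ρ through a beta-binomial and measures the bias of the log-odds of the estimated probability; the clipping of boundary counts is a crude device for illustration, not the paper's method, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

def logit_bias(p=0.2, rho=0.05, m=50, n_groups=200_000):
    # Beta-binomial counts: drawing each group's success probability from
    # Beta(a, b) below gives mean p and intracluster correlation rho.
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    s = rng.binomial(m, rng.beta(a, b, n_groups))
    s = np.clip(s, 1, m - 1)          # crude guard against infinite logits
    p_hat = s / m
    return np.log(p_hat / (1 - p_hat)).mean() - np.log(p / (1 - p))

for rho in (0.01, 0.05, 0.10):
    print(rho, round(logit_bias(rho=rho), 4))   # bias grows ~linearly in rho
```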
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conover, W.J.; Cox, D.D.; Martz, H.F.
1997-12-01
When using parametric empirical Bayes estimation methods for estimating the binomial or Poisson parameter, the validity of the assumed beta or gamma conjugate prior distribution is an important diagnostic consideration. Chi-square goodness-of-fit tests of the beta or gamma prior hypothesis are developed for use when the binomial sample sizes or Poisson exposure times vary. Nine examples illustrate the application of the methods, using real data from such diverse applications as the loss of feedwater flow rates in nuclear power plants, the probability of failure to run on demand and the failure rates of the high pressure coolant injection systems at US commercial boiling water reactors, the probability of failure to run on demand of emergency diesel generators in US commercial nuclear power plants, the rate of failure of aircraft air conditioners, baseball batting averages, the probability of testing positive for toxoplasmosis, and the probability of tumors in rats. The tests are easily applied in practice by means of corresponding Mathematica® computer programs which are provided.
Dorazio, R.M.; Royle, J. Andrew
2003-01-01
We develop a parameterization of the beta-binomial mixture that provides sensible inferences about the size of a closed population when probabilities of capture or detection vary among individuals. Three classes of mixture models (beta-binomial, logistic-normal, and latent-class) are fitted to recaptures of snowshoe hares for estimating abundance and to counts of bird species for estimating species richness. In both sets of data, rates of detection appear to vary more among individuals (animals or species) than among sampling occasions or locations. The estimates of population size and species richness are sensitive to model-specific assumptions about the latent distribution of individual rates of detection. We demonstrate using simulation experiments that conventional diagnostics for assessing model adequacy, such as deviance, cannot be relied on for selecting classes of mixture models that produce valid inferences about population size. Prior knowledge about sources of individual heterogeneity in detection rates, if available, should be used to help select among classes of mixture models that are to be used for inference.
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
Selecting Tools to Model Integer and Binomial Multiplication
ERIC Educational Resources Information Center
Pratt, Sarah Smitherman; Eddy, Colleen M.
2017-01-01
Mathematics teachers frequently provide concrete manipulatives to students during instruction; however, the rationale for using certain manipulatives in conjunction with concepts may not be explored. This article focuses on area models that are currently used in classrooms to provide concrete examples of integer and binomial multiplication. The…
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
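The kind of classification-error simulation reported above can be sketched as follows, using a beta-binomial to induce intracluster correlation in a 67 × 3 design. The decision rule (comparing the sample proportion with the action threshold) and all parameter values are simplifying assumptions, not the study's exact LQAS rule.

```python
import numpy as np

rng = np.random.default_rng(6)

def p_classify_high(prev, thresh, clusters=67, per_cluster=3,
                    rho=0.1, reps=20_000):
    """Probability of classifying prevalence as >= thresh under
    beta-binomial clusters; an illustrative stand-in for the 67x3 design."""
    a = prev * (1 - rho) / rho
    b = (1 - prev) * (1 - rho) / rho
    ps = rng.beta(a, b, size=(reps, clusters))          # cluster-level risk
    cases = rng.binomial(per_cluster, ps).sum(axis=1)   # total cases found
    return np.mean(cases / (clusters * per_cluster) >= thresh)

# With true GAM prevalence below a 10% action threshold, how often would
# the survey wrongly trigger action?
print(p_classify_high(prev=0.07, thresh=0.10))
```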
Sequential extraction procedures are used to determine the solid-phase association in which elements of interest exist in soil and sediment matrices. Foundational work by Tessier et al. (1) has found widespread acceptance and has worked tolerably as an operational definition for...
Sequential Requests and the Problem of Message Sampling.
ERIC Educational Resources Information Center
Cantrill, James Gerard
S. Jackson and S. Jacobs's criticism of "single message" designs in communication research served as a framework for a study that examined the differences between various sequential request paradigms. The study sought to answer the following questions: (1) What were the most naturalistic request sequences assured to replicate…
Bennett, Bradley C; Husby, Chad E
2008-03-28
Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet the requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R² from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
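The binomial test for an individual family is straightforward to run. In this sketch the flora and family counts are invented for illustration; under the null hypothesis, the proportion of medicinal species in a family equals the flora-wide proportion.

```python
from scipy.stats import binomtest

# Invented counts: a flora of 5,000 species of which 517 are used
# medicinally, and one family of 120 species with 25 medicinal uses.
flora_total, used_total = 5000, 517
family_size, family_used = 120, 25

# Null hypothesis: family membership is irrelevant, so the family's share
# of medicinal species equals the flora-wide proportion. A small p-value
# flags an over- or under-utilized family.
result = binomtest(family_used, family_size, p=used_total / flora_total)
print(result.pvalue)
```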
Sequential Multiplex Analyte Capturing for Phosphoprotein Profiling*
Poetz, Oliver; Henzler, Tanja; Hartmann, Michael; Kazmaier, Cornelia; Templin, Markus F.; Herget, Thomas; Joos, Thomas O.
2010-01-01
Microarray-based sandwich immunoassays can simultaneously detect dozens of proteins. However, their use in quantifying large numbers of proteins is hampered by cross-reactivity and incompatibilities caused by the immunoassays themselves. Sequential multiplex analyte capturing addresses these problems by repeatedly probing the same sample with different sets of antibody-coated, magnetic suspension bead arrays. As a miniaturized immunoassay format, suspension bead array-based assays fulfill the criteria of the ambient analyte theory, and our experiments reveal that the analyte concentrations are not significantly changed. The value of sequential multiplex analyte capturing was demonstrated by probing tumor cell line lysates for the abundance of seven different receptor tyrosine kinases and their degree of phosphorylation and by measuring the complex phosphorylation pattern of the epidermal growth factor receptor in the same sample from the same cavity. PMID:20682761
dos Santos, Luciana B O; Infante, Carlos M C; Masini, Jorge C
2010-03-01
This work describes the development and optimization of a sequential injection method to automate the determination of paraquat by square-wave voltammetry employing a hanging mercury drop electrode. Automation by sequential injection enhanced the sampling throughput, improving the sensitivity and precision of the measurements as a consequence of the highly reproducible and efficient conditions of mass transport of the analyte toward the electrode surface. For instance, 212 analyses can be made per hour if the sample/standard solution is prepared off-line and the sequential injection system is used just to inject the solution towards the flow cell. In-line sample conditioning reduces the sampling frequency to 44 h⁻¹. Experiments were performed in 0.10 M NaCl, which was the carrier solution, using a frequency of 200 Hz, a pulse height of 25 mV, a potential step of 2 mV, and a flow rate of 100 µL s⁻¹. For a concentration range between 0.010 and 0.25 mg L⁻¹, the current (i_p, µA) read at the potential corresponding to the peak maximum fitted the following linear equation with the paraquat concentration (mg L⁻¹): i_p = (-20.5 ± 0.3)·C_paraquat - (0.02 ± 0.03). The limits of detection and quantification were 2.0 and 7.0 µg L⁻¹, respectively. The accuracy of the method was evaluated by recovery studies using spiked water samples that were also analyzed by molecular absorption spectrophotometry after reduction of paraquat with sodium dithionite in an alkaline medium. No evidence of statistically significant differences between the two methods was observed at the 95% confidence level.
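Given the published calibration line, converting a measured peak current into a paraquat concentration is a one-line inversion; the example current below is hypothetical.

```python
# Inverting the published calibration line i_p = (-20.5)·C - 0.02
# (i_p in µA, C in mg/L); the example current is hypothetical.
slope, intercept = -20.5, -0.02

def paraquat_mg_per_L(i_p_uA):
    return (i_p_uA - intercept) / slope

print(paraquat_mg_per_L(-2.5))   # ~0.121 mg/L, inside the linear range
```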
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, B.; Li, W.; Wang, G.
2007-07-01
Sequential acid leaching was used to leach minerals and the trace elements they contain. One-step leaching uses concentrated nitric acid as the solvent, while three-step leaching uses 5M hydrochloric acid, concentrated hydrofluoric acid, and concentrated hydrochloric acid as solvents. Sequential acid leaching combining the three-step and one-step procedures was also examined. The results showed that one-step leaching could leach over 80% of the arsenic from coal samples, and could also leach mercury to a certain degree. During one-step leaching, little chromium is removed, but it is available to leach by three-step leaching; and during the sequential acid leaching by three- and one-step leaching, almost 98% of the ash is leached. The results of acid leaching could also give detailed information on the modes of occurrence of As, Cr, and Hg, which could be classified into silicate association, pyrite association, organic association, and carbonate and sulfate association. Over half of the chromium in the three coals is associated with organic matter and the rest is associated with silicates. The modes of occurrence of arsenic and mercury are mainly associated with different mineral matters depending on the coal samples studied.
Estimating the Parameters of the Beta-Binomial Distribution.
ERIC Educational Resources Information Center
Wilcox, Rand R.
1979-01-01
For some situations the beta-binomial distribution might be used to describe the marginal distribution of test scores for a particular population of examinees. Several different methods of approximating the maximum likelihood estimate were investigated, and it was found that the Newton-Raphson method should be used when it yields admissable…
Multilevel Models for Binary Data
ERIC Educational Resources Information Center
Powers, Daniel A.
2012-01-01
The methods and models for categorical data analysis cover considerable ground, ranging from regression-type models for binary and binomial data, count data, to ordered and unordered polytomous variables, as well as regression models that mix qualitative and continuous data. This article focuses on methods for binary or binomial data, which are…
Pliego, Jorge; Mateos, Juan Carlos; Rodriguez, Jorge; Valero, Francisco; Baeza, Mireia; Femat, Ricardo; Camacho, Rosa; Sandoval, Georgina; Herrera-López, Enrique J
2015-01-27
Lipases and esterases are biocatalysts used at the laboratory and industrial level. To obtain the maximum yield in a bioprocess, it is important to measure key variables, such as enzymatic activity. The conventional method for monitoring hydrolytic activity is to take out a sample from the bioreactor to be analyzed off-line at the laboratory. The disadvantage of this approach is the long time required to recover the information from the process, hindering the possibility to develop control systems. New strategies to monitor lipase/esterase activity are necessary. In this context and in the first approach, we proposed a lab-made sequential injection analysis system to analyze off-line samples from shake flasks. Lipase/esterase activity was determined using p-nitrophenyl butyrate as the substrate. The sequential injection analysis allowed us to measure the hydrolytic activity from a sample without dilution in a linear range from 0.05-1.60 U/mL, with the capability to reach sample dilutions up to 1000 times, a sampling frequency of five samples/h, with a kinetic reaction of 5 min and a relative standard deviation of 8.75%. The results are promising to monitor lipase/esterase activity in real time, in which optimization and control strategies can be designed.
Predictive accuracy of particle filtering in dynamic models supporting outbreak projections.
Safarishahrbijari, Anahita; Teyhouee, Aydin; Waldner, Cheryl; Liu, Juxin; Osgood, Nathaniel D
2017-09-26
While a new generation of computational statistics algorithms and availability of data streams raises the potential for recurrently regrounding dynamic models with incoming observations, the effectiveness of such arrangements can be highly subject to specifics of the configuration (e.g., frequency of sampling and representation of behaviour change), and there has been little attempt to identify effective configurations. Combining dynamic models with particle filtering, we explored a solution focusing on creating quickly formulated models regrounded automatically and recurrently as new data becomes available. Given a latent underlying case count, we assumed that observed incident case counts followed a negative binomial distribution. In accordance with the condensation algorithm, each such observation led to updating of particle weights. We evaluated the effectiveness of various particle filtering configurations against each other and against an approach without particle filtering according to the accuracy of the model in predicting future prevalence, given data to a certain point and a norm-based discrepancy metric. We examined the effectiveness of particle filtering under varying times between observations, negative binomial dispersion parameters, and rates with which the contact rate could evolve. We observed that more frequent observations of empirical data yielded super-linearly improved accuracy in model predictions. We further found that for the data studied here, the most favourable assumptions to make regarding the parameters associated with the negative binomial distribution and changes in contact rate were robust across observation frequency and the observation point in the outbreak. Combining dynamic models with particle filtering can perform well in projecting future evolution of an outbreak. Most importantly, the remarkable improvements in predictive accuracy resulting from more frequent sampling suggest that investments to achieve efficient reporting mechanisms may be more than paid back by improved planning capacity. The robustness of the results on particle filter configuration in this case study suggests that it may be possible to formulate effective standard guidelines and regularized approaches for such techniques in particular epidemiological contexts. Most importantly, the work tentatively suggests potential for health decision makers to secure strong guidance when anticipating outbreak evolution for emerging infectious diseases by combining even very rough models with particle filtering method.
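A minimal bootstrap particle filter with a negative binomial observation model, in the spirit of the condensation algorithm mentioned above, fits in a few lines. The latent growth dynamics, dispersion parameter, and observation series below are all illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(7)

n_particles, r = 2000, 10.0                      # r: assumed NB dispersion
particles = rng.poisson(20.0, n_particles).astype(float)  # latent case counts

for y in [18, 25, 31, 40, 52]:                   # incoming incident counts
    # Propagate: multiplicative noise stands in for transmission dynamics.
    particles *= np.exp(rng.normal(0.0, 0.2, n_particles))
    # Reweight: NB(r, p) parameterized so its mean equals the latent count.
    p = r / (r + np.maximum(particles, 1e-9))
    w = nbinom.pmf(y, r, p)
    w /= w.sum()
    # Resample (condensation step), then report the filtered estimate.
    particles = particles[rng.choice(n_particles, n_particles, p=w)]
    print(y, round(particles.mean(), 1))
```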
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but the washing technique detected significantly higher number of thrips larvae compared with the visual sampling. Visual sampling detected the highest number of fleahoppers followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between vacuum and sweep net methods. However, based on fixed precision cost reliability, the sweep net sampling was the most cost-effective method followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's Power Law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decision based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
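Minimum sample sizes of the kind tabulated above follow from Taylor's power law (variance = a * mean^b) via the standard fixed-precision formula n = a * m^(b-2) / D^2; the coefficients below are invented for illustration, not the fitted values from this study.

```python
import numpy as np

# Illustrative Taylor's power law coefficients and target precision
# (D = SE/mean); none of these values come from the study above.
a, b, D = 2.0, 1.4, 0.25

def min_sample_size(mean_density):
    # n = a * m**(b - 2) / D**2, rounded up to a whole sample unit.
    return int(np.ceil(a * mean_density ** (b - 2) / D ** 2))

for m in (1, 5, 10):
    print(m, min_sample_size(m))   # required n shrinks as density rises
```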
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
Rigamonti, Ivo E; Brambilla, Carla; Colleoni, Emanuele; Jermini, Mauro; Trivellone, Valeria; Baumgärtner, Johann
2016-04-01
The paper deals with the study of the spatial distribution and the design of sampling plans for estimating nymph densities of the grape leafhopper Scaphoideus titanus Ball in vine plant canopies. In a reference vineyard sampled for model parameterization, leaf samples were repeatedly taken according to a multistage, stratified, random sampling procedure, and data were subjected to an ANOVA. There were no significant differences in density neither among the strata within the vineyard nor between the two strata with basal and apical leaves. The significant differences between densities on trunk and productive shoots led to the adoption of two-stage (leaves and plants) and three-stage (leaves, shoots, and plants) sampling plans for trunk shoots- and productive shoots-inhabiting individuals, respectively. The mean crowding to mean relationship used to analyze the nymphs spatial distribution revealed aggregated distributions. In both the enumerative and the sequential enumerative sampling plans, the number of leaves of trunk shoots, and of leaves and shoots of productive shoots, was kept constant while the number of plants varied. In additional vineyards data were collected and used to test the applicability of the distribution model and the sampling plans. The tests confirmed the applicability 1) of the mean crowding to mean regression model on the plant and leaf stages for representing trunk shoot-inhabiting distributions, and on the plant, shoot, and leaf stages for productive shoot-inhabiting nymphs, 2) of the enumerative sampling plan, and 3) of the sequential enumerative sampling plan. In general, sequential enumerative sampling was more cost efficient than enumerative sampling.
Radial basis function and its application in tourism management
NASA Astrophysics Data System (ADS)
Hu, Shan-Feng; Zhu, Hong-Bin; Zhao, Lei
2018-05-01
In this work, several applications and the performance of the radial basis function (RBF) are briefly reviewed. After that, the binomial function combined with three different RBFs, namely the multiquadric (MQ), inverse quadric (IQ) and inverse multiquadric (IMQ) distributions, is adopted to model the tourism data of Huangshan in China. Simulation results show that all the models match the sample data well. It is found that among the three models, the IMQ-RBF model is more suitable for forecasting the tourist flow.
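The three kernels are simple to implement. This sketch interpolates a synthetic monthly series with MQ, IQ, and IMQ basis functions; the shape parameter and data are assumptions, and the paper's binomial-function combination is not reproduced here.

```python
import numpy as np

# The three RBF kernels named in the abstract, as functions of distance r
# and shape parameter c.
kernels = {
    "MQ":  lambda r, c: np.sqrt(r**2 + c**2),
    "IQ":  lambda r, c: 1.0 / (r**2 + c**2),
    "IMQ": lambda r, c: 1.0 / np.sqrt(r**2 + c**2),
}

t = np.arange(12, dtype=float)                   # months
y = 50 + 30 * np.sin(2 * np.pi * t / 12) + t     # synthetic tourist flow

c = 2.0                                          # shape parameter (assumed)
r = np.abs(t[:, None] - t[None, :])              # pairwise distances
for name, phi in kernels.items():
    w = np.linalg.solve(phi(r, c), y)            # interpolation weights
    y_hat = phi(np.abs(6.5 - t), c) @ w          # prediction at t = 6.5
    print(name, round(y_hat, 2))
```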
Lee, JuHee; Park, Chang Gi; Choi, Moonki
2016-05-01
This study was conducted to identify risk factors that influence regular exercise among patients with Parkinson's disease in Korea. Parkinson's disease is prevalent in the elderly, and may lead to a sedentary lifestyle. Exercise can enhance physical and psychological health. However, patients with Parkinson's disease are less likely to exercise than are other populations due to physical disability. A secondary data analysis and cross-sectional descriptive study were conducted. A convenience sample of 106 patients with Parkinson's disease was recruited at an outpatient neurology clinic of a tertiary hospital in Korea. Demographic characteristics, disease-related characteristics (including disease duration and motor symptoms), self-efficacy for exercise, balance, and exercise level were investigated. Negative binomial regression and zero-inflated negative binomial regression for exercise count data were utilized to determine factors involved in exercise. The mean age of participants was 65.85 ± 8.77 years, and the mean duration of Parkinson's disease was 7.23 ± 6.02 years. Most participants indicated that they engaged in regular exercise (80.19%). Approximately half of participants exercised at least 5 days per week for 30 min, as recommended (51.9%). Motor symptoms were a significant predictor of exercise in the count model, and self-efficacy for exercise was a significant predictor of exercise in the zero model. Severity of motor symptoms was related to frequency of exercise. Self-efficacy contributed to the probability of exercise. Symptom management and improvement of self-efficacy for exercise are important to encourage regular exercise in patients with Parkinson's disease. Copyright © 2015 Elsevier Inc. All rights reserved.
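Zero-inflated negative binomial regression with separate count and zero components is available in statsmodels. The sketch below mirrors the structure of the findings (motor symptoms in the count model, self-efficacy in the zero model) on synthetic data; the variable scales and coefficients are invented, not estimates from the study.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(8)

# Synthetic stand-in for the study design: weekly exercise counts in which
# self-efficacy drives the structural zeros and motor-symptom severity
# drives the count level.
n = 300
motor = rng.normal(size=n)
efficacy = rng.normal(size=n)
never = rng.random(n) < 1 / (1 + np.exp(efficacy))    # zero component
mu = np.exp(1.2 - 0.3 * motor)                        # count component mean
counts = rng.negative_binomial(5, 5 / (5 + mu))
y = np.where(never, 0, counts)

X = sm.add_constant(motor)       # predictors for the count model
Z = sm.add_constant(efficacy)    # predictors for the zero-inflation model
fit = ZeroInflatedNegativeBinomialP(y, X, exog_infl=Z).fit(maxiter=200,
                                                           disp=False)
print(fit.params)
```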
USDA-ARS?s Scientific Manuscript database
Effective Salmonella control in broilers is important from the standpoint of both consumer protection and industry viability. We investigated associations between Salmonella recovery from different sample types collected at sequential stages of one grow-out from the broiler flock and production env...
ERIC Educational Resources Information Center
Bain, Sherry K.
1993-01-01
Analysis of Kaufman Assessment Battery for Children (K-ABC) Sequential and Simultaneous Processing scores of 94 children (ages 6-12) with learning disabilities produced factor patterns generally supportive of the traditional K-ABC Mental Processing structure with the exception of Spatial Memory. The sample exhibited relative processing strengths…
Binomial Coefficients Modulo a Prime--A Visualization Approach to Undergraduate Research
ERIC Educational Resources Information Center
Bardzell, Michael; Poimenidou, Eirini
2011-01-01
In this article we present, as a case study, results of undergraduate research involving binomial coefficients modulo a prime "p." We will discuss how undergraduates were involved in the project, even with a minimal mathematical background beforehand. There are two main avenues of exploration described to discover these binomial…
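One classical entry point to this topic is Lucas' theorem, which reduces a binomial coefficient modulo a prime p to a product of digit-wise coefficients in base p; a compact implementation follows (the article's specific explorations are not reproduced here).

```python
from math import comb

def binom_mod_p(n, k, p):
    """C(n, k) mod a prime p via Lucas' theorem: write n and k in base p
    and multiply the digit-wise binomial coefficients modulo p."""
    result = 1
    while n or k:
        n, nd = divmod(n, p)
        k, kd = divmod(k, p)
        if kd > nd:
            return 0                      # any digit overflow forces C = 0 mod p
        result = result * comb(nd, kd) % p
    return result

print(binom_mod_p(1000, 350, 7), comb(1000, 350) % 7)  # should agree
```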
Using the β-binomial distribution to characterize forest health
S.J. Zarnoch; R.L. Anderson; R.M. Sheffield
1995-01-01
The β-binomial distribution is suggested as a model for describing and analyzing the dichotomous data obtained from programs monitoring the health of forests in the United States. Maximum likelihood estimation of the parameters is given as well as asymptotic likelihood ratio tests. The procedure is illustrated with data on dogwood anthracnose infection (caused...
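Maximum likelihood estimation for the β-binomial is a small optimization problem. The sketch below fits (α, β) to hypothetical per-plot infection counts; the data are simulated, not the dogwood anthracnose survey.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, gammaln

rng = np.random.default_rng(9)

# Hypothetical plot data: x infected trees out of n examined per plot.
n = np.full(40, 10)
x = rng.binomial(n, rng.beta(2.0, 6.0, 40))

def negloglik(theta):
    a, b = np.exp(theta)        # optimize on the log scale for positivity
    # Beta-binomial log-pmf: log C(n, x) + log B(x+a, n-x+b) - log B(a, b)
    return -np.sum(gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)
                   + betaln(x + a, n - x + b) - betaln(a, b))

fit = minimize(negloglik, x0=[0.0, 0.0])
print(np.exp(fit.x))            # maximum likelihood estimates of (a, b)
```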
Integer Solutions of Binomial Coefficients
ERIC Educational Resources Information Center
Gilbertson, Nicholas J.
2016-01-01
A good formula is like a good story, rich in description, powerful in communication, and eye-opening to readers. The formula presented in this article for determining the coefficients of the binomial expansion of (x + y)^n is one such "good read." The beauty of this formula is in its simplicity--both describing a quantitative situation…
Confidence Intervals for Weighted Composite Scores under the Compound Binomial Error Model
ERIC Educational Resources Information Center
Kim, Kyung Yong; Lee, Won-Chan
2018-01-01
Reporting confidence intervals with test scores helps test users make important decisions about examinees by providing information about the precision of test scores. Although a variety of estimation procedures based on the binomial error model are available for computing intervals for test scores, these procedures assume that items are randomly…
Cai, Qing; Lee, Jaeyoung; Eluru, Naveen; Abdel-Aty, Mohamed
2016-08-01
This study attempts to explore the viability of dual-state models (i.e., zero-inflated and hurdle models) for traffic analysis zone (TAZ)-based pedestrian and bicycle crash frequency analysis. Additionally, spatial spillover effects are explored in the models by employing exogenous variables from neighboring zones. The dual-state models, such as zero-inflated negative binomial and hurdle negative binomial models (with and without spatial effects), are compared with the conventional single-state model (i.e., negative binomial). The model comparison for pedestrian and bicycle crashes revealed that the models that considered observed spatial effects perform better than the models that did not. Across the models with spatial spillover effects, the dual-state models, especially the zero-inflated negative binomial model, offered better performance compared to single-state models. Moreover, the model results clearly highlighted the importance of various traffic, roadway, and sociodemographic characteristics of the TAZ as well as neighboring TAZs on pedestrian and bicycle crash frequency. Copyright © 2016 Elsevier Ltd. All rights reserved.
Jiang, Honghua; Ni, Xiao; Huster, William; Heilmann, Cory
2015-01-01
Hypoglycemia has long been recognized as a major barrier to achieving normoglycemia with intensive diabetic therapies. It is a common safety concern for diabetes patients. Therefore, it is important to apply appropriate statistical methods when analyzing hypoglycemia data. Here, we carried out bootstrap simulations to investigate the performance of four commonly used statistical models (Poisson, negative binomial, analysis of covariance [ANCOVA], and rank ANCOVA) based on data from a diabetes clinical trial. The zero-inflated Poisson (ZIP) model and zero-inflated negative binomial (ZINB) model were also evaluated. Simulation results showed that the Poisson model inflated type I error, while the negative binomial model was overly conservative. However, after adjusting for dispersion, both Poisson and negative binomial models yielded slightly inflated type I errors that were close to the nominal level, with reasonable power. The ANCOVA model provided reasonable control of type I error. The rank ANCOVA model was associated with the greatest power and with reasonable control of type I error. Inflated type I error was observed with the ZIP and ZINB models.
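The type I error comparison lends itself to a small simulation. The sketch below uses invented settings (mean 2, dispersion 1, 50 patients per arm) to generate overdispersed counts under no treatment effect and tallies rejections for Poisson and negative binomial GLMs; for simplicity the NB dispersion is fixed at its true value rather than estimated, unlike in the bootstrap study above:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n_arm, n_sim, mu, r = 50, 500, 2.0, 1.0  # var = mu + mu**2/r, heavily overdispersed
    X = sm.add_constant(np.repeat([0.0, 1.0], n_arm))
    rej = {"poisson": 0, "negbin": 0}
    for _ in range(n_sim):
        y = rng.negative_binomial(r, r / (r + mu), size=2 * n_arm)  # no true group effect
        rej["poisson"] += sm.GLM(y, X, family=sm.families.Poisson()).fit().pvalues[1] < 0.05
        rej["negbin"] += sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1 / r)
                                ).fit().pvalues[1] < 0.05
    print({k: v / n_sim for k, v in rej.items()})  # Poisson rate lands well above 0.05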
Pricing American Asian options with higher moments in the underlying distribution
NASA Astrophysics Data System (ADS)
Lo, Keng-Hsin; Wang, Kehluh; Hsu, Ming-Feng
2009-01-01
We develop a modified Edgeworth binomial model with higher moment consideration for pricing American Asian options. With lognormal underlying distribution for benchmark comparison, our algorithm is as precise as that of Chalasani et al. [P. Chalasani, S. Jha, F. Egriboyun, A. Varikooty, A refined binomial lattice for pricing American Asian options, Rev. Derivatives Res. 3 (1) (1999) 85-105] if the number of the time steps increases. If the underlying distribution displays negative skewness and leptokurtosis as often observed for stock index returns, our estimates can work better than those in Chalasani et al. [P. Chalasani, S. Jha, F. Egriboyun, A. Varikooty, A refined binomial lattice for pricing American Asian options, Rev. Derivatives Res. 3 (1) (1999) 85-105] and are very similar to the benchmarks in Hull and White [J. Hull, A. White, Efficient procedures for valuing European and American path-dependent options, J. Derivatives 1 (Fall) (1993) 21-31]. The numerical analysis shows that our modified Edgeworth binomial model can value American Asian options with greater accuracy and speed given higher moments in their underlying distribution.
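As background to the lattice machinery, here is a plain Cox-Ross-Rubinstein binomial pricer for an American put under lognormal dynamics. It is a baseline sketch only: it omits both the Edgeworth moment adjustment and the path-dependent averaging state that Asian options require.

    import numpy as np

    def crr_american_put(S0, K, r, sigma, T, steps):
        """American put on a Cox-Ross-Rubinstein binomial lattice."""
        dt = T / steps
        u = np.exp(sigma * np.sqrt(dt))
        d = 1.0 / u
        q = (np.exp(r * dt) - d) / (u - d)  # risk-neutral up-move probability
        disc = np.exp(-r * dt)
        # stock prices and option payoffs at maturity
        S = S0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
        V = np.maximum(K - S, 0.0)
        # backward induction with an early-exercise check at every node
        for i in range(steps - 1, -1, -1):
            S = S0 * u ** np.arange(i, -1, -1) * d ** np.arange(0, i + 1)
            V = np.maximum(disc * (q * V[:-1] + (1 - q) * V[1:]), K - S)
        return V[0]

    print(crr_american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=500))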
Discrimination of numerical proportions: A comparison of binomial and Gaussian models.
Raidvee, Aire; Lember, Jüri; Allik, Jüri
2017-01-01
Observers discriminated the numerical proportion of two sets of elements (N = 9, 13, 33, and 65) that differed either by color or orientation. According to the standard Thurstonian approach, the accuracy of proportion discrimination is determined by irreducible noise in the nervous system that stochastically transforms the number of presented visual elements onto a continuum of psychological states representing numerosity. As an alternative to this customary approach, we propose a Thurstonian-binomial model, which assumes discrete perceptual states, each of which is associated with a certain visual element. It is shown that the probability β with which each visual element can be noticed and registered by the perceptual system can explain data of numerical proportion discrimination at least as well as the continuous Thurstonian-Gaussian model, and better, if the greater parsimony of the Thurstonian-binomial model is taken into account using AIC model selection. We conclude that the Gaussian and binomial models represent two different fundamental principles, internal noise versus the use of only a fraction of the available information, which are both plausible descriptions of visual perception.
Zero adjusted models with applications to analysing helminths count data.
Chipeta, Michael G; Ngwira, Bagrey M; Simoonga, Christopher; Kazembe, Lawrence N
2014-11-27
It is common in public health and epidemiology that the outcome of interest is counts of event occurrences. Analysing these data using classical linear models is mostly inappropriate, even after transformation of outcome variables, due to overdispersion. Zero-adjusted mixture count models such as zero-inflated and hurdle count models are applied to count data when over-dispersion and excess zeros exist. The main objective of the current paper is to apply such models to analyse risk factors associated with human helminths (S. haematobium), particularly in a case where there is a high proportion of zero counts. The data were collected during a community-based randomised controlled trial assessing the impact of mass drug administration (MDA) with praziquantel in Malawi, and a school-based cross-sectional epidemiology survey in Zambia. Count data models including traditional (Poisson and negative binomial) models, zero modified models (zero inflated Poisson and zero inflated negative binomial) and hurdle models (Poisson logit hurdle and negative binomial logit hurdle) were fitted and compared. Using the Akaike information criterion (AIC), the negative binomial logit hurdle (NBLH) and zero inflated negative binomial (ZINB) showed the best performance in both datasets. With regard to capturing zero counts, these models performed better than the other models. This paper showed that zero modified NBLH and ZINB models are more appropriate methods for the analysis of data with excess zeros. The choice between the hurdle and zero-inflated models should be based on the aim and endpoints of the study.
Coleman, Marlize; Coleman, Michael; Mabuza, Aaron M; Kok, Gerdalize; Coetzee, Maureen; Durrheim, David N
2008-04-27
To evaluate the performance of a novel malaria outbreak identification system in the epidemic-prone rural area of Mpumalanga Province, South Africa, for timely identification of malaria outbreaks and guiding integrated public health responses. Using five years of historical notification data, two binomial thresholds were determined for each primary health care facility in the highest malaria risk area of Mpumalanga Province. Whenever the thresholds were exceeded at health facility level (tier 1), primary health care staff notified the malaria control programme, which then confirmed adequate stocks of malaria treatment to manage potential increased cases. The cases were followed up at household level to verify the likely source of infection. The binomial thresholds were reviewed at village/town level (tier 2) to determine whether additional response measures were required. In addition, an automated electronic outbreak identification system at town/village level (tier 2) was integrated into the case notification database (tier 3) to ensure that unexpected increases in case notification were not missed. The performance of these binomial outbreak thresholds was evaluated against other currently recommended thresholds using retrospective data. The acceptability of the system at primary health care level was evaluated through structured interviews with health facility staff. Eighty-four percent of health facilities reported outbreaks within 24 hours (n = 95), 92% (n = 104) within 48 hours and 100% (n = 113) within 72 hours. Appropriate responses to all malaria outbreaks (n = 113, tier 1; n = 46, tier 2) were achieved within 24 hours. The system was positively viewed by all health facility staff. When compared to other epidemiological systems for a specified 12-month outbreak season (June 2003 to July 2004), the binomial exact thresholds produced one false weekly outbreak, the C-sum 12 false weekly outbreaks, and the mean + 2 SD nine false weekly outbreaks. Exceeding the binomial level 1 threshold triggered an alert four weeks prior to an outbreak, but exceeding the binomial level 2 threshold identified an outbreak as it occurred. The malaria outbreak surveillance system using binomial thresholds achieved its primary goal of identifying outbreaks early, facilitating appropriate local public health responses aimed at averting a possible large-scale epidemic in a low, and unstable, malaria transmission setting.
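The binomial exact threshold idea can be sketched in a few lines: from a facility's historical baseline positivity, flag any week whose case count has an upper-tail binomial probability at or below a chosen alpha. The baseline rate, weekly denominator, and alpha below are illustrative and are not the Mpumalanga values:

    from scipy.stats import binom

    def outbreak_threshold(n_tested, p_baseline, alpha=0.05):
        """Smallest weekly case count c with P(X >= c) <= alpha under the baseline."""
        return int(binom.ppf(1 - alpha, n_tested, p_baseline)) + 1

    # hypothetical facility: 2% baseline positivity from five years of notifications,
    # 400 fever cases tested in the current week
    print(outbreak_threshold(400, 0.02))  # counts at or above this trigger the alert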
Smisc - A collection of miscellaneous functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landon Sego, PNNL
2015-08-31
A collection of functions for statistical computing and data manipulation. These include routines for rapidly aggregating heterogeneous matrices, manipulating file names, loading R objects, sourcing multiple R files, formatting datetimes, multi-core parallel computing, stream editing, specialized plotting, etc. Selected functions:
Smisc-package: A collection of miscellaneous functions
allMissing: Identifies missing rows or columns in a data frame or matrix
as.numericSilent: Silent wrapper for coercing a vector to numeric
comboList: Produces all possible combinations of a set of linear model predictors
cumMax: Computes the maximum of the vector up to the current index
cumsumNA: Computes the cumulative sum of a vector without propagating NAs
d2binom: Probability functions for the sum of two independent binomials
dataIn: A flexible way to import data into R
dbb: The Beta-Binomial Distribution
df2list: Row-wise conversion of a data frame to a list
dfplapply: Parallelized single-row processing of a data frame
dframeEquiv: Examines the equivalence of two data frames or matrices
dkbinom: Probability functions for the sum of k independent binomials
factor2character: Converts all factor variables in a data frame to character variables
findDepMat: Identifies linearly dependent rows or columns in a matrix
formatDT: Converts date or datetime strings into alternate formats
getExtension, getPath, grabLast: Filename manipulations: remove or extract the extension or path
ifelse1: Non-vectorized version of ifelse
integ: Simple numerical integration routine
interactionPlot: Two-way interaction plot with error bars
linearMap: Linear mapping of a numerical vector or scalar
list2df: Converts a list to a data frame
loadObject: Loads and returns the object(s) in an ".Rdata" file
more: Displays the contents of a file to the R terminal
movAvg2: Calculates the moving average using a 2-sided window
openDevice: Opens a graphics device based on the filename extension
p2binom: Probability functions for the sum of two independent binomials
padZero: Pads a vector of numbers with zeros
parseJob: Parses a collection of elements into (almost) equal-sized groups
pbb: The Beta-Binomial Distribution
pcbinom: A continuous version of the binomial cdf
pkbinom: Probability functions for the sum of k independent binomials
plapply: Simple parallelization of lapply
plotFun: Plots one or more functions on a single plot
PowerData: An example of power data
pvar: Prints the name and value of one or more objects
qbb, rbb: The Beta-Binomial Distribution
...and numerous others (space limits reporting).
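Several of the listed routines (d2binom, p2binom, dkbinom, pkbinom) concern the distribution of a sum of independent binomials, which has no single-binomial closed form when the success probabilities differ. Below is a sketch of the same computation by direct convolution; the function name d2binom_pmf is ours, not the package's API:

    import numpy as np
    from scipy.stats import binom

    def d2binom_pmf(x, n1, p1, n2, p2):
        """pmf of X1 + X2 for independent X1 ~ Bin(n1, p1) and X2 ~ Bin(n2, p2)."""
        pmf1 = binom.pmf(np.arange(n1 + 1), n1, p1)
        pmf2 = binom.pmf(np.arange(n2 + 1), n2, p2)
        return np.convolve(pmf1, pmf2)[x]  # support is 0..n1+n2

    # sanity check: equal success probabilities collapse to a single binomial
    assert np.isclose(d2binom_pmf(3, 5, 0.3, 10, 0.3), binom.pmf(3, 15, 0.3))
    print(d2binom_pmf(3, 5, 0.2, 10, 0.1))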
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
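Sequential decision bias is easy to reproduce in simulation. In the sketch below, a follow-up study is run only when the first estimate is positive, and the equal-weight pooled estimate is then biased even though each individual study is unbiased; the sample size and continuation rule are arbitrary choices for illustration:

    import numpy as np

    rng = np.random.default_rng(3)
    true_effect, se = 0.0, 1.0 / np.sqrt(50)  # per-study standard error
    pooled = []
    for _ in range(200000):
        d1 = rng.normal(true_effect, se)
        if d1 > 0:  # second study conducted only after a promising first result
            d2 = rng.normal(true_effect, se)
            pooled.append((d1 + d2) / 2)  # fixed-effect pooling with equal weights
        else:
            pooled.append(d1)
    print(np.mean(pooled))  # clearly below zero: the decision rule induces bias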
Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh
2009-01-01
This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography results of the NDVI images demonstrate that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced spatial patterns of NDVI images. However, the proposed approach, which integrates the conditional Latin hypercube sampling approach, variogram, kriging and sequential Gaussian simulation in remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes including spatial variability and heterogeneity.
Development of a syringe pump assisted dynamic headspace sampling technique for needle trap device.
Eom, In-Yong; Niri, Vadoud H; Pawliszyn, Janusz
2008-07-04
This paper describes a new approach that combines needle trap devices (NTDs) with a dynamic headspace sampling technique (purge and trap) using a bidirectional syringe pump. The needle trap device is a 22-G stainless steel needle 3.5-in. long packed with divinylbenzene sorbent particles. The same sized needle, without packing, was used for purging purposes. We chose an aqueous mixture of benzene, toluene, ethylbenzene, and p-xylene (BTEX) and developed a sequential purge and trap (SPNT) method, in which sampling (trapping) and purging cycles were performed sequentially by the use of syringe pump with different distribution channels. In this technique, a certain volume (1 mL) of headspace was sequentially sampled using the needle trap; afterwards, the same volume of air was purged into the solution at a high flow rate. The proposed technique showed an effective extraction compared to the continuous purge and trap technique, with a minimal dilution effect. Method evaluation was also performed by obtaining the calibration graphs for aqueous BTEX solutions in the concentration range of 1-250 ng/mL. The developed technique was compared to the headspace solid-phase microextraction method for the analysis of aqueous BTEX samples. Detection limits as low as 1 ng/mL were obtained for BTEX by NTD-SPNT.
Shahbi, M; Rajabpour, A
2017-08-01
Phthorimaea operculella Zeller is an important pest of potato in Iran. Spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated by Taylor's power law and Iwao's patchiness. Results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed using Green's model at two precision levels, 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes varied from 5083 to 1054 and 5100 to 1050 leaves for the Arinda® and Sante® cultivars, respectively. Therefore, the optimum sample sizes for the cultivars, with different resistance levels, were not significantly different. According to the calculated stop lines, sampling must continue until the cumulative number of eggs + larvae reaches 15-16 or 96-101 individuals at precision levels of 0.25 or 0.1, respectively. The performance of the sampling plan was validated by resampling analysis using resampling for validation of sampling plans software. The sampling plan provided in this study can be used to obtain a rapid estimate of pest density with minimal effort.
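Green's fixed-precision plan can be written down compactly. With Taylor's power law variance = a * mean**b and desired precision D = SE/mean, the stop line for the cumulative count after n sampling units is T_n = n * (D**2 * n / a) ** (1 / (b - 2)); sampling stops once the running total crosses it. The a and b below are invented and are not the values fitted for P. operculella:

    def greens_stop_line(n, a, b, D):
        """Cumulative-count stop line for Green's fixed-precision sequential plan."""
        return n * (D**2 * n / a) ** (1.0 / (b - 2.0))

    a, b = 2.5, 1.4  # hypothetical Taylor's power law coefficients
    for n in (50, 150, 400, 800):
        print(n, round(greens_stop_line(n, a, b, D=0.25), 1))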
ERIC Educational Resources Information Center
Economou, A.; Tzanavaras, P. D.; Themelis, D. G.
2005-01-01
Sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise and efficient manner. Experiments using SIA fit well in the course of Instrumental Chemical Analysis, especially in the section on Automatic Methods of analysis provided by chemistry…
Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
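A minimal bootstrap (SIR) particle filter on a toy linear-Gaussian state illustrates the 'predictor - corrector' cycle the report describes; the dynamics and noise scales are invented stand-ins rather than an actual stand-yield model:

    import numpy as np

    rng = np.random.default_rng(4)
    T, N = 50, 2000
    phi, q_sd, r_sd = 0.95, 0.5, 1.0  # toy transition coefficient and noise scales

    # simulate a hidden state and its noisy observations
    x_true, y, x = np.zeros(T), np.zeros(T), 10.0
    for t in range(T):
        x = phi * x + rng.normal(0, q_sd)
        x_true[t], y[t] = x, x + rng.normal(0, r_sd)

    particles = rng.normal(10.0, 2.0, size=N)
    estimates = np.zeros(T)
    for t in range(T):
        particles = phi * particles + rng.normal(0, q_sd, size=N)  # predictor step
        w = np.exp(-0.5 * ((y[t] - particles) / r_sd) ** 2)        # corrector: weights
        w /= w.sum()
        particles = particles[rng.choice(N, size=N, p=w)]          # importance resampling
        estimates[t] = particles.mean()
    print(np.mean(np.abs(estimates - x_true)))  # mean filtering error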
ERIC Educational Resources Information Center
Kaufman, Alan S.; Kamphaus, Randy W.
1984-01-01
The construct validity of the Sequential Processing, Simultaneous Processing and Achievement scales of the Kaufman Assessment Battery for Children was supported by factor-analytic investigations of a representative national stratified sample of 2,000 children. Correlations provided insight into the relationship of sequential/simultaneous…
I Remember You: Independence and the Binomial Model
ERIC Educational Resources Information Center
Levine, Douglas W.; Rockhill, Beverly
2006-01-01
We focus on the problem of ignoring statistical independence. A binomial experiment is used to determine whether judges could match, based on looks alone, dogs to their owners. The experimental design introduces dependencies such that the probability of a given judge correctly matching a dog and an owner changes from trial to trial. We show how…
Possibility and challenges of conversion of current virus species names to Linnaean binomials
Thomas, Postler; Clawson, Anna N.; Amarasinghe, Gaya K.; Basler, Christopher F.; Bavari, Sina; Benko, Maria; Blasdell, Kim R.; Briese, Thomas; Buchmeier, Michael J.; Bukreyev, Alexander; Calisher, Charles H.; Chandran, Kartik; Charrel, Remi; Clegg, Christopher S.; Collins, Peter L.; De la Torre, Juan Carlos; DeRisi, Joseph L.; Dietzgen, Ralf G.; Dolnik, Olga; Durrwald, Ralf; Dye, John M.; Easton, Andrew J.; Emonet, Sebastian; Formenty, Pierre; Fouchier, Ron A. M.; Ghedin, Elodie; Gonzalez, Jean-Paul; Harrach, Balazs; Hewson, Roger; Horie, Masayuki; Jiang, Daohong; Kobinger, Gary P.; Kondo, Hideki; Kropinski, Andrew; Krupovic, Mart; Kurath, Gael; Lamb, Robert A.; Leroy, Eric M.; Lukashevich, Igor S.; Maisner, Andrea; Mushegian, Arcady; Netesov, Sergey V.; Nowotny, Norbert; Patterson, Jean L.; Payne, Susan L.; Paweska, Janusz T.; Peters, C.J.; Radoshitzky, Sheli; Rima, Bertus K.; Romanowski, Victor; Rubbenstroth, Dennis; Sabanadzovic, Sead; Sanfacon, Helene; Salvato, Maria; Schwemmle, Martin; Smither, Sophie J.; Stenglein, Mark; Stone, D.M.; Takada, Ayato; Tesh, Robert B.; Tomonaga, Keizo; Tordo, N.; Towner, Jonathan S.; Vasilakis, Nikos; Volchkov, Victor E.; Jensen, Victoria; Walker, Peter J.; Wang, Lin-Fa; Varsani, Arvind; Whitfield, Anna E.; Zerbini, Francisco Murilo; Kuhn, Jens H.
2017-01-01
Botanical, mycological, zoological, and prokaryotic species names follow the Linnaean format, consisting of an italicized Latinized binomen with a capitalized genus name and a lower case species epithet (e.g., Homo sapiens). Virus species names, however, do not follow a uniform format, and, even when binomial, are not Linnaean in style. In this thought exercise, we attempted to convert all currently official names of species included in the virus family Arenaviridae and the virus order Mononegavirales to Linnaean binomials, and to identify and address associated challenges and concerns. Surprisingly, this endeavor was not as complicated or time-consuming as even the authors of this article expected when conceiving the experiment.
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-01-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037
Sequential injection spectrophotometric determination of oxybenzone in lipsticks.
Salvador, A; Chisvert, A; Camarasa, A; Pascual-Martí, M C; March, J G
2001-08-01
A sequential injection (SI) procedure for the spectrophotometric determination of oxybenzone in lipsticks is reported. The colorimetric reaction between nickel and oxybenzone was used. SI parameters such as sample solution volume, reagent solution volume, propulsion flow rate and reaction coil length were studied. The limit of detection was 3 microg ml(-1). The sensitivity was 0.0108 ± 0.0002 ml microg(-1). The relative standard deviations of the results were between 6 and 12%. The real concentrations of samples and the values obtained by HPLC were comparable. Microwave sample pre-treatment allowed the extraction of oxybenzone with ethanol, thus avoiding the use of toxic organic solvents. Ethanol was also used as carrier in the SI system. Seventy-two injections per hour can be performed, which means a sample frequency of 24 h(-1) if three replicates are measured for each sample.
Song, Jeffery W; Small, Mitchell J; Casman, Elizabeth A
2017-12-15
Environmental DNA (eDNA) sampling is an emerging tool for monitoring the spread of aquatic invasive species. One confounding factor when interpreting eDNA sampling evidence is that eDNA can be present in the water in the absence of living target organisms, originating from excreta, dead tissue, boats, or sewage effluent, etc. In the Chicago Area Waterway System (CAWS), electric fish dispersal barriers were built to prevent non-native Asian carp species from invading Lake Michigan, and yet Asian carp eDNA has been detected above the barriers sporadically since 2009. In this paper the influence of stream flow characteristics in the CAWS on the probability of invasive Asian carp eDNA detection in the CAWS from 2009 to 2012 was examined. In the CAWS, the direction of stream flow is mostly away from Lake Michigan, though there are infrequent reversals in flow direction towards Lake Michigan during dry spells. We find that the flow reversal volume into the Lake has a statistically significant positive relationship with eDNA detection probability, while other covariates, like gage height, precipitation, season, water temperature, dissolved oxygen concentration, pH and chlorophyll concentration do not. This suggests that stream flow direction is highly influential on eDNA detection in the CAWS and should be considered when interpreting eDNA evidence. We also find that the beta-binomial regression model provides a stronger fit for eDNA detection probability compared to a binomial regression model. This paper provides a statistical modeling framework for interpreting eDNA sampling evidence and for evaluating covariates influencing eDNA detection. Copyright © 2017 Elsevier B.V. All rights reserved.
Choi, Seung Hoan; Labadorf, Adam T; Myers, Richard H; Lunetta, Kathryn L; Dupuis, Josée; DeStefano, Anita L
2017-02-06
Next generation sequencing provides a count of RNA molecules in the form of short reads, yielding discrete, often highly non-normally distributed gene expression measurements. Although Negative Binomial (NB) regression has been generally accepted in the analysis of RNA sequencing (RNA-Seq) data, its appropriateness has not been exhaustively evaluated. We explore logistic regression as an alternative method for RNA-Seq studies designed to compare cases and controls, where disease status is modeled as a function of RNA-Seq reads using simulated and Huntington disease data. We evaluate the effect of adjusting for covariates that have an unknown relationship with gene expression. Finally, we incorporate the data adaptive method in order to compare false positive rates. When the sample size is small or the expression levels of a gene are highly dispersed, the NB regression shows inflated Type-I error rates but the Classical logistic and Bayes logistic (BL) regressions are conservative. Firth's logistic (FL) regression performs well or is slightly conservative. Large sample size and low dispersion generally make Type-I error rates of all methods close to nominal alpha levels of 0.05 and 0.01. However, Type-I error rates are controlled after applying the data adaptive method. The NB, BL, and FL regressions gain increased power with large sample size, large log2 fold-change, and low dispersion. The FL regression has comparable power to NB regression. We conclude that implementing the data adaptive method appropriately controls Type-I error rates in RNA-Seq analysis. Firth's logistic regression provides a concise statistical inference process and reduces spurious associations from inaccurately estimated dispersion parameters in the negative binomial framework.
Ochiai, Nobuo; Tsunokawa, Jun; Sasamoto, Kikuo; Hoffmann, Andreas
2014-12-05
A novel multi-volatile method (MVM) using sequential dynamic headspace (DHS) sampling for analysis of aroma compounds in aqueous samples was developed. The MVM consists of three different DHS method parameter sets, including choice of the replaceable adsorbent trap. The first DHS sampling at 25 °C using a carbon-based adsorbent trap targets very volatile solutes with high vapor pressure (>20 kPa). The second DHS sampling at 25 °C using the same type of carbon-based adsorbent trap targets volatile solutes with moderate vapor pressure (1-20 kPa). The third DHS sampling using a Tenax TA trap at 80 °C targets solutes with low vapor pressure (<1 kPa) and/or hydrophilic characteristics. After the 3 sequential DHS samplings using the same HS vial, the three traps are sequentially desorbed with thermal desorption in reverse order of the DHS sampling, and the desorbed compounds are trapped and concentrated in a programmed temperature vaporizing (PTV) inlet and subsequently analyzed in a single GC-MS run. Recoveries of the 21 test aroma compounds for each DHS sampling and the combined MVM procedure were evaluated as a function of vapor pressure in the range of 0.000088-120 kPa. The MVM provided very good recoveries in the range of 91-111%. The method showed good linearity (r² > 0.9910) and high sensitivity (limit of detection: 1.0-7.5 ng mL(-1)) even with MS scan mode. The feasibility and benefit of the method were demonstrated with analysis of a wide variety of aroma compounds in brewed coffee. Ten potent aroma compounds from top-note to base-note (acetaldehyde, 2,3-butanedione, 4-ethyl guaiacol, furaneol, guaiacol, 3-methyl butanal, 2,3-pentanedione, 2,3,5-trimethyl pyrazine, vanillin, and 4-vinyl guaiacol) could be identified together with an additional 72 aroma compounds. Thirty compounds including 9 potent aroma compounds were quantified in the range of 74-4300 ng mL(-1) (RSD < 10%, n = 5). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Sequential growth factor application in bone marrow stromal cell ligament engineering.
Moreau, Jodie E; Chen, Jingsong; Horan, Rebecca L; Kaplan, David L; Altman, Gregory H
2005-01-01
In vitro bone marrow stromal cell (BMSC) growth may be enhanced through culture medium supplementation, mimicking the biochemical environment in which cells optimally proliferate and differentiate. We hypothesize that the sequential administration of growth factors to first proliferate and then differentiate BMSCs cultured on silk fiber matrices will support the enhanced development of ligament tissue in vitro. Confluent second passage (P2) BMSCs obtained from purified bone marrow aspirates were seeded on RGD-modified silk matrices. Seeded matrices were divided into three groups for 5 days of static culture, with medium supplementation of basic fibroblast growth factor (B; 1 ng/mL), epidermal growth factor (E; 1 ng/mL), or growth factor-free control (C). After day 5, medium supplementation was changed to transforming growth factor-beta1 (T; 5 ng/mL) or C for an additional 9 days of culture. Real-time RT-PCR, SEM, MTT, histology, and ELISA for collagen type I of all sample groups were performed. Results indicated that BT supported the greatest cell ingrowth after 14 days of culture in addition to the greatest cumulative collagen type I expression measured by ELISA. Sequential growth factor application promoted significant increases in collagen type I transcript expression from day 5 of culture to day 14, for five of six groups tested. All T-supplemented samples surpassed their respective control samples in both cell ingrowth and collagen deposition. All samples supported spindle-shaped, fibroblast cell morphology, aligning with the direction of silk fibers. These findings indicate significant in vitro ligament development after only 14 days of culture when using a sequential growth factor approach.
Colyar, Jessica M; Eggett, Dennis L; Steele, Frost M; Dunn, Michael L; Ogden, Lynn V
2009-09-01
The relative sensitivity of side-by-side and sequential monadic consumer liking protocols was compared. In the side-by-side evaluation, all samples were presented at once and evaluated together 1 characteristic at a time. In the sequential monadic evaluation, 1 sample was presented and evaluated on all characteristics, then returned before panelists received and evaluated another sample. Evaluations were conducted on orange juice, frankfurters, canned chili, potato chips, and applesauce. Five commercial brands, having a broad quality range, were selected as samples for each product category to assure a wide array of consumer liking scores. Without their knowledge, panelists rated the same 5 retail brands by 1 protocol and then 3 wk later by the other protocol. For 3 of the products, both protocols yielded the same order of overall liking. Slight differences in order of overall liking for the other 2 products were not significant. Of the 50 pairwise overall liking comparisons, 44 were in agreement. The different results obtained by the 2 protocols in order of liking and significance of paired comparisons were due to the experimental variation and differences in sensitivity. Hedonic liking scores were subjected to statistical power analyses and used to calculate minimum number of panelists required to achieve varying degrees of sensitivity when using side-by-side and sequential monadic protocols. In most cases, the side-by-side protocol was more sensitive, thus providing the same information with fewer panelists. Side-by-side protocol was less sensitive in cases where sensory fatigue was a factor.
Filleron, Thomas; Gal, Jocelyn; Kramar, Andrew
2012-10-01
A major and difficult task is the design of clinical trials with a time-to-event endpoint. In fact, it is necessary to compute the number of events and, in a second step, the required number of patients. Several commercial software packages are available for computing sample size in clinical trials with sequential designs and time-to-event endpoints, but only a few R functions are implemented. The purpose of this paper is to describe the features and use of the R function plansurvct.func, an add-on to the gsDesign package, which permits, in one run of the program, calculation of the number of events and required sample size, as well as the boundaries and corresponding p-values for a group sequential design. The use of the function plansurvct.func is illustrated by several examples and validated using East software. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Sequential solvent extraction for forms of antimony in five selected coals
Qi, C.; Liu, Gaisheng; Kong, Y.; Chou, C.-L.; Wang, R.
2008-01-01
The abundance of antimony in bulk samples has been determined in five selected coals: three from the Huaibei Coalfield, Anhui, China, and two from the Illinois Basin in the United States. The Sb abundance in these samples is in the range of 0.11-0.43 μg/g. The forms of Sb in the coals were studied by sequential solvent extraction. The six forms of Sb are water-soluble, ion-exchangeable, organic matter-bound, carbonate-bound, silicate-bound, and sulfide-bound. Results of sequential extraction show that silicate-bound Sb is the most abundant form in these coals. Silicate- plus sulfide-bound Sb accounts for more than half of the total Sb in all coals. Bituminous coals are higher in organic matter-bound Sb than anthracite and natural coke, indicating that the Sb in the organic matter may be incorporated into silicate and sulfide minerals during metamorphism. © 2008 by The University of Chicago. All rights reserved.
von Gunten, Konstantin; Alam, Md Samrat; Hubmann, Magdalena; Ok, Yong Sik; Konhauser, Kurt O; Alessi, Daniel S
2017-07-01
A modified Community Bureau of Reference (CBR) sequential extraction method was tested to assess the composition of untreated pyrogenic carbon (biochar) and oil sands petroleum coke. Wood biochar samples were found to contain lower concentrations of metals, but had higher fractions of easily mobilized alkaline earth and transition metals. Sewage sludge biochar was determined to be less recalcitrant and had higher total metal concentrations, with most of the metals found in the more resilient extraction fractions (oxidizable, residual). Petroleum coke was the most stable material, with a similar metal distribution pattern as the sewage sludge biochar. The applied sequential extraction method represents a suitable technique to recover metals from these materials, and is a valuable tool in understanding the metal retaining and leaching capability of various biochar types and carbonaceous petroleum coke samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
del Río, Vanessa; Larrechi, M Soledad; Callao, M Pilar
2010-06-15
A new concept of flow titration is proposed and demonstrated for the determination of total acidity in plant oils and biodiesel. We use sequential injection analysis (SIA) with a diode array spectrophotometric detector linked to chemometric tools such as multivariate curve resolution-alternating least squares (MCR-ALS). This system is based on the evolution of the basic species of an acid-base indicator, alizarine, when it comes into contact with a sample that contains free fatty acids. The gradual pH change in the reactor coil due to diffusion and reaction phenomena allows the sequential appearance of both species of the indicator in the detector coil, recording a data matrix for each sample. The SIA-MCR-ALS method helps to reduce the amounts of sample, reagents and time consumed. Each determination consumes 0.413 ml of sample, 0.250 ml of indicator and 3 ml of carrier (ethanol) and generates 3.333 ml of waste. The frequency of analysis is high (12 samples h(-1) including all steps, i.e., cleaning, preparing and analysing). The reagents used are in common laboratory use, and it is not necessary to use reagents of precisely known concentration. The method was applied to determine acidity in plant oil and biodiesel samples. Results obtained by the proposed method compare well with those obtained by the official European Community method, which is time consuming and uses large amounts of organic solvents.
Bakbergenuly, Ilyas; Kulinskaya, Elena; Morgenthaler, Stephan
2016-07-01
We study bias arising as a result of nonlinear transformations of random variables in random or mixed effects models and its effect on inference in group-level studies or in meta-analysis. The findings are illustrated on the example of overdispersed binomial distributions, where we demonstrate considerable biases arising from standard log-odds and arcsine transformations of the estimated probability p̂, both for single-group studies and in combining results from several groups or studies in meta-analysis. Our simulations confirm that these biases are linear in ρ, for small values of ρ, the intracluster correlation coefficient. These biases do not depend on the sample sizes or the number of studies K in a meta-analysis and result in abysmal coverage of the combined effect for large K. We also propose bias-correction for the arcsine transformation. Our simulations demonstrate that this bias-correction works well for small values of the intraclass correlation. The methods are applied to two examples of meta-analyses of prevalence. © 2016 The Authors. Biometrical Journal Published by Wiley-VCH Verlag GmbH & Co. KGaA.
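The bias is visible directly by simulating overdispersed binomial data through a beta-binomial mechanism and averaging the transformed estimates; the settings (n = 50, p = 0.3, ρ = 0.1) are arbitrary illustrations:

    import numpy as np

    rng = np.random.default_rng(5)
    n, p, rho = 50, 0.3, 0.1
    a = p * (1 - rho) / rho          # beta shape parameters giving ICC rho
    b = (1 - p) * (1 - rho) / rho
    x = rng.binomial(n, rng.beta(a, b, size=200000))
    phat = (x + 0.5) / (n + 1)       # continuity correction avoids 0 and 1
    print(np.mean(np.log(phat / (1 - phat))) - np.log(p / (1 - p)))   # log-odds bias
    print(np.mean(np.arcsin(np.sqrt(phat))) - np.arcsin(np.sqrt(p)))  # arcsine bias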
Sheu, Mei-Ling; Hu, Teh-Wei; Keeler, Theodore E; Ong, Michael; Sung, Hai-Yen
2004-08-01
The objective of this paper is to determine the price sensitivity of smokers in their consumption of cigarettes, using evidence from a major increase in California cigarette prices due to Proposition 10 and the Tobacco Settlement. The study sample consists of individual survey data from the Behavioral Risk Factor Survey (BRFS) and price data from the Bureau of Labor Statistics between 1996 and 1999. A zero-inflated negative binomial (ZINB) regression model was applied for the statistical analysis. The statistical model showed that price did not have an effect on reducing the estimated prevalence of smoking. However, it indicated that among smokers the price elasticity was at the level of -0.46 and statistically significant. Since smoking prevalence is significantly lower than it was a decade ago, price increases are becoming less effective as an inducement for hard-core smokers to quit, although they may respond by decreasing consumption. For those who only smoke occasionally (many of them being young adults), price increases alone may not be an effective inducement to quit smoking. Additional underlying behavioral factors need to be identified so that more effective anti-smoking strategies can be developed.
Raw and Central Moments of Binomial Random Variables via Stirling Numbers
ERIC Educational Resources Information Center
Griffiths, Martin
2013-01-01
We consider here the problem of calculating the moments of binomial random variables. It is shown how formulae for both the raw and the central moments of such random variables may be obtained in a recursive manner utilizing Stirling numbers of the first kind. Suggestions are also provided as to how students might be encouraged to explore this…
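In the same spirit, a short recursive computation of raw binomial moments using the well-known identity E[X^n] = sum_k S(n,k) * (m)_k * p^k, with S(n,k) the Stirling numbers of the second kind and (m)_k the falling factorial (a close relative of the first-kind recursion the article describes), checked against the direct sum:

    from functools import lru_cache
    from math import comb

    @lru_cache(maxsize=None)
    def stirling2(n, k):
        """Stirling numbers of the second kind via the usual recurrence."""
        if k == n:
            return 1
        if k == 0 or k > n:
            return 0
        return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

    def falling(m, k):
        out = 1
        for i in range(k):
            out *= m - i
        return out

    def binomial_raw_moment(n, m, p):
        """E[X**n] for X ~ Binomial(m, p)."""
        return sum(stirling2(n, k) * falling(m, k) * p**k for k in range(n + 1))

    m, p = 10, 0.3
    direct = sum(x**3 * comb(m, x) * p**x * (1 - p) ** (m - x) for x in range(m + 1))
    assert abs(binomial_raw_moment(3, m, p) - direct) < 1e-12
    print(binomial_raw_moment(3, m, p))  # 46.74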
Justin S. Crotteau; Martin W. Ritchie; J. Morgan Varner
2014-01-01
Many western USA fire regimes are typified by mixed-severity fire, which compounds the variability inherent to natural regeneration densities in associated forests. Tree regeneration data are often discrete and nonnegative; accordingly, we fit a series of Poisson and negative binomial variation models to conifer seedling counts across four distinct burn severities and...
Yelland, Lisa N; Salter, Amy B; Ryan, Philip
2011-10-15
Modified Poisson regression, which combines a log Poisson regression model with robust variance estimation, is a useful alternative to log binomial regression for estimating relative risks. Previous studies have shown both analytically and by simulation that modified Poisson regression is appropriate for independent prospective data. This method is often applied to clustered prospective data, despite a lack of evidence to support its use in this setting. The purpose of this article is to evaluate the performance of the modified Poisson regression approach for estimating relative risks from clustered prospective data, by using generalized estimating equations to account for clustering. A simulation study is conducted to compare log binomial regression and modified Poisson regression for analyzing clustered data from intervention and observational studies. Both methods generally perform well in terms of bias, type I error, and coverage. Unlike log binomial regression, modified Poisson regression is not prone to convergence problems. The methods are contrasted by using example data sets from 2 large studies. The results presented in this article support the use of modified Poisson regression as an alternative to log binomial regression for analyzing clustered prospective data when clustering is taken into account by using generalized estimating equations.
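A sketch of the clustered modified Poisson approach with statsmodels GEE: a log-link Poisson model for a binary outcome, an exchangeable working correlation, and the robust (sandwich) variance that GEE reports by default. The cluster structure and the true relative risk of exp(0.4) are simulated for illustration:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    n_clusters, m = 100, 10
    cluster = np.repeat(np.arange(n_clusters), m)
    treat = np.repeat(rng.integers(0, 2, n_clusters), m)  # cluster-level exposure
    u = rng.normal(0, 0.3, n_clusters)                    # shared within-cluster effect
    risk = np.clip(0.2 * np.exp(0.4 * treat + u[cluster]), 0, 1)
    y = rng.binomial(1, risk)                             # binary outcome

    res = sm.GEE(y, sm.add_constant(treat), groups=cluster,
                 family=sm.families.Poisson(),
                 cov_struct=sm.cov_struct.Exchangeable()).fit()
    print(np.exp(res.params[1]))  # estimated relative risk, near exp(0.4) = 1.49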
Rosende, Maria; Savonina, Elena Yu; Fedotov, Petr S; Miró, Manuel; Cerdà, Víctor; Wennrich, Rainer
2009-09-15
Dynamic fractionation has been recognized as an appealing alternative to conventional equilibrium-based sequential extraction procedures (SEPs) for partitioning of trace elements (TE) in environmental solid samples. This paper reports the first attempt at harmonization of flow-through dynamic fractionation using two novel methods, the so-called sequential injection microcolumn (SIMC) extraction and rotating coiled column (RCC) extraction. In SIMC extraction, a column packed with the solid sample is clustered in a sequential injection system, while in RCC, the particulate matter is retained under the action of centrifugal forces. In both methods, the leachants are continuously pumped through the solid substrates by the use of either peristaltic or syringe pumps. A five-step SEP was selected for partitioning of Cu, Pb and Zn in water soluble/exchangeable, acid-soluble, easily reducible, easily oxidizable and moderately reducible fractions from 0.2 to 0.5 g samples at an extractant flow rate of 1.0 mL min(-1) prior to leachate analysis by inductively coupled plasma-atomic emission spectrometry. Similarities and discrepancies between both dynamic approaches were ascertained by fractionation of TE in certified reference materials, namely, SRM 2711 Montana Soil and GBW 07311 sediment, and two real soil samples as well. Notwithstanding the different extraction conditions set by both methods, similar trends of metal distribution were generally found. The most critical parameters for reliable assessment of mobilizable pools of TE in worst-case scenarios are the size distribution of sample particles, the density of particles, the content of organic matter and the concentration of major elements. For reference materials and a soil rich in organic matter, the extraction in RCC results in slightly higher recoveries of environmentally relevant fractions of TE, whereas SIMC leaching is more effective for calcareous soils.
ERIC Educational Resources Information Center
Chen, Chin-Chih; McComas, Jennifer J.; Hartman, Ellie; Symons, Frank J.
2011-01-01
Research Findings: In early childhood education, the social ecology of the child is considered critical for healthy behavioral development. There is, however, relatively little information based on directly observing what children do that describes the moment-by-moment (i.e., sequential) relation between physical aggression and peer rejection acts…
Ku, Yixuan; Zhao, Di; Hao, Ning; Hu, Yi; Bodner, Mark; Zhou, Yong-Di
2015-01-01
Both monkey neurophysiological and human EEG studies have shown that association cortices, as well as primary sensory cortical areas, play an essential role in sequential neural processes underlying cross-modal working memory. The present study aims to further examine causal and sequential roles of the primary sensory cortex and association cortex in cross-modal working memory. Individual MRI-based single-pulse transcranial magnetic stimulation (spTMS) was applied to bilateral primary somatosensory cortices (SI) and the contralateral posterior parietal cortex (PPC), while participants were performing a tactile-visual cross-modal delayed matching-to-sample task. Time points of spTMS were 300 ms, 600 ms, 900 ms after the onset of the tactile sample stimulus in the task. The accuracy of task performance and reaction time were significantly impaired when spTMS was applied to the contralateral SI at 300 ms. Significant impairment on performance accuracy was also observed when the contralateral PPC was stimulated at 600 ms. SI and PPC play sequential and distinct roles in neural processes of cross-modal associations and working memory. Copyright © 2015 Elsevier Inc. All rights reserved.
Tfaily, Malak M; Chu, Rosalie K; Toyoda, Jason; Tolić, Nikola; Robinson, Errol W; Paša-Tolić, Ljiljana; Hess, Nancy J
2017-06-15
A vast number of organic compounds are present in soil organic matter (SOM) and play an important role in the terrestrial carbon cycle, facilitate interactions between organisms, and represent a sink for atmospheric CO2. The diversity of different SOM compounds and their molecular characteristics is a function of the organic source material and biogeochemical history. By understanding how SOM composition changes with sources and the processes by which it is biogeochemically altered in different terrestrial ecosystems, it may be possible to predict nutrient and carbon cycling, response to system perturbations, and the impact climate change will have on SOM composition. In this study, a sequential chemical extraction procedure was developed to reveal the diversity of organic matter (OM) in different ecosystems and was compared to the previously published protocol using parallel solvent extraction (PSE). We compared six extraction methods using three sample types, peat soil, spruce forest soil and river sediment, so as to select the best method for extracting a representative fraction of organic matter from soils and sediments from a wide range of ecosystems. We estimated the extraction yield of dissolved organic carbon (DOC) by total organic carbon analysis, and measured the composition of extracted OM using high resolution mass spectrometry. This study showed that OM composition depends primarily on soil and sediment characteristics. Two sequential extraction protocols, progressing from polar to non-polar solvents, were found to provide the highest number and diversity of organic compounds extracted from the soil and sediments. Water (H2O) is the first solvent used for both protocols, followed by either co-extraction with a methanol-chloroform (MeOH-CHCl3) mixture, or acetonitrile (ACN) and CHCl3 sequentially. The sequential extraction protocol developed in this study offers improved sensitivity, and requires less sample compared to the PSE workflow where a new sample is used for each solvent type. Furthermore, a comparison of SOM composition from the different sample types revealed that our sequential protocol allows for ecosystem comparisons based on the diversity of compounds present, which in turn could provide new insights about source and processing of organic compounds in different soil and sediment types. Copyright © 2017 Elsevier B.V. All rights reserved.
Electronic and Vibrational Spectra of InP Quantum Dots Formed by Sequential Ion Implantation
NASA Technical Reports Server (NTRS)
Hall, C.; Mu, R.; Tung, Y. S.; Ueda, A.; Henderson, D. O.; White, C. W.
1997-01-01
We have performed sequential ion implantation of indium and phosphorus into silica, combined with controlled thermal annealing, to fabricate InP quantum dots in a dielectric host. Electronic and vibrational spectra were measured for the as-implanted and annealed samples. The annealed samples show a peak in the infrared spectra near 320 cm(-1), which is attributed to a surface phonon mode and is in good agreement with the value calculated from Fröhlich's theory of surface phonon polaritons. The electronic spectra show the development of a band near 390 nm that is attributed to quantum-confined InP.
Spatial-dependence recurrence sample entropy
NASA Astrophysics Data System (ADS)
Pham, Tuan D.; Yan, Hong
2018-03-01
Measuring complexity in terms of the predictability of time series is a major area of research in science and engineering, and its applications are spreading throughout many scientific disciplines, where the analysis of physiological signals is perhaps the most widely reported in the literature. Sample entropy is a popular measure for quantifying signal irregularity. However, sample entropy does not take sequential information, which is inherently useful, into its calculation of sample similarity. Here, we develop a method that is based on the mathematical principle of sample entropy and enables the capture of sequential information of a time series in the context of spatial dependence provided by the binary-level co-occurrence matrix of a recurrence plot. Experimental results on time-series data of the Lorenz system, physiological signals of gait maturation in healthy children, and gait dynamics in Huntington's disease show the potential of the proposed method.
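For reference, a compact implementation of the ordinary sample entropy that the proposed method builds on; m = 2 and r = 0.2 are the conventional defaults:

    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        """-log of the ratio of (m+1)-length to m-length template matches
        within tolerance r * std(x), with self-matches excluded."""
        x = np.asarray(x, dtype=float)
        tol = r * x.std()

        def matches(length):
            # the usual convention: len(x) - m templates for both lengths
            t = np.array([x[i:i + length] for i in range(len(x) - m)])
            d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
            return np.sum(d <= tol) - len(t)  # drop diagonal self-matches

        return -np.log(matches(m + 1) / matches(m))

    rng = np.random.default_rng(7)
    print(sample_entropy(rng.normal(size=500)))          # irregular signal: high value
    print(sample_entropy(np.sin(np.arange(500) / 5.0)))  # regular signal: low value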
Russo, Elizabeth T; Biggerstaff, Gwen; Hoekstra, R Michael; Meyer, Stephanie; Patel, Nehal; Miller, Benjamin; Quick, Rob
2013-02-01
An outbreak of Salmonella enterica serotype Agona infections associated with nationwide distribution of cereal from Company X was identified in April 2008. This outbreak was detected using PulseNet, the national molecular subtyping network for foodborne disease surveillance, which coincided with Company X's voluntary recall of unsweetened puffed rice and wheat cereals after routine product sampling yielded Salmonella Agona. A case patient was defined as being infected with the outbreak strain of Salmonella Agona, with illness onset from 1 January through 1 July 2008. Case patients were interviewed using a standard questionnaire, and the proportion of ill persons who reported eating Company X puffed rice cereal was compared with Company X's market share data using binomial testing. The Minnesota Department of Agriculture inspected the cereal production facility and collected both product and environmental swab samples. Routine surveillance identified 33 case patients in 17 states. Of 32 patients interviewed, 24 (75%) reported eating Company X puffed rice cereal. Company X puffed rice cereal represented 0.063% of the total ready-to-eat dry cereal market share in the United States at the time of the investigation. Binomial testing suggested that the proportion of exposed case patients would not likely occur by chance (P < 0.0001). Of 17 cereal samples collected from case patient homes for laboratory testing, 2 (12%) yielded Salmonella Agona indistinguishable from the outbreak strain. Twelve environmental swabs and nine product samples from the cereal plant yielded the outbreak strain of Salmonella Agona. Company X cereal was implicated in a similar outbreak of Salmonella Agona infection in 1998 with the same outbreak strain linked to the same production facility. We hypothesize that a recent construction project at this facility created an open wall near the cereal production area allowing reintroduction of Salmonella Agona into the product, highlighting the resilience of Salmonella in dry food production environments.
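The reported comparison is in essence an exact binomial test of the exposure proportion against the market share. A sketch with SciPy reproduces its spirit, though the investigators' exact procedure may have differed in detail:

    from scipy.stats import binomtest

    # 24 of 32 interviewed case patients ate the implicated cereal, whose national
    # market share was 0.063%, i.e. p = 0.00063 under the null hypothesis
    result = binomtest(24, n=32, p=0.00063, alternative="greater")
    print(result.pvalue)  # vanishingly small, consistent with the reported P < 0.0001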
Lawson, Chris A
2014-07-01
Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.
2014-04-01
...with the binomial distribution for a particular dataset. This technique is more commonly known as the Langer, Bar-on and Miller (LBM) method [22,23]. ... Using the LBM method, the frequency distribution plot for a dataset corresponding to a phase-separated system, exhibiting a split peak... The estimated parameters (namely μ1, μ2, σ, fγ' and fγ) obtained from the LBM plots in Fig. 5 are summarized in Table 3. The EWQ sample does not exhibit any...
Two-step sequential pretreatment for the enhanced enzymatic hydrolysis of coffee spent waste.
Ravindran, Rajeev; Jaiswal, Swarna; Abu-Ghannam, Nissreen; Jaiswal, Amit K
2017-09-01
In the present study, eight different pretreatments of varying nature (physical, chemical, and physico-chemical), followed by a sequential, combinatorial pretreatment strategy, were applied to spent coffee waste to attain maximum sugar yield. Pretreated samples were analysed for total reducing sugar, individual sugars, and the generation of inhibitory compounds such as furfural and hydroxymethyl furfural (HMF), which can hinder microbial growth and enzyme activity. Native spent coffee waste was high in hemicellulose content. Galactose was found to be the predominant sugar in spent coffee waste. Results showed that sequential pretreatment yielded 350.12 mg of reducing sugar/g of substrate, 1.7-fold higher than native spent coffee waste (203.4 mg/g of substrate). Furthermore, extensive delignification was achieved using the sequential pretreatment strategy. XRD, FTIR, and DSC profiles of the pretreated substrates were studied to analyse the various changes incurred in sequentially pretreated spent coffee waste as opposed to native spent coffee waste. Copyright © 2017 Elsevier Ltd. All rights reserved.
Use of the negative binomial-truncated Poisson distribution in thunderstorm prediction
NASA Technical Reports Server (NTRS)
Cohen, A. C.
1971-01-01
A probability model is presented for the distribution of thunderstorms over a small area given that thunderstorm events (1 or more thunderstorms) are occurring over a larger area. The model incorporates the negative binomial and truncated Poisson distributions. Probability tables for Cape Kennedy for spring, summer, and fall months and seasons are presented. The computer program used to compute these probabilities is appended.
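The abstract does not give the model's exact composition, but its two distributional building blocks are easy to tabulate. A hedged sketch with hypothetical parameters (not Cohen's values):

```python
import numpy as np
from scipy.stats import nbinom, poisson

# Zero-truncated Poisson: P(K = k | K >= 1) = pois(k) / (1 - pois(0)).
def trunc_poisson_pmf(k, lam):
    return poisson.pmf(k, lam) / (1.0 - poisson.pmf(0, lam))

lam, r, p = 2.0, 3.0, 0.4   # hypothetical parameters for illustration
ks = np.arange(1, 8)
print("zero-truncated Poisson:", np.round(trunc_poisson_pmf(ks, lam), 4))
print("negative binomial:     ", np.round(nbinom.pmf(ks, r, p), 4))
```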
Bianca N.I. Eskelson; Hailemariam Temesgen; Tara M. Barrett
2009-01-01
Cavity tree and snag abundance data are highly variable and contain many zero observations. We predict cavity tree and snag abundance from variables that are readily available from forest cover maps or remotely sensed data using negative binomial (NB), zero-inflated NB, and zero-altered NB (ZANB) regression models as well as nearest neighbor (NN) imputation methods....
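A minimal sketch of the kind of count-regression comparison described, using simulated snag counts and statsmodels; the predictor, parameter values, and sample size are invented for illustration, not taken from the study.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(1)
n = 500
cover = rng.uniform(0, 1, n)           # stand-in for a cover-map predictor
X = sm.add_constant(cover)

# Simulate zero-inflated NB counts: structural zeros with probability 0.4,
# otherwise NB counts whose log-mean depends on the predictor.
mu = np.exp(-0.5 + 1.5 * cover)
alpha = 0.8                             # NB dispersion
r_nb = 1.0 / alpha
nb_counts = rng.negative_binomial(r_nb, r_nb / (r_nb + mu))
snags = np.where(rng.uniform(size=n) < 0.4, 0, nb_counts)

nb_fit = sm.NegativeBinomial(snags, X).fit(disp=False)
zinb_fit = ZeroInflatedNegativeBinomialP(
    snags, X, exog_infl=np.ones((n, 1))).fit(disp=False, maxiter=500)
print(nb_fit.params)    # plain NB absorbs the excess zeros into dispersion
print(zinb_fit.params)  # ZINB separates inflation and count components
```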
ERIC Educational Resources Information Center
Abrahamson, Dor
2009-01-01
This article reports on a case study from a design-based research project that investigated how students make sense of the disciplinary tools they are taught to use, and specifically, what personal, interpersonal, and material resources support this process. The probability topic of binomial distribution was selected due to robust documentation of…
EVALUATION OF VAPOR EQUILIBRATION AND IMPACT OF PURGE VOLUME ON SOIL-GAS SAMPLING RESULTS
Sequential sampling was utilized at the Raymark Superfund site to evaluate attainment of vapor equilibration and the impact of purge volume on soil-gas sample results. A simple mass-balance equation indicates that removal of three to five internal volumes of a sample system shou...
Vichapong, Jitlada; Burakham, Rodjana; Srijaranai, Supalax; Grudpan, Kate
2011-07-01
A sequential injection-bead injection-lab-on-valve system was hyphenated to HPLC for online renewable micro-solid-phase extraction of carbamate insecticides. The carbamates studied were isoprocarb, methomyl, carbaryl, carbofuran, methiocarb, promecarb, and propoxur. LiChroprep® RP-18 beads (25-40 μm) were employed as renewable sorbent packing in a microcolumn situated inside the LOV platform mounted above the multiposition valve of the sequential injection system. The analytes sorbed by the microcolumn were eluted using 80% acetonitrile in 0.1% acetic acid before online introduction to the HPLC system. Separation was performed on an Atlantis C-18 column (4.6 × 150 mm, 5 μm) utilizing gradient elution with a flow rate of 1.0 mL/min and a detection wavelength of 270 nm. The sequential injection system offers the means of performing automated handling of sample preconcentration and matrix removal. The enrichment factors ranged between 20 and 125, leading to limits of detection (LODs) in the range of 1-20 μg/L. Good reproducibility was obtained, with relative standard deviations of <0.7% and 5.4% for retention time and peak area, respectively. The developed method has been successfully applied to the determination of carbamate residues in fruit, vegetable, and water samples. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gaudry, Adam J; Nai, Yi Heng; Guijt, Rosanne M; Breadmore, Michael C
2014-04-01
A dual-channel sequential injection microchip capillary electrophoresis system with pressure-driven injection is demonstrated for simultaneous separations of anions and cations from a single sample. The poly(methyl methacrylate) (PMMA) microchips feature integral in-plane contactless conductivity detection electrodes. A novel, hydrodynamic "split-injection" method utilizes background electrolyte (BGE) sheathing to gate the sample flows, while control over the injection volume is achieved by balancing hydrodynamic resistances using external hydrodynamic resistors. Injection is realized by a unique flow-through interface, allowing for automated, continuous sampling for sequential injection analysis by microchip electrophoresis. The developed system was very robust, with individual microchips used for up to 2000 analyses with lifetimes limited by irreversible blockages of the microchannels. The unique dual-channel geometry was demonstrated by the simultaneous separation of three cations and three anions in individual microchannels in under 40 s with limits of detection (LODs) ranging from 1.5 to 24 μM. From a series of 100 sequential injections the %RSDs were determined for every fifth run, resulting in %RSDs for migration times that ranged from 0.3 to 0.7 (n = 20) and 2.3 to 4.5 for peak area (n = 20). This system offers low LODs and a high degree of reproducibility and robustness while the hydrodynamic injection eliminates electrokinetic bias during injection, making it attractive for a wide range of rapid, sensitive, and quantitative online analytical applications.
A novel approach for small sample size family-based association studies: sequential tests.
Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan
2011-08-01
In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with those obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) into only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs, when compared with TDT. By using SPRT, data with small sample sizes become usable for an accurate association analysis.
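For readers unfamiliar with the SPRT itself, the following is the generic Wald test for a Bernoulli stream (not the authors' TDT-based statistic); the thresholds use Wald's standard approximations, and all parameter values are illustrative.

```python
import numpy as np

def sprt_bernoulli(xs, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT of H0: p = p0 versus H1: p = p1 on a Bernoulli stream.
    Returns ('accept H0' | 'accept H1' | 'undecided', samples used)."""
    log_a = np.log((1 - beta) / alpha)   # upper threshold: accept H1
    log_b = np.log(beta / (1 - alpha))   # lower threshold: accept H0
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        llr += np.log(p1 / p0) if x else np.log((1 - p1) / (1 - p0))
        if llr >= log_a:
            return "accept H1", n
        if llr <= log_b:
            return "accept H0", n
    return "undecided", len(xs)

rng = np.random.default_rng(2)
# True rate 0.30 equals p1, so the test should accept H1, usually early.
print(sprt_bernoulli(rng.uniform(size=1000) < 0.30, p0=0.10, p1=0.30))
```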
Automatic sequential fluid handling with multilayer microfluidic sample isolated pumping
Liu, Jixiao; Fu, Hai; Yang, Tianhang; Li, Songjing
2015-01-01
Sequential fluid handling is of great significance in quantitative biology, analytical chemistry, and bioassays. However, the technological options are limited when building such microfluidic sequential processing systems, and one of the challenges encountered is the need for reliable, efficient, and mass-producible microfluidic pumping methods. Herein, we present a bubble-free and pumping-control unified liquid handling method that is compatible with large-scale manufacture, termed multilayer microfluidic sample isolated pumping (mμSIP). The core part of the mμSIP is the selectively permeable membrane that isolates the fluidic layer from the pneumatic layer. The air diffusion from the fluidic channel network into the degassing pneumatic channel network leads to fluidic channel pressure variation, which further results in consistent bubble-free liquid pumping into the channels and the dead-end chambers. We characterize the mμSIP by comparing fluidic actuation processes with different parameters; a flow rate range of 0.013 μl/s to 0.097 μl/s is observed in the experiments. As a proof of concept, we demonstrate an automatic sequential fluid handling system aimed at digital assays and immunoassays, which further proves the unified pumping-control and suggests that the mμSIP is suitable for functional microfluidic assays with minimal operations. We believe that the mμSIP technology and the demonstrated automatic sequential fluid handling system will enrich the microfluidic toolbox and benefit further inventions. PMID:26487904
Li, Bo; Li, Hao; Dong, Li; Huang, Guofu
2017-11-01
In this study, we sought to investigate the feasibility of fast carotid artery MR angiography (MRA) by combining three-dimensional time-of-flight (3D TOF) with a compressed sensing method (CS-3D TOF). A pseudo-sequential phase encoding order was developed for CS-3D TOF to generate hyper-intense vessel signal and suppress background tissues in under-sampled 3D k-space. Seven healthy volunteers and one patient with carotid artery stenosis were recruited for this study. Five sequential CS-3D TOF scans were implemented at 1, 2, 3, 4 and 5-fold acceleration factors for carotid artery MRA. Blood signal-to-tissue ratio (BTR) values for fully-sampled and under-sampled acquisitions were calculated and compared in seven subjects. Blood area (BA) was measured and compared between the fully-sampled acquisition and each under-sampled one. There were no significant differences in BTR between the fully-sampled dataset and each under-sampled one (P>0.05 for all comparisons). The carotid vessel BAs measured from the images of CS-3D TOF sequences with 2, 3, 4 and 5-fold acceleration scans were all highly correlated with that of the fully-sampled acquisition. The contrast between blood vessels and background tissues of the images at 2 to 5-fold acceleration is comparable to that of fully-sampled images, and the images at 2× to 5× exhibit lumen definition comparable to the corresponding images at 1×. By combining the pseudo-sequential phase encoding order, CS reconstruction, and a 3D TOF sequence, this technique provides excellent visualization of the carotid vessels and calcifications in a short scan time. It has the potential to be integrated into current multiple blood contrast imaging protocols. Copyright © 2017. Published by Elsevier Inc.
Nagelkerke, Nico; Fidler, Vaclav
2015-01-01
The problem of discrimination and classification is central to much of epidemiology. Here we consider the estimation of a logistic regression/discrimination function from training samples, when one of the training samples is subject to misclassification or mislabeling, e.g. diseased individuals are incorrectly classified/labeled as healthy controls. We show that this leads to a zero-inflated binomial model with a defective logistic regression or discrimination function, whose parameters can be estimated using standard statistical methods such as maximum likelihood. These parameters can be used to estimate the probability of true group membership among those, possibly erroneously, classified as controls. Two examples are analyzed and discussed. A simulation study explores properties of the maximum likelihood parameter estimates and the estimates of the number of mislabeled observations.
Structural drift: the population dynamics of sequential learning.
Crutchfield, James P; Whalen, Sean
2012-01-01
We introduce a theory of sequential causal inference in which learners in a chain estimate a structural model from their upstream "teacher" and then pass samples from the model to their downstream "student". It extends the population dynamics of genetic drift, recasting Kimura's selectively neutral theory as a special case of a generalized drift process using structured populations with memory. We examine the diffusion and fixation properties of several drift processes and propose applications to learning, inference, and evolution. We also demonstrate how the organization of drift process space controls fidelity, facilitates innovations, and leads to information loss in sequential learning with and without memory.
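A toy one-parameter analogue of the drift idea (our construction, far simpler than the structured-memory models in the paper): each learner fits a Bernoulli parameter from finitely many teacher samples, so estimation noise accumulates down the chain much as genetic drift does in a finite population.

```python
import numpy as np

rng = np.random.default_rng(3)

def drift_chain(p0=0.5, n_learners=200, k=50):
    """Each learner estimates p by maximum likelihood from k samples
    drawn from its teacher's model, then becomes the next teacher.
    Finite-sample noise makes p random-walk toward fixation at 0 or 1,
    analogous to neutral drift in a population of size k."""
    p, trajectory = p0, [p0]
    for _ in range(n_learners):
        p = rng.binomial(k, p) / k
        trajectory.append(p)
    return trajectory

print(drift_chain()[::20])   # drifts, often fixating at 0.0 or 1.0
```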
NASA Astrophysics Data System (ADS)
Oliveira, J. M.; Carvalho, F. P.
2006-01-01
A sequential extraction technique was developed and tested for common naturally-occurring radionuclides. This technique allows the extraction and purification of uranium, thorium, radium, lead, and polonium radionuclides from the same sample. Environmental materials such as water, soil, and biological samples can be analyzed for these radionuclides without matrix interferences affecting the quality of radioelement purification or the radiochemical yield. The use of isotopic tracers (232U, 229Th, 224Ra, 209Po, and stable lead carrier), added to the sample at the beginning of the chemical procedure, enables accurate control of the radiochemical yield for each radioelement. The ion extraction procedure, applied after either complete dissolution of the solid sample with mineral acids or co-precipitation of dissolved radionuclides with MnO2 for aqueous samples, includes the use of commercially available pre-packed columns from Eichrom® and ion exchange columns packed with Bio-Rad resins (three chromatography columns in all). All radioactive elements but one are purified and electroplated on stainless steel discs. Polonium is spontaneously plated on a silver disc. The discs are measured using high resolution silicon surface barrier detectors. 210Pb, a beta emitter, can be measured either through the beta emission of 210Bi, or stored for a few months and determined by alpha spectrometry through the in-growth of 210Po. This sequential extraction chromatography technique was tested and validated with the analysis of certified reference materials from the IAEA. Reproducibility was tested through repeated analysis of the same homogeneous material (water sample).
Mynatt, Robert; Hale, Shane A; Gill, Ruth M; Plontke, Stefan K; Salt, Alec N
2006-06-01
Local applications of drugs to the inner ear are increasingly being used to treat patients' inner ear disorders. Knowledge of the pharmacokinetics of drugs in the inner ear fluids is essential for a scientific basis for such treatments. When auditory function is of primary interest, the drug's kinetics in scala tympani (ST) must be established. Measurement of drug levels in ST is technically difficult because of the known contamination of perilymph samples taken from the basal cochlear turn with cerebrospinal fluid (CSF). Recently, we reported a technique in which perilymph was sampled from the cochlear apex to minimize the influence of CSF contamination (J. Neurosci. Methods, doi: 10.1016/j.jneumeth.2005.10.008). This technique has now been extended by taking smaller fluid samples sequentially from the cochlear apex, which can be used to quantify drug gradients along ST. The sampling and analysis methods were evaluated using an ionic marker, trimethylphenylammonium (TMPA), that was applied to the round window membrane. After loading perilymph with TMPA, ten 1-μL samples were taken from the cochlear apex. The TMPA content of the samples was consistent with the first sample containing perilymph from apical regions and the fourth or fifth sample containing perilymph from the basal turn. TMPA concentration decreased in subsequent samples, as they increasingly contained CSF that had passed through ST. Sample concentration curves were interpreted quantitatively by simulation of the experiment with a finite element model and by an automated curve-fitting method by which the apical-basal gradient was estimated. The study demonstrates that sequential apical sampling provides drug gradient data for ST perilymph while avoiding the major distortions of sample composition associated with basal turn sampling. The method can be used for any substance for which a sensitive assay is available and is therefore of high relevance for the development of preclinical and clinical strategies for local drug delivery to the inner ear.
ISSUES RELATED TO SOLUTION CHEMISTRY IN MERCURY SAMPLING IMPINGERS
Analysis of mercury (Hg) speciation in combustion flue gases is often accomplished in standardized sampling trains in which the sample is passed sequentially through a series of aqueous solutions to capture and separate oxidized Hg (Hg2+) and elemental Hg (Hgo). Such methods incl...
Multilevel Mixture Kalman Filter
NASA Astrophysics Data System (ADS)
Guo, Dong; Wang, Xiaodong; Chen, Rong
2004-12-01
The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with sampling from the highest-level sampling space and then draw samples from the associate subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with the delayed estimation method, such as the delayed-sample method, resulting in delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically the coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.
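The marginalization step is specific to conditionally linear models, but the sequential importance sampling backbone the abstract refers to is easy to show in a plain bootstrap particle filter; the 1-D linear-Gaussian model and all parameters below are ours, chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D linear-Gaussian model: x_t = a x_{t-1} + w_t,  y_t = x_t + v_t.
a, q, r, T, N = 0.9, 0.5, 1.0, 100, 1000

x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(scale=np.sqrt(q))
y = x + rng.normal(scale=np.sqrt(r), size=T)

# Bootstrap particle filter: propagate, weight by likelihood, resample.
particles = rng.normal(size=N)
estimates = np.zeros(T)
for t in range(T):
    particles = a * particles + rng.normal(scale=np.sqrt(q), size=N)
    logw = -0.5 * (y[t] - particles) ** 2 / r      # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    particles = particles[rng.choice(N, size=N, p=w)]  # multinomial resampling

print("filter RMSE:", np.sqrt(np.mean((estimates - x) ** 2)))
```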
Multi-arm group sequential designs with a simultaneous stopping rule.
Urach, S; Posch, M
2016-12-30
Multi-arm group sequential clinical trials are efficient designs to compare multiple treatments to a control. They allow one to test for treatment effects already in interim analyses and can have a lower average sample number than fixed sample designs. Their operating characteristics depend on the stopping rule: We consider simultaneous stopping, where the whole trial is stopped as soon as for any of the arms the null hypothesis of no treatment effect can be rejected, and separate stopping, where only recruitment to arms for which a significant treatment effect could be demonstrated is stopped, but the other arms are continued. For both stopping rules, the family-wise error rate can be controlled by the closed testing procedure applied to group sequential tests of intersection and elementary hypotheses. The group sequential boundaries for the separate stopping rule also control the family-wise error rate if the simultaneous stopping rule is applied. However, we show that for the simultaneous stopping rule, one can apply improved, less conservative stopping boundaries for local tests of elementary hypotheses. We derive corresponding improved Pocock and O'Brien type boundaries as well as optimized boundaries to maximize the power or average sample number and investigate the operating characteristics and small sample properties of the resulting designs. To control the power to reject at least one null hypothesis, the simultaneous stopping rule requires a lower average sample number than the separate stopping rule. This comes at the cost of a lower power to reject all null hypotheses. Some of this loss in power can be regained by applying the improved stopping boundaries for the simultaneous stopping rule. The procedures are illustrated with clinical trials in systemic sclerosis and narcolepsy. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
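A quick Monte Carlo check of the single-hypothesis building block (not the multi-arm closed-testing procedure of the paper): with two equally spaced looks and the standard tabulated Pocock constant c ≈ 2.178, the overall type I error under H0 comes out near 0.05.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-look Pocock design: the same critical value c is applied at the
# interim and the final analysis; c = 2.178 is the standard constant
# for K = 2 looks and two-sided alpha = 0.05.
c, n_sim = 2.178, 1_000_000
z1 = rng.normal(size=n_sim)                       # interim z-statistic
z2 = (z1 + rng.normal(size=n_sim)) / np.sqrt(2)   # final z from pooled data
reject = (np.abs(z1) >= c) | (np.abs(z2) >= c)    # stop early or at the end
print("empirical type I error:", reject.mean())   # close to 0.05
```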
Piatak, N.M.; Seal, R.R.; Sanzolone, R.F.; Lamothe, P.J.; Brown, Z.A.
2006-01-01
We report the preliminary results of sequential partial dissolutions used to characterize the geochemical distribution of selenium in stream sediments, mine wastes, and flotation-mill tailings. In general, extraction schemes are designed to extract metals associated with operationally defined solid phases. Total Se concentrations and the mineralogy of the samples are also presented. Samples were obtained from the Elizabeth, Ely, and Pike Hill mines in Vermont, the Callahan mine in Maine, and the Martha mine in New Zealand. These data are presented here with minimal interpretation or discussion. Further analysis of the data will be presented elsewhere.
Stefan-van Staden, Raluca-Ioana; Bokretsion, Rahel Girmai; van Staden, Jacobus F; Aboul-Enein, Hassan Y
2006-01-01
Carbon paste based biosensors for the determination of creatine and creatinine have been integrated into a sequential injection system. Applying the multi-enzyme sequence of creatininase (CA), and/or creatinase (CI) and sarcosine oxidase (SO), hydrogen peroxide has been detected amperometrically. The linear concentration ranges are of pmol/L to nmol/L magnitude, with very low limits of detection. The proposed SIA system can be utilized reliably for the on-line simultaneous detection of creatine and creatinine in pharmaceutical products, as well as in serum samples, with a rate of 34 samples per hour and RSD values better than 0.16% (n=10).
Sequential Sampling Models in Cognitive Neuroscience: Advantages, Applications, and Extensions.
Forstmann, B U; Ratcliff, R; Wagenmakers, E-J
2016-01-01
Sequential sampling models assume that people make speeded decisions by gradually accumulating noisy information until a threshold of evidence is reached. In cognitive science, one such model, the diffusion decision model, is now regularly used to decompose task performance into underlying processes such as the quality of information processing, response caution, and a priori bias. In the cognitive neurosciences, the diffusion decision model has recently been adopted as a quantitative tool to study the neural basis of decision making under time pressure. We present a selective overview of several recent applications and extensions of the diffusion decision model in the cognitive neurosciences.
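A minimal random-walk simulation of the diffusion decision model (no starting-point bias, trial-to-trial variability, or non-decision time; all parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def ddm_trial(drift=0.3, threshold=1.0, dt=0.001, noise=1.0, max_t=5.0):
    """One diffusion-decision trial: evidence accumulates with constant
    drift plus Gaussian noise until it crosses +threshold (choice A) or
    -threshold (choice B). Returns (choice, decision time)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return ("A" if x >= threshold else "B"), t

trials = [ddm_trial() for _ in range(2000)]
print("P(choice A):", np.mean([c == "A" for c, _ in trials]))
print("mean decision time (s):", np.mean([t for _, t in trials]))
```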
Radiation detection method and system using the sequential probability ratio test
Nelson, Karl E [Livermore, CA; Valentine, John D [Redwood City, CA; Beauchamp, Brock R [San Ramon, CA
2007-07-17
A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations is consistent with a specified model within given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection, by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
Silverman, Rachel K; Ivanova, Anastasia
2017-01-01
Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and then placebo non-responders are re-randomized in stage 2. Efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the design parameters, namely the allocation proportion to placebo in stage 1 of SPCD and the weight of stage 1 data in the overall efficacy test statistic, during an interim analysis.
Che, W W; Frey, H Christopher; Lau, Alexis K H
2016-08-16
A sequential measurement method is demonstrated for quantifying the variability in exposure concentration during public transportation. This method was applied in Hong Kong by measuring PM2.5 and CO concentrations along a route connecting 13 transportation-related microenvironments within 3-4 h. The study design takes into account ventilation, proximity to local sources, area-wide air quality, and meteorological conditions. Portable instruments were compacted into a backpack to facilitate measurement under crowded transportation conditions and to quantify personal exposure by sampling at nose level. The route included stops next to three roadside monitors to enable comparison of fixed site and exposure concentrations. PM2.5 exposure concentrations were correlated with the roadside monitors, despite differences in averaging time, detection method, and sampling location. Although highly correlated in temporal trend, PM2.5 concentrations varied significantly among microenvironments, with mean concentration ratios versus roadside monitor ranging from 0.5 for MTR train to 1.3 for bus terminal. Measured inter-run variability provides insight regarding the sample size needed to discriminate between microenvironments with increased statistical significance. The study results illustrate the utility of sequential measurement of microenvironments and policy-relevant insights for exposure mitigation and management.
Santra, Kalyan; Smith, Emily A.; Petrich, Jacob W.; ...
2016-12-12
It is often convenient to know the minimum amount of data needed in order to obtain a result of desired accuracy and precision. It is a necessity in the case of subdiffraction-limited microscopies, such as stimulated emission depletion (STED) microscopy, owing to the limited sample volumes and the extreme sensitivity of the samples to photobleaching and photodamage. We present a detailed comparison of probability-based techniques (the maximum likelihood method and methods based on the binomial and the Poisson distributions) with residual minimization-based techniques for retrieving the fluorescence decay parameters for various two-fluorophore mixtures, as a function of the total number of photon counts, in time-correlated, single-photon counting experiments. The probability-based techniques proved to be the most robust (insensitive to initial values) in retrieving the target parameters and, in fact, performed equivalently to 2-3 significant figures. This is to be expected, as we demonstrate that the three methods are fundamentally related. Furthermore, methods based on the Poisson and binomial distributions have the desirable feature of providing a bin-by-bin analysis of a single fluorescence decay trace, which thus permits statistics to be acquired using only the one trace for not only the mean and median values of the fluorescence decay parameters but also for the associated standard deviations. Lastly, these probability-based methods lend themselves well to the analysis of the sparse data sets that are encountered in subdiffraction-limited microscopies.
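A single-exponential, single-trace rendition of the Poisson-likelihood idea (the paper treats two-fluorophore mixtures; our decay constants and bin grid are invented):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Simulate a TCSPC-like histogram: expected counts per time bin follow
# A * exp(-t / tau); observed counts are Poisson draws around that.
t = np.linspace(0, 10, 200)
true_amp, true_tau = 50.0, 2.0
counts = rng.poisson(true_amp * np.exp(-t / true_tau))

def neg_log_likelihood(theta):
    amp, tau = theta
    if amp <= 0 or tau <= 0:        # keep the search in the valid region
        return np.inf
    mu = amp * np.exp(-t / tau)
    # Poisson NLL up to a constant (the log k! term is theta-free).
    return np.sum(mu - counts * np.log(mu + 1e-12))

fit = minimize(neg_log_likelihood, x0=[30.0, 1.0], method="Nelder-Mead")
print("MLE amplitude, lifetime:", fit.x)   # recovers roughly (50, 2)
```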
Higher moments of net-proton multiplicity distributions in a heavy-ion event pile-up scenario
NASA Astrophysics Data System (ADS)
Garg, P.; Mishra, D. K.
2017-10-01
High-luminosity modern accelerators, like the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory (BNL) and the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN), inherently have event pile-up scenarios which contribute significantly to physics events as a background. While state-of-the-art tracking algorithms and detector concepts take care of these event pile-up scenarios, several offline analytical techniques are used to remove such events from the physics analysis. It is still difficult to identify the remaining pile-up events in an event sample for physics analysis. Since the fraction of these events is significantly small, it may not be as serious an issue for other analyses as it is for an event-by-event analysis; particularly when the characteristics of the multiplicity distribution are themselves the observable, one needs to be very careful. In the present work, we demonstrate how a small fraction of residual pile-up events can change the moments, and their ratios, of an event-by-event net-proton multiplicity distribution, which are sensitive to the dynamical fluctuations due to the QCD critical point. For this study, we assume that the individual event-by-event proton and antiproton multiplicity distributions follow Poisson, negative binomial, or binomial distributions. We observe a significant effect on the cumulants and their ratios of net-proton multiplicity distributions due to pile-up events, particularly at lower energies. It might be crucial to estimate the fraction of pile-up events in the data sample while interpreting the experimental observables for the critical point.
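A toy version of the pile-up effect, using Poisson singles only (the paper also treats negative binomial and binomial baselines): a pile-up event superimposes two independent collisions, and even a percent-level fraction shifts the cumulant ratio away from the Skellam expectation C4/C2 = 1.

```python
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(8)

def net_protons(n_events, lam_p=5.0, lam_pbar=2.0, pileup_frac=0.0):
    """Event-by-event net-proton numbers; a fraction of events are
    pile-up, recorded as the sum of two independent collisions."""
    p = rng.poisson(lam_p, n_events)
    pbar = rng.poisson(lam_pbar, n_events)
    pile = rng.uniform(size=n_events) < pileup_frac
    p = p + pile * rng.poisson(lam_p, n_events)
    pbar = pbar + pile * rng.poisson(lam_pbar, n_events)
    return p - pbar

for frac in (0.0, 0.02):
    net = net_protons(1_000_000, pileup_frac=frac)
    # kstat gives unbiased cumulant estimates; pure Skellam gives ratio 1.
    print(f"pile-up {frac:.0%}: C4/C2 = {kstat(net, 4) / kstat(net, 2):.3f}")
```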
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
Sequential single shot X-ray photon correlation spectroscopy at the SACLA free electron laser
Lehmkühler, Felix; Kwaśniewski, Paweł; Roseker, Wojciech; ...
2015-11-27
In this study, hard X-ray free electron lasers allow, for the first time, access to the dynamics of condensed matter samples on time scales ranging from femtoseconds to several hundred seconds. In particular, the exceptionally large transverse coherence of the X-ray pulses and the high time-averaged flux promise to reach time and length scales that have not been accessible up to now with storage-ring-based sources. However, due to the fluctuations originating from the stochastic nature of the self-amplified spontaneous emission (SASE) process, the application of well established techniques such as X-ray photon correlation spectroscopy (XPCS) is challenging. Here we demonstrate a single-shot-based sequential XPCS study on a colloidal suspension with a relaxation time comparable to the SACLA free-electron laser pulse repetition rate. High quality correlation functions could be extracted without any indications of sample damage. This opens the way for systematic sequential XPCS experiments at FEL sources.
Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.
Bonawitz, Elizabeth; Denison, Stephanie; Gopnik, Alison; Griffiths, Thomas L
2014-11-01
People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
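A toy rendition of the idea for a discrete hypothesis space (the paper analyzes a particular scheme whose long-run behavior matches the posterior; this sketch is merely inspired by it, and all names and parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(9)

hypotheses = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # candidate coin biases
prior = np.full(len(hypotheses), 1 / len(hypotheses))

def wsls(data):
    """Win-Stay, Lose-Sample: hold one hypothesis at a time. After each
    observation, stay with probability equal to that observation's
    likelihood under the current hypothesis; otherwise resample a
    hypothesis from the (exactly tracked) current posterior."""
    posterior = prior.copy()
    h = rng.choice(len(hypotheses), p=posterior)
    for x in data:
        like = hypotheses ** x * (1 - hypotheses) ** (1 - x)
        posterior *= like
        posterior /= posterior.sum()
        p_stay = hypotheses[h] if x == 1 else 1 - hypotheses[h]
        if rng.uniform() > p_stay:
            h = rng.choice(len(hypotheses), p=posterior)
    return hypotheses[h]

data = (rng.uniform(size=200) < 0.7).astype(int)    # true bias 0.7
print("mean selected bias:", np.mean([wsls(data) for _ in range(500)]))
```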
ERIC Educational Resources Information Center
Ronnlund, Michael; Nilsson, Lars-Goran
2008-01-01
To estimate Flynn effects (FEs) on forms of declarative memory (episodic, semantic) and visuospatial ability (Block Design), time-sequential analyses of data for Swedish adult samples (35-80 years) assessed on one of four occasions (1989, 1994, 1999, 2004; n = 2995) were conducted. The results demonstrated cognitive gains across occasions,…
Algorithms for Large-Scale Astronomical Problems
2013-08-01
...implemented as a succession of Hadoop MapReduce jobs and sequential programs written in Java. The sampling and splitting stages are implemented as... one MapReduce job; the partitioning and clustering phases make up another job. The merging stage is implemented as a stand-alone Java program. ... Merging. The merging stage is implemented as a sequential Java program that reads the files with the shell information, which were generated by...
Random sequential adsorption of cubes
NASA Astrophysics Data System (ADS)
Cieśla, Michał; Kubala, Piotr
2018-01-01
Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
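A simplified two-dimensional analogue (axis-aligned squares rather than freely oriented 3-D cubes; unoptimized, parameters ours) showing the core RSA loop of blind sequential trial placements with rejection of overlaps:

```python
import numpy as np

rng = np.random.default_rng(10)

def rsa_aligned_squares(box=1.0, side=0.05, n_tries=50_000):
    """Random sequential adsorption of axis-aligned squares: propose
    uniformly random centers and keep a square only if it overlaps no
    previously accepted square (accepted squares are never moved)."""
    centers = []
    for _ in range(n_tries):
        c = rng.uniform(side / 2, box - side / 2, size=2)
        # Aligned squares overlap iff both center offsets are < side,
        # i.e. the Chebyshev distance between centers is < side.
        if all(np.max(np.abs(c - o)) >= side for o in centers):
            centers.append(c)
    return len(centers) * side ** 2 / box ** 2

# Approaches the reported saturation coverage for aligned squares
# (roughly 0.56) as the number of trials grows.
print("packing fraction:", rsa_aligned_squares())
```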
What Do Lead and Copper Sampling Protocols Mean, and Which Is Right for You?
This presentation will provide a short review of the explicit and implicit concepts behind most of the currently used regulatory and diagnostic sampling schemes for lead, such as: random daytime sampling; automated proportional sampler; 30-minute first-draw stagnation; Sequential...
Lee, M H; Ahn, H J; Park, J H; Park, Y J; Song, K
2011-02-01
This paper presents a quantitative and rapid method for the sequential separation of Pu, (90)Sr and (241)Am nuclides in environmental soil samples using an anion exchange resin and Sr Spec resin. After the sample solution was passed through an anion exchange column connected to a Sr Spec column, Pu isotopes were purified from the anion exchange column. Strontium-90 was separated from other interfering elements by the Sr Spec column. Americium-241 was purified from lanthanides by the anion exchange resin after oxalate co-precipitation. Measurement of Pu and Am isotopes was carried out using an α-spectrometer. Strontium-90 was measured by a low-level liquid scintillation counter. The radiochemical procedure for Pu, (90)Sr and (241)Am nuclides investigated in this study was validated by application to IAEA reference materials and environmental soil samples. Copyright © 2010 Elsevier Ltd. All rights reserved.
Poisson and negative binomial item count techniques for surveys with sensitive question.
Tian, Guo-Liang; Tang, Man-Lai; Wu, Qin; Liu, Yin
2017-04-01
Although the item count technique is useful in surveys with sensitive questions, the privacy of those respondents who possess the sensitive characteristic of interest may not be well protected, due to a defect in its original design. In this article, we propose two new survey designs (namely, the Poisson item count technique and the negative binomial item count technique) which replace the several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide a closed form variance estimate and a confidence interval within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
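A difference-in-means toy in the spirit of item count designs (not the authors' closed-form Poisson/NB estimators, which additionally guarantee a confidence interval inside [0, 1]); the counts and the true proportion below are invented:

```python
import numpy as np

rng = np.random.default_rng(11)

# Control respondents report a Poisson count of innocuous items; the
# treatment group adds 1 if they carry the sensitive trait. The
# sensitive proportion is then estimated by a difference in means.
n, lam, pi_true = 5000, 3.0, 0.15
control = rng.poisson(lam, n)
treatment = rng.poisson(lam, n) + (rng.uniform(size=n) < pi_true)

pi_hat = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / n + control.var(ddof=1) / n)
print(f"pi_hat = {pi_hat:.3f} +/- {1.96 * se:.3f}")
```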
Silva, Guilherme Resende da; Menezes, Liliane Denize Miranda; Lanza, Isabela Pereira; Oliveira, Daniela Duarte de; Silva, Carla Aparecida; Klein, Roger Wilker Tavares; Assis, Débora Cristina Sampaio de; Cançado, Silvana de Vasconcelos
2017-09-01
In order to evaluate the efficiency of the pasteurization process in liquid whole eggs, a UV/visible spectrophotometric method was developed and validated for the assessment of alpha-amylase activity. Samples were collected from 30 lots of raw eggs (n = 30) and divided into three groups: one was reserved for analysis of the raw eggs, the second group was pasteurized at 61.1°C for 3.5 minutes (n = 30), and the third group was pasteurized at 64.4°C for 2.5 minutes (n = 30). In addition to assessing alpha-amylase activity, the microbiological quality of the samples was also evaluated by counting total and thermotolerant coliforms, mesophilic aerobic microorganisms, Staphylococcus spp., and Salmonella spp. The validated spectrophotometric method demonstrated linearity, with a coefficient of determination (R2) greater than 0.99, limits of detection (LOD) and quantification (LOQ) of 0.48 mg kg-1 and 1.16 mg kg-1, respectively, and acceptable precision and accuracy, with relative standard deviation (RSD) values of less than 10% and recovery rates between 98.81% and 105.40%. The results for alpha-amylase activity in the raw egg samples showed high enzyme activity due to near-complete hydrolysis of the starch, while in the eggs pasteurized at 61.1°C, partial inactivation of the enzyme was observed. In the samples of whole eggs pasteurized at 64.4°C, starch hydrolysis did not occur due to enzyme inactivation. The results of the microbiological analyses showed a decrease (P < 0.0001) in the counts for all the studied microorganisms and in the frequency of Salmonella spp. in the egg samples pasteurized under the two time-temperature binomials under investigation, compared to the raw egg samples, which showed high rates of contamination (P < 0.0001). After pasteurization, only one sample (3.33%) was positive for Salmonella spp., indicating a failure in the pasteurization process, which was confirmed by the alpha-amylase test. It was concluded that the validated methodology for testing alpha-amylase activity is adequate for assessing the efficiency of the pasteurization process, and that the time-temperature binomial used in this study is suitable for producing pasteurized eggs with high microbiological quality. © 2017 Poultry Science Association Inc.
Pérez Cid, B; Fernández Alborés, A; Fernández Gómez, E; Faliqé López, E
2001-08-01
The conventional three-stage BCR sequential extraction method was employed for the fractionation of heavy metals in sewage sludge samples from an urban wastewater treatment plant and from an olive oil factory. The results obtained for Cu, Cr, Ni, Pb and Zn in these samples were compared with those attained by a simplified extraction procedure based on microwave single extractions and using the same reagents as employed in each individual BCR fraction. The microwave operating conditions in the single extractions (heating time and power) were optimized for all the metals studied in order to achieve an extraction efficiency similar to that of the conventional BCR procedure. The measurement of metals in the extracts was carried out by flame atomic absorption spectrometry. The results obtained in the first and third fractions by the proposed procedure were, for all metals, in good agreement with those obtained using the BCR sequential method. Although in the reducible fraction the extraction efficiency of the accelerated procedure was inferior to that of the conventional method, the overall metals leached by both microwave single and sequential extractions were basically the same (recoveries between 90.09 and 103.7%), except for Zn in urban sewage sludges where an extraction efficiency of 87% was achieved. Chemometric analysis showed a good correlation between the results given by the two extraction methodologies compared. The application of the proposed approach to a certified reference material (CRM-601) also provided satisfactory results in the first and third fractions, as it was observed for the sludge samples analysed.
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research in radioactive detection. Compared with commonly adopted detection methods incorporating statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times during the analysis of spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the methodology of the sequential Bayesian approach was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were obtained from an event sequence generator based on Monte Carlo sampling theory to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented respectively by the expected detection rate (Am) and the tested detection rate (Gm) parameters, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
Electromagnetic-induction logging to monitor changing chloride concentrations
Metzger, Loren F.; Izbicki, John A.
2013-01-01
Water from the San Joaquin Delta, having chloride concentrations up to 3590 mg/L, has intruded fresh water aquifers underlying Stockton, California. Changes in chloride concentrations at depth within these aquifers were evaluated using sequential electromagnetic (EM) induction logs collected during 2004 through 2007 at seven multiple-well sites as deep as 268 m. Sequential EM logging is useful for identifying changes in groundwater quality through polyvinyl chloride-cased wells in intervals not screened by wells. These unscreened intervals represent more than 90% of the aquifer at the sites studied. Sequential EM logging suggested degrading groundwater quality in numerous thin intervals, typically between 1 and 7 m in thickness, especially in the northern part of the study area. Some of these intervals were unscreened by wells, and would not have been identified by traditional groundwater sample collection. Sequential logging also identified intervals with improving water quality—possibly due to groundwater management practices that have limited pumping and promoted artificial recharge. EM resistivity was correlated with chloride concentrations in sampled wells and in water from core material. Natural gamma log data were used to account for the effect of aquifer lithology on EM resistivity. Results of this study show that a sequential EM logging is useful for identifying and monitoring the movement of high-chloride water, having lower salinities and chloride concentrations than sea water, in aquifer intervals not screened by wells, and that increases in chloride in water from wells in the area are consistent with high-chloride water originating from the San Joaquin Delta rather than from the underlying saline aquifer.
Oral health of schoolchildren in Western Australia.
Arrow, P
2016-09-01
The West Australian School Dental Service (SDS) provides free, statewide, primary dental care to schoolchildren aged 5-17 years. This study reports on an evaluation of the oral health of children examined during the 2014 calendar year. Children were sampled based on their date of birth, and SDS clinicians collected the clinical information. Weighted mean values of caries experience were presented. Negative binomial regression modelling was undertaken to test for factors of significance in the rate of caries occurrence. Data from children aged 5-15 years were used (girls = 4616, boys = 4900). Mean dmft (5-10-year-olds), 1.42 SE 0.03; mean DMFT (6-15-year-olds), 0.51 SE 0.01. A negative binomial regression model of permanent tooth caries found higher rates of caries in children who were from non-fluoridated areas (RR 2.1); Aboriginal (RR 2.4); had gingival inflammation (RR 1.5); had a lower ICSEA level (RR 1.4); and were recalled at more than 24-month intervals (RR 1.8). The study highlighted poor dental health associated with living in non-fluoridated areas, Aboriginal identity, poor oral hygiene, lower socioeconomic level and having extended intervals between dental checkups. Timely assessments and preventive measures targeted at groups, including extending community water fluoridation, may assist in further improving the oral health of children in Western Australia. © 2015 Australian Dental Association.
Rocheleau, J P; Michel, P; Lindsay, L R; Drebot, M; Dibernardo, A; Ogden, N H; Fortin, A; Arsenault, J
2017-10-01
The identification of specific environments sustaining emerging arbovirus amplification and transmission to humans is a key component of public health intervention planning. This study aimed at identifying environmental factors associated with West Nile virus (WNV) infections in southern Quebec, Canada, by modelling and jointly interpreting aggregated clinical data in humans and serological data in pet dogs. Environmental risk factors were estimated in humans by negative binomial regression based on a dataset of 191 human WNV clinical cases reported in the study area between 2011 and 2014. Risk factors for infection in dogs were evaluated by logistic and negative binomial models based on a dataset including WNV serological results from 1442 dogs sampled from the same geographical area in 2013. Forested lands were identified as low-risk environments in humans. Agricultural lands represented higher risk environments for dogs. Environments identified as impacting risk in the current study were somewhat different from those identified in other studies conducted in north-eastern USA, which reported higher risk in suburban environments. In the context of the current study, combining human and animal data allowed a more comprehensive and possibly a more accurate view of environmental WNV risk factors to be obtained than by studying aggregated human data alone.
Negative Binomial Process Count and Mixture Modeling.
Zhou, Mingyuan; Carin, Lawrence
2015-02-01
The seemingly disjoint problems of count and mixture modeling are united under the negative binomial (NB) process. A gamma process is employed to model the rate measure of a Poisson process, whose normalization provides a random probability measure for mixture modeling and whose marginalization leads to an NB process for count modeling. A draw from the NB process consists of a Poisson distributed finite number of distinct atoms, each of which is associated with a logarithmic distributed number of data samples. We reveal relationships between various count- and mixture-modeling distributions and construct a Poisson-logarithmic bivariate distribution that connects the NB and Chinese restaurant table distributions. Fundamental properties of the models are developed, and we derive efficient Bayesian inference. It is shown that with augmentation and normalization, the NB process and gamma-NB process can be reduced to the Dirichlet process and hierarchical Dirichlet process, respectively. These relationships highlight theoretical, structural, and computational advantages of the NB process. A variety of NB processes, including the beta-geometric, beta-NB, marked-beta-NB, marked-gamma-NB and zero-inflated-NB processes, with distinct sharing mechanisms, are also constructed. These models are applied to topic modeling, with connections made to existing algorithms under Poisson factor analysis. Example results show the importance of inferring both the NB dispersion and probability parameters.
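The gamma-Poisson route to the NB distribution, which underlies this construction, can be checked directly by simulation (parameters hypothetical; this shows the marginalization step only, not the full nonparametric process):

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(12)

# Draw a rate from Gamma(shape=r, scale=p/(1-p)), then a count from
# Poisson(rate); the marginal count is negative binomial NB(r, 1-p)
# in SciPy's parameterization.
r, p = 4.0, 0.3
rates = rng.gamma(shape=r, scale=p / (1 - p), size=1_000_000)
counts = rng.poisson(rates)

ks = np.arange(5)
print("empirical :", np.round([(counts == k).mean() for k in ks], 4))
print("nbinom pmf:", np.round(nbinom.pmf(ks, r, 1 - p), 4))
```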
Goodness-of-fit tests and model diagnostics for negative binomial regression of RNA sequencing data.
Mi, Gu; Di, Yanming; Schafer, Daniel W
2015-01-01
This work is about assessing model adequacy for negative binomial (NB) regression, particularly (1) assessing the adequacy of the NB assumption, and (2) assessing the appropriateness of models for NB dispersion parameters. Tools for the first are appropriate for NB regression generally; those for the second are primarily intended for RNA sequencing (RNA-Seq) data analysis. The typically small number of biological samples and large number of genes in RNA-Seq analysis motivate us to address the trade-offs between robustness and statistical power using NB regression models. One widely-used power-saving strategy, for example, is to assume some commonalities of NB dispersion parameters across genes via simple models relating them to mean expression rates, and many such models have been proposed. As RNA-Seq analysis is becoming ever more popular, it is appropriate to make more thorough investigations into power and robustness of the resulting methods, and into practical tools for model assessment. In this article, we propose simulation-based statistical tests and diagnostic graphics to address model adequacy. We provide simulated and real data examples to illustrate that our proposed methods are effective for detecting the misspecification of the NB mean-variance relationship as well as judging the adequacy of fit of several NB dispersion models.
Austin, Shamly; Qu, Haiyan; Shewchuk, Richard M
2012-10-01
To examine the association between adherence to physical activity guidelines and health-related quality of life (HRQOL) among individuals with arthritis. A cross-sectional sample of 33,071 US adults, 45 years or older, with physician-diagnosed arthritis was obtained from the 2007 Behavioral Risk Factor Surveillance System survey. We conducted negative binomial regression analysis to examine HRQOL as a function of adherence to physical activity guidelines, controlling for physicians' recommendations for physical activity, age, sex, race, education, marital status, employment, annual income, health insurance, personal physician, emotional support, body mass index, activity limitations, health status, and co-morbidities, based on the Behavioral Model of Health Services Utilization. Descriptive statistics showed that 60% of adults with arthritis did not adhere to physical activity guidelines; mean physically and mentally unhealthy days were 7.7 and 4.4 days, respectively. Results from the negative binomial regression indicated that individuals who did not adhere to physical activity guidelines had 1.14 more physically unhealthy days and 1.12 more mentally unhealthy days than those who adhered, controlling for covariates. Adherence to physical activity is important for improving HRQOL for individuals with arthritis. However, adherence is low among this population. Interventions are required to engage individuals with arthritis in physical activity.
Technical and biological variance structure in mRNA-Seq data: life in the real world
2012-01-01
Background mRNA expression data from next-generation sequencing platforms are obtained in the form of counts per gene or exon. Counts have classically been assumed to follow a Poisson distribution, in which the variance equals the mean. The Negative Binomial distribution, which allows for over-dispersion, i.e., for the variance to be greater than the mean, is also commonly used to model count data. Results In mRNA-Seq data from 25 subjects, we found that technical variation generally follows a Poisson distribution, as has been reported previously, whereas biological variability was over-dispersed relative to the Poisson model. The mean-variance relationship across all genes was quadratic, in keeping with a Negative Binomial (NB) distribution. Over-dispersed Poisson and NB distributional assumptions demonstrated marked improvements in goodness-of-fit (GOF) over the standard Poisson model assumptions, but with evidence of over-fitting in some genes. Modeling of experimental effects improved GOF for high-variance genes but increased the over-fitting problem. Conclusions These conclusions will guide the development of analytical strategies for accurate modeling of variance structure in these data and for sample size determination, which in turn will aid in the identification of true biological signals that inform our understanding of biological systems. PMID:22769017
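As a rough check of the quadratic mean-variance claim, one can simulate genes under an NB model and recover the dispersion from the empirical means and variances. This sketch is ours, with made-up parameters:

```python
# Sketch (ours): checking the quadratic mean-variance relationship
# var = mu + phi * mu^2 implied by the NB model, on simulated genes.
import numpy as np

rng = np.random.default_rng(2)
phi = 0.15                                    # true dispersion
mus = rng.uniform(5, 500, size=2000)          # per-gene mean expression
genes = [rng.negative_binomial(1/phi, 1/(1 + phi*m), size=25) for m in mus]

m_hat = np.array([g.mean() for g in genes])
v_hat = np.array([g.var(ddof=1) for g in genes])

# Regress (v - m) on m^2 through the origin to recover phi
phi_hat = np.sum((v_hat - m_hat) * m_hat**2) / np.sum(m_hat**4)
print(f"estimated dispersion: {phi_hat:.3f} (true {phi})")
```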
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV than for SRS, and efficiency was as high as 2 (i.e., the variance from TSSAV was half that of SRS). We have found that design performance, especially for adaptive designs, is often case-specific. The efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend simulations tailored to the application of interest as highly useful for evaluating designs in preparation for sampling rare and clustered populations.
ERIC Educational Resources Information Center
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
Technical Reports Prepared Under Contract N00014-76-C-0475.
1987-05-29
Fragment of the report index (No., title, author, date; ellipses mark text missing in the source): 264, Approximations to Densities in Geometric Probability, H. Solomon and M.A. Stephens, 10/27/78; 265, Sequential ...; ... Certain Multivariate Normal Probabilities, S. Iyengar, 8/12/82; 323, EDF Statistics for Testing for the Gamma Distribution with ..., M.A. Stephens, 8/13/82; ... Nets, ...-20-85; 360, Random Sequential Coding by Hamming Distance, Yoshiaki Itoh and Herbert Solomon, 07-11-85; 361, Transforming Censored Samples and Testing Fit.
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.
2006-02-14
Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically positioned near the sample cells. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
Hansson, Mari; Pemberton, John; Engkvist, Ola; Feierberg, Isabella; Brive, Lars; Jarvis, Philip; Zander-Balderud, Linda; Chen, Hongming
2014-06-01
High-throughput screening (HTS) is widely used in the pharmaceutical industry to identify novel chemical starting points for drug discovery projects. The current study focuses on the relationship between molecular hit rate in recent in-house HTS and four common molecular descriptors: lipophilicity (ClogP), size (heavy atom count, HEV), fraction of sp(3)-hybridized carbons (Fsp3), and fraction of molecular framework (f(MF)). The molecular hit rate is defined as the fraction of times the molecule has been assigned as active in the HTS campaigns in which it has been screened. Beta-binomial statistical models were built to model the molecular hit rate as a function of these descriptors. The advantage of the beta-binomial statistical models is that the correlation between the descriptors is taken into account. Higher-degree polynomial terms of the descriptors were also added to the beta-binomial statistical model to improve model quality. The relative influence of the different molecular descriptors on molecular hit rate was estimated, with their mutual correlations accounted for through the beta-binomial statistical modeling. The results show that ClogP has the largest influence on the molecular hit rate, followed by Fsp3 and HEV. f(MF) has only a minor influence beyond its correlation with the other molecular descriptors. © 2013 Society for Laboratory Automation and Screening.
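A minimal version of the beta-binomial hit-rate model can be fitted by maximum likelihood as sketched below (ours; the published models additionally regress the Beta parameters on the four descriptors, which is omitted here):

```python
# Hedged sketch of a beta-binomial model for molecular hit rate; all names
# and values are illustrative, and the descriptor regression is omitted.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
n_screens = rng.integers(5, 40, size=500)          # screens per molecule
true_rate = rng.beta(0.8, 15, size=500)            # molecule-level hit rates
hits = rng.binomial(n_screens, true_rate)          # observed hit counts

def neg_loglik(params):
    a, b = np.exp(params)                          # keep a, b positive
    return -stats.betabinom.logpmf(hits, n_screens, a, b).sum()

res = optimize.minimize(neg_loglik, x0=[0.0, 2.0], method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
print(f"fitted Beta({a_hat:.2f}, {b_hat:.2f}); mean hit rate {a_hat/(a_hat+b_hat):.3f}")
```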
Data mining of tree-based models to analyze freeway accident frequency.
Chang, Li-Yen; Chen, Wen-Chieh
2005-01-01
Statistical models, such as Poisson or negative binomial regression models, have been employed to analyze vehicle accident frequency for many years. However, these models have their own model assumptions and pre-defined underlying relationship between dependent and independent variables. If these assumptions are violated, the model could lead to erroneous estimation of accident likelihood. Classification and Regression Tree (CART), one of the most widely applied data mining techniques, has been commonly employed in business administration, industry, and engineering. CART does not require any pre-defined underlying relationship between target (dependent) variable and predictors (independent variables) and has been shown to be a powerful tool, particularly for dealing with prediction and classification problems. This study collected the 2001-2002 accident data of National Freeway 1 in Taiwan. A CART model and a negative binomial regression model were developed to establish the empirical relationship between traffic accidents and highway geometric variables, traffic characteristics, and environmental factors. The CART findings indicated that the average daily traffic volume and precipitation variables were the key determinants for freeway accident frequencies. By comparing the prediction performance between the CART and the negative binomial regression models, this study demonstrates that CART is a good alternative method for analyzing freeway accident frequencies.
Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger
2017-09-01
The coefficient of determination R² quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R² for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R² that we called [Formula: see text] for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
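The delta-method point mentioned above can be illustrated numerically: for a log link, var(ln Y) is approximately var(Y)/E[Y]². A small sketch (ours, with an arbitrary gamma-distributed response):

```python
# Small illustration (ours) of the delta method: on the log scale,
# var(ln Y) is approximately var(Y) / E[Y]^2.
import numpy as np

rng = np.random.default_rng(7)
y = rng.gamma(shape=4.0, scale=2.0, size=200_000)   # a non-Gaussian response
approx = y.var() / y.mean() ** 2                    # delta-method approximation
exact = np.log(y).var()                             # empirical variance on log scale
print(f"delta method: {approx:.4f}, empirical: {exact:.4f}")
```

The approximation improves as the distribution concentrates around its mean (here, as the gamma shape parameter grows).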
Mketo, Nomvano; Nomngongo, Philiswa N; Ngila, J Catherine
2018-05-15
A rapid three-step sequential extraction method was developed under microwave radiation, followed by inductively coupled plasma-optical emission spectroscopic (ICP-OES) and ion-chromatographic (IC) analysis, for the determination of sulphur forms in coal samples. The experimental conditions of the proposed microwave-assisted sequential extraction (MW-ASE) procedure were optimized using multivariate mathematical tools. Pareto charts generated from a 2³ full factorial design showed that extraction time has an insignificant effect on the extraction of sulphur species; therefore, all the sequential extraction steps were performed for 5 min. The optimum values according to the central composite designs and contour plots of the response surface methodology were 200 °C (microwave temperature) and 0.1 g (coal amount) for all the investigated extracting reagents (H2O, HCl and HNO3). When the optimum conditions of the proposed MW-ASE procedure were applied to coal CRMs, SARM 18 showed more organic sulphur (72%), and the other two coal CRMs (SARMs 19 and 20) were dominated by sulphide sulphur species (52-58%). The sum of the sulphur forms from the sequential extraction steps showed consistent agreement (95-96%) with the certified total sulphur values on the coal CRM certificates. This correlation, in addition to the good precision (1.7%) achieved by the proposed procedure, suggests that the sequential extraction method is reliable, accurate and reproducible. To guard against the destruction of pyritic and organic sulphur forms in extraction step 1, water was used instead of HCl. Additionally, the notoriously acidic mixture (HCl/HNO3/HF) was replaced by a greener reagent (H2O2) in the last extraction step. Therefore, the proposed MW-ASE method can be applied in routine laboratories for the determination of sulphur forms in coal and coal-related matrices. Copyright © 2018 Elsevier B.V. All rights reserved.
Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello
2013-10-26
Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
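The beta-binomial risk calculation at the heart of such a design can be sketched as follows (our illustration with hypothetical thresholds; it treats the whole sample as exchangeable beta-binomial, a simplification of the per-cluster design in the paper):

```python
# Illustration (our numbers): using the beta-binomial to evaluate an LQAS
# rule when clustering inflates variance. Classify "high quality" if the
# number of correct records among n sampled is >= d.
from scipy import stats

n, d = 60, 50                 # sample size and decision threshold (hypothetical)
rho = 0.1                     # assumed intra-cluster correlation

def ab(p, rho):
    """Beta parameters matching mean p and intra-cluster correlation rho."""
    k = (1 - rho) / rho
    return p * k, (1 - p) * k

# Risk of accepting a truly poor programme (true proportion 0.7)
a, b = ab(0.70, rho)
alpha = 1 - stats.betabinom.cdf(d - 1, n, a, b)
# Risk of rejecting a truly good programme (true proportion 0.9)
a, b = ab(0.90, rho)
beta = stats.betabinom.cdf(d - 1, n, a, b)
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
```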
Veiga, Helena Perrut; Bianchini, Esther Mandelbaum Gonçalves
2012-01-01
To perform an integrative review of studies on sequential swallowing of liquids, characterizing the methodology of the studies and the most important findings in young and elderly adults. The review covered literature written in English and Portuguese in the PubMed, LILACS, SciELO and MEDLINE databases within the past twenty years, restricted to fully available articles, using the following uniterms: sequential swallowing, swallowing, dysphagia, cup, straw, in various combinations. Research articles with a methodological approach to the characterization of sequential liquid swallowing by young and/or elderly adults, regardless of health condition, were included; studies involving only the esophageal phase were excluded. The following research indicators were applied: objectives; number and gender of participants; age group; amount of liquid offered; intake instruction; utensil used; methods; and main findings. Eighteen studies met the established criteria. The articles were categorized according to sample characterization and methodology regarding volume intake, utensil used, and types of exams. Most studies investigated only healthy individuals with no swallowing complaints. Subjects were given different instructions as to intake of the full volume: in the usual manner, continuously, or as rapidly as possible. The findings on the characterization of sequential swallowing varied and were described in accordance with the objectives of each study. There was great variability in the methodology employed to characterize sequential swallowing. Some findings are not comparable; sequential swallowing is not examined in most swallowing protocols, and there is no consensus on the influence of the utensil.
Wilkes, E J A; Cowling, A; Woodgate, R G; Hughes, K J
2016-10-15
Faecal egg counts (FEC) are used widely for monitoring of parasite infection in animals, treatment decision-making and estimation of anthelmintic efficacy. When a single count or sample mean is used as a point estimate of the expectation of the egg distribution over some time interval, the variability in the egg density is not accounted for. Although variability, including quantifying sources, of egg count data has been described, the spatiotemporal distribution of nematode eggs in faeces is not well understood. We believe that statistical inference about the mean egg count for treatment decision-making has not been used previously. The aim of this study was to examine the density of Parascaris eggs in solution and faeces and to describe the use of hypothesis testing for decision-making. Faeces from two foals with Parascaris burdens were mixed with magnesium sulphate solution and 30 McMaster chambers were examined to determine the egg distribution in a well-mixed solution. To examine the distribution of eggs in faeces from an individual animal, three faecal piles from a foal with a known Parascaris burden were obtained, from which 81 counts were performed. A single faecal sample was also collected daily from 20 foals on three consecutive days and a FEC was performed on three separate portions of each sample. As appropriate, Poisson or negative binomial confidence intervals for the distribution mean were calculated. Parascaris eggs in a well-mixed solution conformed to a homogeneous Poisson process, while the egg density in faeces was not homogeneous, but aggregated. This study provides an extension from homogeneous to inhomogeneous Poisson processes, leading to an understanding of why Poisson and negative binomial distributions correspondingly provide a good fit for egg count data. The application of one-sided hypothesis tests for decision-making is presented. Copyright © 2016 Elsevier B.V. All rights reserved.
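The one-sided hypothesis tests and exact intervals described above can be illustrated for the Poisson case (our sketch with hypothetical counts and threshold; the aggregated-faeces case would use negative binomial intervals instead):

```python
# Sketch (ours): an exact one-sided Poisson test of whether the mean egg
# count exceeds a treatment threshold, plus an exact confidence interval.
from scipy import stats

counts = [12, 9, 15]                 # hypothetical FEC replicates
total, k = sum(counts), len(counts)
threshold = 10                       # treat if mean count exceeds this

# One-sided test of H0: mean <= threshold (the total is Poisson, mean k*threshold)
p_value = stats.poisson.sf(total - 1, k * threshold)
print(f"P(total >= {total} | mean = {threshold}) = {p_value:.3f}")

# Exact 95% CI for the per-sample mean via the chi-square relation
lo = stats.chi2.ppf(0.025, 2 * total) / (2 * k)
hi = stats.chi2.ppf(0.975, 2 * (total + 1)) / (2 * k)
print(f"95% CI for mean count: ({lo:.1f}, {hi:.1f})")
```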
Bennett, Bradley C; Balick, Michael J
2014-03-28
Medical research on plant-derived compounds requires a breadth of expertise spanning field, laboratory, and clinical skills. Too often, basic botanical skills are evidently lacking, especially with respect to plant taxonomy and botanical nomenclature. Binomial and familial names, synonyms, and author citations are often misconstrued. The correct botanical name, linked to a vouchered specimen, is the sine qua non of phytomedical research. Without the unique identifier of a proper binomial, research cannot accurately be linked to the existing literature. Perhaps more significant is the ambiguity of species determinations that ensues from poor taxonomic practices. This uncertainty, not surprisingly, obstructs reproducibility of results, the cornerstone of science. Based on our combined six decades of experience with medicinal plants, we discuss the problems of inaccurate taxonomy and botanical nomenclature in biomedical research. These problems appear all too frequently in the manuscripts and grant applications that we review, and they extend to the published literature. We also review the literature on the importance of taxonomy in other disciplines that relate to medicinal plant research. In most cases, questions regarding orthography, synonymy, author citations, and current family designations of plant binomials can be resolved using widely available online databases and other electronic resources. Some complex problems require consultation with a professional plant taxonomist, which is also important for accurate identification of voucher specimens. Researchers should provide the currently accepted binomial and complete author citation, provide relevant synonyms, and employ the Angiosperm Phylogeny Group III family name. Taxonomy is a vital adjunct not only to plant-medicine research but to virtually every field of science. Medicinal plant researchers can increase the precision and utility of their investigations by following sound practices with respect to botanical nomenclature. Correct spellings, accepted binomials, author citations, synonyms, and current family designations can readily be found in reliable online databases. When questions arise, researchers should consult plant taxonomists. © 2013 Published by Elsevier Ireland Ltd.
Hosseinpour, Mehdi; Yahaya, Ahmad Shukri; Sadullah, Ahmad Farhan
2014-01-01
Head-on crashes are among the most severe collision types and of great concern to road safety authorities, which justifies greater effort to reduce both their frequency and severity. To this end, it is necessary first to identify the factors associated with crash occurrence. This can be done by developing crash prediction models that relate crash outcomes to a set of contributing factors. This study identifies the factors affecting both the frequency and severity of head-on crashes that occurred on 448 segments of five federal roads in Malaysia. Data on road characteristics and crash history were collected on the study segments during a 4-year period between 2007 and 2010. The frequency of head-on crashes was fitted by developing and comparing seven count-data models, including Poisson, standard negative binomial (NB), random-effect negative binomial, hurdle Poisson, hurdle negative binomial, zero-inflated Poisson, and zero-inflated negative binomial models. To model crash severity, a random-effect generalized ordered probit model (REGOPM) was used, given that a head-on crash had occurred. With respect to crash frequency, the random-effect negative binomial (RENB) model was found to outperform the other models according to goodness-of-fit measures. Based on the results of the model, the variables horizontal curvature, terrain type, heavy-vehicle traffic, and access points were positively related to the frequency of head-on crashes, while posted speed limit and shoulder width decreased the crash frequency. With regard to crash severity, the results of the REGOPM showed that horizontal curvature, paved shoulder width, terrain type, and side friction were associated with more severe crashes, whereas land use, access points, and presence of a median reduced the probability of severe crashes. Based on the results of this study, some potential countermeasures are proposed to minimize the risk of head-on crashes. Copyright © 2013 Elsevier Ltd. All rights reserved.
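A minimal version of the model-comparison step, fitting Poisson and NB regressions to simulated segment counts and comparing AIC, might look as follows (ours; in practice the dispersion parameter would be estimated rather than fixed, and the paper's random-effect and zero-inflated variants are not shown):

```python
# Hedged sketch using statsmodels on simulated data (not the paper's data):
# comparing Poisson and negative binomial count regressions by AIC.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 448
X = sm.add_constant(np.column_stack([
    rng.normal(size=n),              # e.g. standardized curvature (illustrative)
    rng.normal(size=n),              # e.g. standardized log traffic volume
]))
mu = np.exp(X @ np.array([0.5, 0.3, 0.6]))
alpha = 0.8                          # overdispersion used to generate the data
y = rng.negative_binomial(1 / alpha, 1 / (1 + alpha * mu))

pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=alpha)).fit()
print(f"Poisson AIC: {pois.aic:.1f}, NB AIC: {nb.aic:.1f}")
```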
Terrill, Thomas H; Wolfe, Richard M; Muir, James P
2010-12-01
Browse species containing condensed tannins (CTs) are an important source of nutrition for grazing/browsing livestock and wildlife in many parts of the world, but information on fiber concentration and CT-fiber interactions for these plants is lacking. Ten forage or browse species with a range of CT concentrations were oven dried and freeze dried and then analyzed for ash-corrected neutral detergent fiber (NDFom) and corrected acid detergent fiber (ADFom) using separate samples (ADFSEP) and sequential NDF-ADF analysis (ADFSEQ) with the ANKOM™ fiber analysis system. The ADFSEP and ADFSEQ residues were then analyzed for nitrogen (N) concentration. Oven drying increased (P < 0.05) fiber concentrations with some species, but not with others. For high-CT forage and browse species, ADFSEP concentrations were greater (P < 0.05) than NDFom values and approximately double the ADFSEQ values. Nitrogen concentration was greater (P < 0.05) in ADFSEP than ADFSEQ residues, likely due to precipitation with CTs. Sequential NDF-ADF analysis gave more realistic values and appeared to remove most of the fiber residue contaminants in CT forage samples. Freeze drying samples with sequential NDF-ADF analysis is recommended in the ANKOM™ fiber analysis system with CT-containing forage and browse species. Copyright © 2010 Society of Chemical Industry.
Bayesian sample size calculations in phase II clinical trials using a mixture of informative priors.
Gajewski, Byron J; Mayo, Matthew S
2006-08-15
A number of researchers have discussed phase II clinical trials from a Bayesian perspective. A recent article by Mayo and Gajewski focuses on sample size calculations, which they determine by specifying an informative prior distribution and then calculating a posterior probability that the true response will exceed a prespecified target. In this article, we extend these sample size calculations to include a mixture of informative prior distributions. The mixture comes from several sources of information; for example, consider information from two (or more) clinicians, where the first clinician is pessimistic about the drug and the second is optimistic. We tabulate the results for sample size design using the fact that a simple mixture of Betas is a conjugate family for the Beta-Binomial model. We discuss the theoretical framework for these types of Bayesian designs and show that the Bayesian designs in this paper approximate this theoretical framework. Copyright 2006 John Wiley & Sons, Ltd.
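The conjugacy fact used for the tabulations, that a mixture of Beta priors remains a Beta mixture after a binomial observation, with weights updated by the marginal (beta-binomial) likelihoods, can be sketched as follows (our illustration with hypothetical clinician priors):

```python
# Sketch (ours) of the conjugate update for a two-component Beta mixture
# prior with a binomial outcome; all numbers are hypothetical.
from scipy import stats

# Pessimistic and optimistic clinician priors: (weight, (a, b))
priors = [(0.5, (2, 8)), (0.5, (8, 2))]
n, x = 20, 12                                    # trial size and responders

post = []
for w, (a, b) in priors:
    marg = w * stats.betabinom.pmf(x, n, a, b)   # weight times marginal likelihood
    post.append((marg, (a + x, b + n - x)))      # conjugate Beta update
z = sum(m for m, _ in post)

target = 0.5                                     # prespecified response target
prob = sum((m / z) * stats.beta.sf(target, a, b) for m, (a, b) in post)
print(f"posterior P(response rate > {target}) = {prob:.3f}")
```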
Improved confidence intervals when the sample is counted an integer times longer than the blank.
Potter, William Edward; Strzelczyk, Jadwiga Jodi
2011-05-01
Past computer solutions for confidence intervals in paired counting are extended to the case where the ratio of the sample count time to the blank count time is taken to be an integer, IRR. Previously, confidence intervals have been named Neyman-Pearson confidence intervals; more correctly they should have been named Neyman confidence intervals or simply confidence intervals. The technique utilized mimics a technique used by Pearson and Hartley to tabulate confidence intervals for the expected value of the discrete Poisson and Binomial distributions. The blank count and the contribution of the sample to the gross count are assumed to be Poisson distributed. The expected value of the blank count, in the sample count time, is assumed known. The net count, OC, is taken to be the gross count minus the product of IRR with the blank count. The probability density function (PDF) for the net count can be determined in a straightforward manner.
Sample extraction is one of the most important steps in arsenic speciation analysis of solid dietary samples. One of the problem areas in this analysis is the partial extraction of arsenicals from seafood samples. The partial extraction allows the toxicity of the extracted arse...
Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure
Salehi, M.; Smith, D.R.
2005-01-01
Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.
Manganese speciation of laboratory-generated welding fumes
Andrews, Ronnee N.; Keane, Michael; Hanley, Kevin W.; Feng, H. Amy; Ashley, Kevin
2015-01-01
The objective of this laboratory study was to identify and measure manganese (Mn) fractions in chamber-generated welding fumes (WF) and to evaluate and compare the results from a sequential extraction procedure for Mn fractions with that of an acid digestion procedure for measurement of total, elemental Mn. To prepare Mn-containing particulate matter from representative welding processes, a welding system was operated in short circuit gas metal arc welding (GMAW) mode using both stainless steel (SS) and mild carbon steel (MCS) and also with flux cored arc welding (FCAW) and shielded metal arc welding (SMAW) using MCS. Generated WF samples were collected onto polycarbonate filters before homogenization, weighing and storage in scintillation vials. The extraction procedure consisted of four sequential steps to measure various Mn fractions based upon selective solubility: (1) soluble Mn dissolved in 0.01 M ammonium acetate; (2) Mn (0,II) dissolved in 25 % (v/v) acetic acid; (3) Mn (III,IV) dissolved in 0.5% (w/v) hydroxylamine hydrochloride in 25% (v/v) acetic acid; and (4) insoluble Mn extracted with concentrated hydrochloric and nitric acids. After sample treatment, the four fractions were analyzed for Mn by inductively coupled plasma-atomic emission spectroscopy (ICP-AES). WF from GMAW and FCAW showed similar distributions of Mn species, with the largest concentrations of Mn detected in the Mn (0,II) and insoluble Mn fractions. On the other hand, the majority of the Mn content of SMAW fume was detected as Mn (III,IV). Although the concentration of Mn measured from summation of the four sequential steps was statistically significantly different from that measured from the hot block dissolution method for total Mn, the difference is small enough to be of no practical importance for industrial hygiene air samples, and either method may be used for Mn measurement. The sequential extraction method provides valuable information about the oxidation state of Mn in samples and allows for comparison to results from previous work and from total Mn dissolution methods. PMID:26345630
Binomial test statistics using Psi functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Kimiko o
2007-01-01
For the negative binomial model (probability generating function (p + 1 - pt)^(-k)), a logarithmic derivative is the Psi function difference ψ(k + x) - ψ(k); this and its derivatives lead to a test statistic to decide on the validity of a specified model. The test statistic is computed from a data base, so a comparison between theory and application is available. Note that the test function is not dominated by outliers. Applications to (i) Fisher's tick data, (ii) accidents data, and (iii) Weldon's dice data are included.
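The Psi function difference has a simple closed form for integer counts, ψ(k + x) - ψ(k) = Σ_{i<x} 1/(k + i), which is easy to verify (our sketch):

```python
# Quick check (ours) of the identity psi(k + x) - psi(k) = sum_{i<x} 1/(k + i),
# the logarithmic derivative underlying the NB test statistic described above.
from scipy.special import digamma

k, x = 2.5, 4
lhs = digamma(k + x) - digamma(k)
rhs = sum(1 / (k + i) for i in range(x))
print(f"{lhs:.6f} vs {rhs:.6f}")
```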
Lytras, Theodore; Georgakopoulou, Theano; Tsiodras, Sotirios
2018-01-01
Greece is currently experiencing a large measles outbreak, in the context of multiple similar outbreaks across Europe. We devised and applied a modified chain-binomial epidemic model, requiring very simple data, to estimate the transmission parameters of this outbreak. Model results indicate sustained measles transmission among the Greek Roma population, necessitating a targeted mass vaccination campaign to halt further spread of the epidemic. Our model may be useful for other countries facing similar measles outbreaks. PMID:29717695
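The paper's modified chain-binomial model is not reproduced here, but the classic Reed-Frost chain binomial that underlies this model family can be simulated in a few lines (our illustration with hypothetical parameters):

```python
# A classic Reed-Frost chain-binomial simulation (illustrative only; the
# paper's modified model differs in its data requirements and parameters).
import numpy as np

rng = np.random.default_rng(5)
S, I = 995, 5                 # initial susceptibles and infectives
q = 0.9975                    # per-contact escape probability (hypothetical)

cases_by_generation = []
while I > 0:
    p_inf = 1 - q ** I                        # infection risk this generation
    new_I = rng.binomial(S, p_inf)            # chain-binomial draw
    cases_by_generation.append(new_I)
    S, I = S - new_I, new_I
print(cases_by_generation)
```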
Papini, Paolo; Faustini, Annunziata; Manganello, Rosa; Borzacchi, Giancarlo; Spera, Domenico; Perucci, Carlo A
2005-01-01
To determine the frequency of sampling in small water distribution systems (<5,000 inhabitants) and compare the results under different hypotheses about the distribution of bacteria. We carried out two sampling programs to monitor the water distribution system of a town in Central Italy between July and September 1992; the Poisson distribution assumption implied 4 water samples, whereas the negative binomial distribution assumption implied 21 samples. Coliform organisms were used as indicators of water safety. The network consisted of two pipe rings and two wells fed by the same water source. The number of summer customers varied considerably, from 3,000 to 20,000. The mean density was 2.33 coliforms/100 ml (sd = 5.29) for 21 samples and 3 coliforms/100 ml (sd = 6) for 4 samples. However, the hypothesis of homogeneity was rejected (p-value <0.001), and under the heterogeneity assumption the probability of a Type II error was higher with 4 samples (β = 0.24) than with 21 (β = 0.05). For this small network, determining the sample size according to the heterogeneity hypothesis strengthens the statement that water is drinkable, compared with the homogeneity assumption.
Amorim, Fábio A C; Ferreira, Sérgio L C
2005-02-28
In the present paper, a simultaneous pre-concentration procedure for the sequential determination of cadmium and lead in table salt samples using flame atomic absorption spectrometry is proposed. The method is based on the liquid-liquid extraction of cadmium(II) and lead(II) ions as dithizone complexes and direct aspiration of the organic phase into the spectrometer. The sequential determination of cadmium and lead is possible using a computer program. The optimization step was performed by a two-level fractional factorial design involving the variables pH, dithizone mass, shaking time after addition of dithizone, and shaking time after addition of solvent. At the levels studied, these variables were not significant. The experimental conditions established call for a sample volume of 250 mL and extraction with 4.0 mL of methyl isobutyl ketone. In this way, the procedure allows determination of cadmium and lead in table salt samples with a pre-concentration factor higher than 80 and detection limits of 0.3 ng g(-1) for cadmium and 4.2 ng g(-1) for lead. The precision, expressed as relative standard deviation (n = 10), was 5.6 and 2.6% for cadmium concentrations of 2 and 20 ng g(-1), respectively, and 3.2 and 1.1% for lead concentrations of 20 and 200 ng g(-1), respectively. Recoveries of cadmium and lead in several samples, measured by the standard addition technique, also proved that this procedure is not affected by the matrix and can be applied satisfactorily to the determination of cadmium and lead in saline samples. The method was applied to the evaluation of cadmium and lead concentrations in table salt samples consumed in Salvador City, Bahia, Brazil.
Fitts, Douglas A
2017-09-21
The variable criteria sequential stopping rule (vcSSR) is an efficient way to add sample size to planned ANOVA tests while holding the observed rate of Type I errors, α_o, constant. The only difference from regular null hypothesis testing is that criteria for stopping the experiment are obtained from a table based on the desired power, rate of Type I errors, and beginning sample size. The vcSSR was developed using between-subjects ANOVAs, but it should work with p values from any type of F test. In the present study, the α_o remained constant at the nominal level when using the previously published table of criteria with repeated measures designs with various numbers of treatments per subject, Type I error rates, values of ρ, and four different sample size models. New power curves allow researchers to select the optimal sample size model for a repeated measures experiment. The criteria held α_o constant either when used with a multiple correlation that varied the sample size model and the number of predictor variables, or when used with MANOVA with multiple groups and two levels of a within-subject variable at various levels of ρ. Although not recommended for use with χ² tests such as the Friedman rank ANOVA test, the vcSSR produces predictable results based on the relation between F and χ². Together, the data confirm the view that the vcSSR can be used to control Type I errors during sequential sampling with any t- or F-statistic rather than being restricted to certain ANOVA designs.
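The vcSSR's criteria tables are not reproduced here, but the problem they solve is easy to demonstrate: naively re-testing at the nominal level each time observations are added inflates the observed Type I error rate well above nominal (our sketch, using a one-sample t test as a stand-in for the F tests discussed above):

```python
# Demonstration (ours, not the vcSSR itself): naive sequential re-testing at
# p < .05 inflates Type I error; the vcSSR's adjusted criteria correct this.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
rejections = 0
n_sims, n0, n_max, step = 5000, 10, 40, 5
for _ in range(n_sims):
    x = list(rng.normal(size=n0))             # H0 is true: mean is 0
    while True:
        p = stats.ttest_1samp(x, 0).pvalue
        if p < 0.05:
            rejections += 1
            break
        if len(x) >= n_max:
            break
        x.extend(rng.normal(size=step))       # add more subjects and re-test
print(f"observed Type I error: {rejections / n_sims:.3f} (nominal .05)")
```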
Shakeri Yekta, Sepehr; Gustavsson, Jenny; Svensson, Bo H; Skyllberg, Ulf
2012-01-30
The effect of sequential extraction of trace metals on sulfur (S) speciation in anoxic sludge samples from two lab-scale biogas reactors augmented with Fe was investigated. Sulfur K-edge X-ray absorption near edge structure (S XANES) spectroscopy and acid volatile sulfide (AVS) analyses were conducted on the residues from each step of the sequential extraction. The S speciation in sludge samples after AVS analysis was also determined by S XANES. Sulfur was mainly present as FeS (≈60% of total S) and reduced organic S (≈30% of total S), such as organic sulfide and thiol groups, in the anoxic solid phase. Sulfur XANES and AVS analyses showed that during the first step of the extraction procedure (the removal of exchangeable cations), part of the FeS fraction, corresponding to 20% of total S, was transformed to zero-valent S, whereas Fe was not released into solution during this transformation. After the last extraction step (the organic/sulfide fraction), a secondary Fe phase was formed. The change in chemical speciation of S and Fe occurring during the sequential extraction procedure suggests indirect effects on trace metals associated with the FeS fraction that may lead to incorrect results. Furthermore, S XANES verified that the AVS analysis effectively removed the FeS fraction. The present results identify critical limitations on the application of sequential extraction for trace metal speciation analysis outside the framework for which the methods were developed. Copyright © 2011 Elsevier B.V. All rights reserved.
Risk factors related to Toxoplasma gondii seroprevalence in indoor-housed Dutch dairy goats.
Deng, Huifang; Dam-Deisz, Cecile; Luttikholt, Saskia; Maas, Miriam; Nielen, Mirjam; Swart, Arno; Vellema, Piet; van der Giessen, Joke; Opsteegh, Marieke
2016-02-01
Toxoplasma gondii can cause disease in goats, but also has an impact on human health through food-borne transmission. Our aims were to determine the seroprevalence of T. gondii infection in indoor-housed Dutch dairy goats and to identify the risk factors related to T. gondii seroprevalence. Fifty-two of the ninety farmers with indoor-kept goats who were approached (58%) participated by answering a standardized questionnaire and contributing 32 goat blood samples each. Serum samples were tested for T. gondii SAG1 antibodies by ELISA, and results showed that the frequency distribution of the log10-transformed OD-values fitted well with a binary mixture of a shifted gamma and a shifted reflected gamma distribution. The overall animal seroprevalence was 13.3% (95% CI: 11.7–14.9%), and at least one seropositive animal was found on 61.5% (95% CI: 48.3–74.7%) of the farms. To evaluate potential risk factors at herd level, three modeling strategies (Poisson, negative binomial and zero-inflated) were compared. The negative binomial model fitted the data best, with the number of cats (1–4 cats: IR: 2.6, 95% CI: 1.1–6.5; ≥5 cats: IR: 14.2, 95% CI: 3.9–51.1) and mean animal age (IR: 1.5, 95% CI: 1.1–2.1) related to herd positivity. In conclusion, the ELISA test was 100% sensitive and specific based on the binary mixture analysis. T. gondii infection is prevalent in indoor-housed Dutch dairy goats, but at a lower overall animal-level seroprevalence than outdoor-farmed goats in other European countries, and cat exposure is an important risk factor.
del Socorro Herrera, Miriam; Medina-Solis, Carlo Eduardo; Minaya-Sánchez, Mirna; Pontigo-Loyola, América Patricia; Villalobos-Rodelo, Juan José; Islas-Granillo, Horacio; de la Rosa-Santillana, Rubén; Maupomé, Gerardo
2013-01-01
Background Our study aimed to evaluate the effect of various risk indicators for dental caries on primary teeth of Nicaraguan children (from Leon, Nicaragua) ages 6 to 9, using the negative binomial regression model. Material/Methods A cross-sectional study was carried out to collect clinical, demographic, socioeconomic, and behavioral data from 794 schoolchildren ages 6 to 9 years, randomly selected from 25 schools in the city of León, Nicaragua. Clinical examinations for dental caries (dmft index) were performed by 2 trained and standardized examiners. Socio-demographic, socioeconomic, and behavioral data were self-reported using questionnaires. Multivariate negative binomial regression (NBR) analysis was used. Results Mean age was 7.49±1.12 years. Boys accounted for 50.1% of the sample. Mean dmft was 3.54±3.13 and caries prevalence (dmft >0) was 77.6%. In the NBR multivariate model (p<0.05), for each year of age, the expected mean dmft decreased by 7.5%. Brushing teeth at least once a day and having received preventive dental care in the last year before data collection were associated with declines in the expected mean dmft by 19.5% and 69.6%, respectively. Presence of dental plaque increased the expected mean dmft by 395.5%. Conclusions The proportion of students with caries in this sample was high. We found associations between dental caries in the primary dentition and dental plaque, brushing teeth at least once a day, and having received preventive dental care. To improve oral health, school programs and/or age-appropriate interventions need to be developed based on the specific profile of caries experience and the associated risk indicators. PMID:24247119
Building test data from real outbreaks for evaluating detection algorithms.
Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve
2017-01-01
Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
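The core of the simulation approach, a homothetic rescaling of a historical epidemic curve followed by resampling, can be sketched as follows (ours; we use a single multinomial draw for brevity, whereas the paper compares several resampling algorithms):

```python
# Sketch (ours): homothetically rescale a historical outbreak curve to a
# target duration and size, then resample daily counts from its shape.
import numpy as np

rng = np.random.default_rng(8)
hist = np.array([1, 3, 8, 15, 12, 6, 2])      # historical daily cases (made up)
target_days, target_cases = 10, 120

# Rescale the time axis (homothety) and renormalize to daily probabilities
t_old = np.linspace(0, 1, len(hist))
t_new = np.linspace(0, 1, target_days)
shape = np.interp(t_new, t_old, hist)
probs = shape / shape.sum()

# Resample: distribute the target number of cases over the days
simulated = rng.multinomial(target_cases, probs)
print(simulated)
```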
A comparison of sequential and spiral scanning techniques in brain CT.
Pace, Ivana; Zarb, Francis
2015-01-01
To evaluate and compare image quality and radiation dose of sequential computed tomography (CT) examinations of the brain and spiral CT examinations of the brain imaged on a GE HiSpeed NX/I Dual Slice 2CT scanner. A random sample of 40 patients referred for CT examination of the brain was selected and divided into 2 groups. Half of the patients were scanned using the sequential technique; the other half were scanned using the spiral technique. Radiation dose data, both the computed tomography dose index (CTDI) and the dose length product (DLP), were recorded on a checklist at the end of each examination. Using the European Guidelines on Quality Criteria for Computed Tomography, 4 radiologists conducted a visual grading analysis and rated the level of visibility of 6 anatomical structures considered necessary to produce images of high quality. The mean CTDIvol and DLP values were statistically significantly higher (P < .05) with the sequential scans (CTDIvol: 22.06 mGy; DLP: 304.60 mGy·cm) than with the spiral scans (CTDIvol: 14.94 mGy; DLP: 229.10 mGy·cm). The mean image quality rating scores for all criteria of the sequential scanning technique were statistically significantly higher (P < .05) in the visual grading analysis than those of the spiral scanning technique. In this local study, the sequential technique was preferred over the spiral technique for both overall image quality and differentiation between gray and white matter in brain CT scans. Other similar studies counter this finding. The radiation dose seen with the sequential CT scanning technique was significantly higher than that seen with the spiral CT scanning technique. However, image quality with the sequential technique was statistically significantly superior (P < .05).
NASA Astrophysics Data System (ADS)
Nystrøm, G. M.; Ottosen, L. M.; Villumsen, A.
2003-05-01
In this work, sequential extraction was performed on harbour sediment in order to evaluate the electrodialytic remediation potential of harbour sediments. Sequential extraction was performed on a sample of Norwegian harbour sediment, both on the original sediment and after the sediment had been treated with acid. The results of the sequential extraction show that 75% of Zn and Pb and about 50% of Cu are found in the most mobile phases in the original sediment, and more than 90% of Zn and Pb and 75% of Cu are found in the most mobile phases in the acid-treated sediment. Electrodialytic remediation experiments were made. The method uses a low direct current as the cleaning agent, removing the heavy metals towards the anode or cathode according to the charge of the heavy metal species in the electric field. The electrodialytic experiments show that up to 50% of Cu, 85% of Zn and 60% of Pb can be removed after 20 days. Thus, there is still potential for higher removal, with some changes in the experimental set-up and a longer remediation time. The experiments show that sequential extraction can be used to predict the electrodialytic remediation potential of harbour sediments.
Intra-storm variability and soluble fractionation was explored for summer-time rain events in Steubenville, Ohio to evaluate the physical processes controlling mercury (Hg) in wet deposition in this industrialized region. Comprehensive precipitation sample collection was conducte...
NASA Astrophysics Data System (ADS)
Peña-Vázquez, E.; Barciela-Alonso, M. C.; Pita-Calvo, C.; Domínguez-González, R.; Bermejo-Barrera, P.
2015-09-01
The objective of this work is to develop a method for the determination of metals in saline matrices using high-resolution continuum source flame atomic absorption spectrometry (HR-CS FAAS). Module SFS 6 for sample injection was used in the manual mode, and flame operating conditions were selected. The main absorption lines were used for all the elements, and the number of selected analytical pixels was 5 (CP±2) for Cd, Cu, Fe, Ni, Pb and Zn, and 3 (CP±1) for Mn. Samples were acidified (0.5% (v/v) nitric acid), and the standard addition method was used for the sequential determination of the analytes in diluted samples (1:2). The method showed good precision (RSD < 4%, except for Pb (6.5%)) and good recoveries. Accuracy was checked by analysis of the SPS-WW2 wastewater reference material diluted with synthetic seawater (dilution 1:2), showing good agreement between certified and experimental results.
Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S
2014-06-01
Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.
Structural characterization of polysaccharides from bamboo
NASA Astrophysics Data System (ADS)
Kamil, Ruzaimah Nik Mohamad; Yusuf, Nur'aini Raman; Yunus, Normawati M.; Yusup, Suzana
2014-10-01
The alkali- and water-soluble polysaccharides were isolated by sequential extraction with distilled water and with 60% ethanol containing 1%, 5% and 8% NaOH. The samples were prepared from local bamboo at 60 °C for 3 h. The functional groups of the samples were examined using FTIR analysis. The largest amount of precipitate, with a yield of 2.6%, was obtained using 60% ethanol containing 8% NaOH. The first three residues, isolated by sequential extraction with distilled water and with 60% ethanol containing 1% and 5% NaOH, were barely visible after filtering with cellulose filter paper. The FTIR results showed that the water-soluble polysaccharides consisted mainly of OH group, C
Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li
2015-10-01
We present sequential CE analysis of amino acids and an L-asparaginase-catalyzed enzyme reaction, combining on-line derivatization, optically gated (OG) injection and commercially available UV-Vis detection. Various experimental conditions for sequential OG-UV/Vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated, with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (asparagine) and 2.0 μM (aspartic acid) were obtained. Applying the OG-UV/Vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring the substrate consumption and product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrates the feasibility and reliability of integrating OG injection with UV/Vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-29
In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first-principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially, so phase space averaging is achieved through sequential operations. With ensemble averaging, on the other hand, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly on massively parallel architectures, ensemble sampling can result in much shorter simulation times while requiring similar overall computational effort.
Some sequential, distribution-free pattern classification procedures with applications
NASA Technical Reports Server (NTRS)
Poage, J. L.
1971-01-01
Some sequential, distribution-free pattern classification techniques are presented. The decision problem to which the proposed classification methods are applied is that of discriminating between two kinds of electroencephalogram responses recorded from a human subject: spontaneous EEG and EEG driven by a stroboscopic light stimulus at the alpha frequency. The classification procedures proposed make use of the theory of order statistics. Estimates of the probabilities of misclassification are given. The procedures were tested on Gaussian samples and the EEG responses.
Analysis of railroad tank car releases using a generalized binomial model.
Liu, Xiang; Hong, Yili
2015-11-01
The United States is experiencing an unprecedented boom in shale oil production, leading to a dramatic growth in petroleum crude oil traffic by rail. In 2014, U.S. railroads carried over 500,000 tank carloads of petroleum crude oil, up from 9500 in 2008 (a 5300% increase). In light of continual growth in crude oil by rail, there is an urgent national need to manage this emerging risk. This need has been underscored in the wake of several recent crude oil release incidents. In contrast to highway transport, which usually involves a tank trailer, a crude oil train can carry a large number of tank cars, having the potential for a large, multiple-tank-car release incident. Previous studies exclusively assumed that railroad tank car releases in the same train accident are mutually independent, thereby estimating the number of tank cars releasing given the total number of tank cars derailed based on a binomial model. This paper specifically accounts for dependent tank car releases within a train accident. We estimate the number of tank cars releasing given the number of tank cars derailed based on a generalized binomial model. The generalized binomial model provides a significantly better description for the empirical tank car accident data through our numerical case study. This research aims to provide a new methodology and new insights regarding the further development of risk management strategies for improving railroad crude oil transportation safety. Copyright © 2015 Elsevier Ltd. All rights reserved.
Solar San Diego: The Impact of Binomial Rate Structures on Real PV-Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Geet, O.; Brown, E.; Blair, T.
2008-01-01
There is confusion in the marketplace regarding the impact of solar photovoltaics (PV) on the user's actual electricity bill under California Net Energy Metering, particularly with binomial tariffs (those that include both demand and energy charges) and time-of-use (TOU) rate structures. The City of San Diego has extensive real-time electrical metering on most of its buildings and PV systems, with interval data for overall consumption and PV electrical production available for multiple years. This paper uses 2007 PV-system data from two city facilities to illustrate the impacts of binomial rate designs. The analysis determines the energy and demand savings that the PV systems achieve relative to having no systems. A financial analysis of PV-system performance under various rate structures is presented. The data revealed that the actual demand and energy benefits of binomial tariffs increase in summer months, when solar resources allow for maximized electricity production. In a binomial tariff system, varying on- and semi-peak times can result in an approximately $1,100 change in demand charges per month compared with not having a PV system in place, an approximate 30% cost savings. The PV systems are also shown to have produced a 30%-50% reduction in facility energy charges in 2007. Future work will include combining demand and electricity charges and increasing the breadth of rate structures tested, including the impacts of non-coincident demand charges.
Analysis of multiple tank car releases in train accidents.
Liu, Xiang; Liu, Chang; Hong, Yili
2017-10-01
There are annually over two million carloads of hazardous materials transported by rail in the United States. The American railroads use large blocks of tank cars to transport petroleum crude oil and other flammable liquids from production to consumption sites. Unlike roadway transport of hazardous materials, a train accident can potentially result in the derailment and release of multiple tank cars, which may have significant consequences. The prior literature predominantly assumes that the occurrence of multiple tank car releases in a train accident is a series of independent Bernoulli processes, and thus uses the binomial distribution to estimate the total number of tank car releases given the number of tank cars derailed or damaged. This paper shows that the traditional binomial model can misestimate multiple-tank-car release probability by orders of magnitude in certain circumstances, thereby significantly affecting railroad safety and risk analysis. To bridge this knowledge gap, this paper proposes a novel, alternative Correlated Binomial (CB) model that accounts for the possible correlations of multiple tank car releases in the same train. We test three distinct correlation structures in the CB model, and find that they all outperform the conventional binomial model on empirical tank car accident data. The analysis shows that considering tank car release correlations results in a significantly better fit to the empirical data. Consequently, it is prudent to consider alternative modeling techniques when analyzing the probability of multiple tank car releases in railroad accidents. Copyright © 2017 Elsevier Ltd. All rights reserved.
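To make the role of release correlation concrete, the sketch below contrasts an independent binomial with a beta-binomial, one standard way of inducing positive correlation among Bernoulli events. The paper's own CB correlation structures are not reproduced here, and all numbers (derailed cars, marginal release probability, correlation) are illustrative assumptions.

```python
# Sketch: effect of positive correlation among tank car releases.
# A beta-binomial stands in for a correlated Bernoulli model; it is not
# the paper's CB model. All parameter values below are assumptions.
import numpy as np
from scipy.stats import binom, betabinom

n, p, rho = 10, 0.3, 0.2          # derailed cars, marginal release prob., correlation
a = p * (1 - rho) / rho           # beta-binomial parameters matching (p, rho),
b = (1 - p) * (1 - rho) / rho     # since rho = 1 / (a + b + 1) and mean = a/(a+b)

k = np.arange(n + 1)
print("P(k releases | independent):", binom.pmf(k, n, p).round(4))
print("P(k releases | correlated): ", betabinom.pmf(k, n, a, b).round(4))
# Correlation fattens both tails: all-release and no-release outcomes become
# more likely than the independent binomial model predicts.
```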
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. Compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of parallel versions of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains the parallelization strategy and the main modifications in detail. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
Towards efficient multi-scale methods for monitoring sugarcane aphid infestations in sorghum
USDA-ARS?s Scientific Manuscript database
We discuss approaches and issues involved with developing optimal monitoring methods for sugarcane aphid infestations (SCA) in grain sorghum. We discuss development of sequential sampling methods that allow for estimation of the number of aphids per sample unit, and statistical decision making rela...
Precipitation as a chemical and meteorological phenomenon
Francis J. Berlandi; Donald G. Muldoon; Harvey S. Rosenblum; Lloyd L. Schulman
1976-01-01
Sequential rain and snow sampling was performed at Burlington and Concord, Massachusetts. The samples were collected during 1974 and 1975 in one-quarter inch and one inch rain equivalents, and chemical analysis was performed on the aliquots. Meteorological data were documented at the time of collection.
Probing the statistics of primordial fluctuations and their evolution
NASA Technical Reports Server (NTRS)
Gaztanaga, Enrique; Yokoyama, Jun'ichi
1993-01-01
The statistical distribution of fluctuations on various scales is analyzed in terms of the counts in cells of smoothed density fields, using volume-limited samples of galaxy redshift catalogs. It is shown that the distribution on large scales, with volume average of the two-point correlation function of the smoothed field less than about 0.05, is consistent with Gaussian. The statistics are shown to agree remarkably well with the negative binomial distribution, which has hierarchical correlations and Gaussian behavior at large scales. If these observed properties correspond to the matter distribution, they suggest that our universe started with Gaussian fluctuations and evolved while maintaining hierarchical form.
Definite Integrals, Some Involving Residue Theory Evaluated by Maple Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, Kimiko o
2010-01-01
The calculus of residues is applied to evaluate certain integrals over the range $(-\infty, \infty)$ using the Maple symbolic code. These integrals are of the form $\int_{-\infty}^{\infty} \frac{\cos x}{(x^{2}+a^{2})(x^{2}+b^{2})(x^{2}+c^{2})}\,dx$ and similar extensions. The Maple code is also applied to expressions for maximum likelihood estimator moments when sampling from the negative binomial distribution. In general the Maple code approach to the integrals gives correct answers to specified decimal places, but the symbolic result may be extremely long and complex.
de Souza, Pedrita Mara do Espírito Santo; Mello Proença, Mariana Almeida; Franco, Mayra Moura; Rodrigues, Vandilson Pinheiro; Costa, José Ferreira; Costa, Elizabeth Lima
2015-01-01
Objective: This study aims to evaluate the association between early childhood caries (ECC) and maternal caries status, and the maternal perception of ECC risk factors. Materials and Methods: A cross-sectional study was carried out with 77 mother-child pairs, the children ranging from 12 to 36 months of age, who were seeking dental care at a health center in São Luís, Maranhão, Brazil. Data collection was conducted using a specific questionnaire for mothers. Oral clinical examination of the mother-child binomial was performed to assess caries incidence, gingival bleeding (GB) and visible plaque. Home visits were performed for 10% of the sample in order to observe environmental conditions, dietary habits and dental hygiene practices. Results: The findings showed that caries prevalence was 22.5 times higher in children whose mothers had a decayed tooth (prevalence ratio [PR] = 22.5, confidence interval [CI] 95% = 3.2–156.6, P < 0.001). GB was also observed in 14 mothers and children; the PR for the pair was 12.2 (CI 95% = 1.6–88.9, P < 0.001). The variables were related for the mother-child binomial in linear regression analysis. Conclusion: The maternal caries status was associated with ECC. PMID:25713495
Xiao, Chuan-Le; Chen, Xiao-Zhou; Du, Yang-Li; Sun, Xuesong; Zhang, Gong; He, Qing-Yu
2013-01-04
Mass spectrometry has become one of the most important technologies in proteomic analysis. Tandem mass spectrometry (LC-MS/MS) is a major tool for the analysis of peptide mixtures from protein samples. The key step of MS data processing is the identification of peptides from experimental spectra by searching public sequence databases. Although a number of algorithms to identify peptides from MS/MS data have been already proposed, e.g. Sequest, OMSSA, X!Tandem, Mascot, etc., they are mainly based on statistical models considering only peak-matches between experimental and theoretical spectra, but not peak intensity information. Moreover, different algorithms gave different results from the same MS data, implying their probable incompleteness and questionable reproducibility. We developed a novel peptide identification algorithm, ProVerB, based on a binomial probability distribution model of protein tandem mass spectrometry combined with a new scoring function, making full use of peak intensity information and, thus, enhancing the ability of identification. Compared with Mascot, Sequest, and SQID, ProVerB identified significantly more peptides from LC-MS/MS data sets than the current algorithms at 1% False Discovery Rate (FDR) and provided more confident peptide identifications. ProVerB is also compatible with various platforms and experimental data sets, showing its robustness and versatility. The open-source program ProVerB is available at http://bioinformatics.jnu.edu.cn/software/proverb/ .
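The core scoring idea, a binomial model of chance peak matches, can be sketched in a few lines. This is not ProVerB's actual scoring function (which additionally weights peak intensities); the fragment counts and random-match probability below are hypothetical.

```python
# Sketch of a binomial peak-match score (not ProVerB's exact scoring function):
# the chance probability that at least k of n theoretical fragment peaks match
# an experimental spectrum, if each peak matches at random with probability p.
import math
from scipy.stats import binom

def match_score(n_theoretical, k_matched, p_random):
    """Return -log10 P(X >= k) for X ~ Binomial(n, p); higher = more confident."""
    p_tail = binom.sf(k_matched - 1, n_theoretical, p_random)
    return -math.log10(p_tail)

# Hypothetical numbers: 20 theoretical fragments, 12 matched, 5% random match rate.
print(match_score(20, 12, 0.05))   # large score => unlikely to be a chance match
```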
Zhang, Changsheng; Cai, Hongmin; Huang, Jingying; Song, Yan
2016-09-17
Variations in DNA copy number make an important contribution to the development of several diseases, including autism, schizophrenia and cancer. Single-cell sequencing technology allows the dissection of genomic heterogeneity at the single-cell level, thereby providing important evolutionary information about cancer cells. In contrast to traditional bulk sequencing, single-cell sequencing requires amplification of the whole genome of a single cell to accumulate enough material for sequencing. However, the amplification process inevitably introduces amplification bias, resulting in an over-dispersed portion of the sequencing data. A recent study showed that the over-dispersed portion of single-cell sequencing data can be well modelled by negative binomial distributions. We developed a read-depth based method, nbCNV, to detect copy number variants (CNVs). The nbCNV method uses two constraints, sparsity and smoothness, to fit the CNV patterns under the assumption that the read signals are negative-binomially distributed. The problem of CNV detection was formulated as a quadratic optimization problem and solved by an efficient numerical solution based on the classical alternating direction minimization method. Extensive experiments comparing nbCNV with existing benchmark models were conducted on both simulated data and empirical single-cell sequencing data. The results of those experiments demonstrate that nbCNV achieves superior performance and high robustness for the detection of CNVs in single-cell sequencing data.
Dorazio, R.M.; Jelks, H.L.; Jordan, F.
2005-01-01
A statistical modeling framework is described for estimating the abundances of spatially distinct subpopulations of animals surveyed using removal sampling. To illustrate this framework, hierarchical models are developed using the Poisson and negative-binomial distributions to model variation in abundance among subpopulations and using the beta distribution to model variation in capture probabilities. These models are fitted to the removal counts observed in a survey of a federally endangered fish species. The resulting estimates of abundance have similar or better precision than those computed using the conventional approach of analyzing the removal counts of each subpopulation separately. Extension of the hierarchical models to include spatial covariates of abundance is straightforward and may be used to identify important features of an animal's habitat or to predict the abundance of animals at unsampled locations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sigeti, David E.; Pelak, Robert A.
We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2 and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a general beta-function prior for θ, enabling sequential analysis in which a small number of new simulations may be done and the resulting posterior for θ used as a prior to inform the next stage of power analysis.
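A minimal sketch of the binomial machinery described above, assuming a uniform Beta(1,1) prior (one member of the general beta-function family) and illustrative win counts:

```python
# Sketch of the binomial code-comparison analysis: theta = P(new code beats
# old on a randomly chosen experiment). With a Beta(alpha0, beta0) prior and
# w wins in m trials, the posterior is Beta(alpha0 + w, beta0 + m - w).
# The counts below are illustrative, not from the report.
from scipy.stats import beta

alpha0, beta0 = 1.0, 1.0           # uniform prior
w, m = 14, 20                       # new code predicted better in 14 of 20 trials

post = beta(alpha0 + w, beta0 + m - w)
conf = post.sf(0.5)                 # posterior probability that theta > 1/2
print(f"P(theta > 0.5 | data) = {conf:.3f}")
print(f"posterior sd = {post.std():.3f}")   # the 'plan B metric' near theta = 1/2
# The posterior can be fed back as the prior for the next batch of
# simulations, giving the sequential analysis described in the abstract.
```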
A binomial stochastic kinetic approach to the Michaelis-Menten mechanism
NASA Astrophysics Data System (ADS)
Lente, Gábor
2013-05-01
This Letter presents a new method that gives an analytical approximation of the exact solution of the stochastic Michaelis-Menten mechanism without computationally demanding matrix operations. The method is based on solving the deterministic rate equations and then using the results as guiding variables of calculating probability values using binomial distributions. This principle can be generalized to a number of different kinetic schemes and is expected to be very useful in the evaluation of measurements focusing on the catalytic activity of one or a few individual enzyme molecules.
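The guiding-variable principle can be sketched as follows, with arbitrary rate constants and molecule counts rather than the Letter's values: integrate the deterministic rate equations, then use the deterministic survival fraction as the success probability of a binomial distribution over the initial substrate count.

```python
# Sketch of the guiding-variable idea: integrate the deterministic
# Michaelis-Menten rate equations, then approximate P(k substrate molecules
# remain at time t) by a binomial with the deterministic survival fraction.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import binom

k1, km1, k2 = 1.0, 0.5, 0.3        # E+S <-> ES -> E+P (illustrative units)
S0, E0 = 50, 5                      # initial molecule counts (illustrative)

def mm(t, y):
    s, es = y
    e = E0 - es                     # free enzyme from conservation
    return [-k1 * e * s + km1 * es, k1 * e * s - (km1 + k2) * es]

sol = solve_ivp(mm, (0.0, 20.0), [float(S0), 0.0], dense_output=True)
t = 5.0
frac = sol.sol(t)[0] / S0           # deterministic surviving substrate fraction
k = np.arange(S0 + 1)
p_k = binom.pmf(k, S0, frac)        # approximate distribution of substrate count
print(f"mean = {p_k @ k:.2f} (deterministic: {frac * S0:.2f})")
```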
Taxonomy of the order Mononegavirales: update 2017.
Amarasinghe, Gaya K; Bào, Yīmíng; Basler, Christopher F; Bavari, Sina; Beer, Martin; Bejerman, Nicolás; Blasdell, Kim R; Bochnowski, Alisa; Briese, Thomas; Bukreyev, Alexander; Calisher, Charles H; Chandran, Kartik; Collins, Peter L; Dietzgen, Ralf G; Dolnik, Olga; Dürrwald, Ralf; Dye, John M; Easton, Andrew J; Ebihara, Hideki; Fang, Qi; Formenty, Pierre; Fouchier, Ron A M; Ghedin, Elodie; Harding, Robert M; Hewson, Roger; Higgins, Colleen M; Hong, Jian; Horie, Masayuki; James, Anthony P; Jiāng, Dàohóng; Kobinger, Gary P; Kondo, Hideki; Kurath, Gael; Lamb, Robert A; Lee, Benhur; Leroy, Eric M; Li, Ming; Maisner, Andrea; Mühlberger, Elke; Netesov, Sergey V; Nowotny, Norbert; Patterson, Jean L; Payne, Susan L; Paweska, Janusz T; Pearson, Michael N; Randall, Rick E; Revill, Peter A; Rima, Bertus K; Rota, Paul; Rubbenstroth, Dennis; Schwemmle, Martin; Smither, Sophie J; Song, Qisheng; Stone, David M; Takada, Ayato; Terregino, Calogero; Tesh, Robert B; Tomonaga, Keizo; Tordo, Noël; Towner, Jonathan S; Vasilakis, Nikos; Volchkov, Viktor E; Wahl-Jensen, Victoria; Walker, Peter J; Wang, Beibei; Wang, David; Wang, Fei; Wang, Lin-Fa; Werren, John H; Whitfield, Anna E; Yan, Zhichao; Ye, Gongyin; Kuhn, Jens H
2017-08-01
In 2017, the order Mononegavirales was expanded by the inclusion of a total of 69 novel species. Five new rhabdovirus genera and one new nyamivirus genus were established to harbor 41 of these species, whereas the remaining new species were assigned to already established genera. Furthermore, non-Latinized binomial species names replaced all paramyxovirus and pneumovirus species names, thereby accomplishing application of binomial species names throughout the entire order. This article presents the updated taxonomy of the order Mononegavirales as now accepted by the International Committee on Taxonomy of Viruses (ICTV).
Categorical Data Analysis Using a Skewed Weibull Regression Model
NASA Astrophysics Data System (ADS)
Caron, Renault; Sinha, Debajyoti; Dey, Dipak; Polpo, Adriano
2018-03-01
In this paper, we present a Weibull link (skewed) model for categorical response data arising from binomial as well as multinomial models. We show that, for such types of categorical data, the most commonly used models (logit, probit and complementary log-log) can be obtained as limiting cases. We further compare the proposed model with some other asymmetrical models. The Bayesian as well as frequentist estimation procedures for binomial and multinomial data responses are presented in detail. The analysis of two data sets is performed to show the efficiency of the proposed model.
NASA Astrophysics Data System (ADS)
Izzati, Munifatul; Haryanti, Sri; Parman, Sarjana
2018-05-01
Gracilaria is widely known as a source of essential trace elements, but this red seaweed also has great potential for development into commercial products. This study examined the sequential pattern of essential trace element composition in fresh Gracilaria verrucosa and a selection of its generated products, namely extracted agar, Gracilaria salt and Gracilaria residue. The sample was collected from a brackish water pond located in the northern part of Semarang, Central Java, then dried under the sun and subsequently processed into the aforementioned products. The Gracilaria salt was obtained by soaking the sun-dried Gracilaria overnight in fresh water; the resulting salt solution was then boiled, leaving crystal salt. Extracted agar was obtained with an alkali agar extraction method, and the remaining material was considered Gracilaria residue. The entire process was repeated 3 times. The composition of trace elements was examined using ICP-MS spectrometry, and the collected data were analyzed by single-factor ANOVA. The resulting sequential patterns of essential trace element composition were compared, with regular table salt used as a control. Results from this study revealed that Gracilaria verrucosa and all of its generated products have a similar pattern of essential trace element composition, where Mn>Zn>Cu>Mo. This pattern is also similar across subspecies of Gracilaria from different locations and different seasons. However, Gracilaria salt has a distinctly different sequential pattern of essential trace element composition compared with table salt.
Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.
ERIC Educational Resources Information Center
Glutting, Joseph J.; McDermott, Paul A.
1990-01-01
Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…
Depression and Delinquency Covariation in an Accelerated Longitudinal Sample of Adolescents
ERIC Educational Resources Information Center
Kofler, Michael J.; McCart, Michael R.; Zajac, Kristyn; Ruggiero, Kenneth J.; Saunders, Benjamin E.; Kilpatrick, Dean G.
2011-01-01
Objectives: The current study tested opposing predictions stemming from the failure and acting out theories of depression-delinquency covariation. Method: Participants included a nationwide longitudinal sample of adolescents (N = 3,604) ages 12 to 17. Competing models were tested with cohort-sequential latent growth curve modeling to determine…
USDA-ARS?s Scientific Manuscript database
This research presents a sensitive and confirmatory multi-residue method for mequindox (MEQ), quinocetone (QCT), and their 11 metabolites in chicken and pork samples. After being extracted with acetonitrile-ethyl acetate, acidulated, and extracted again with ethyl acetate sequentially, each sample was pu...
Thermogravimetric and differential thermal analysis of potassium bicarbonate contaminated cellulose
A. Broido
1966-01-01
When samples undergo a complicated set of simultaneous and sequential reactions, as cellulose does on heating, results of thermogravimetric and differential thermal analyses are difficult to interpret. Nevertheless, careful comparison of pure and contaminated samples, pyrolyzed under identical conditions, can yield useful information. In these experiments TGA and DTA...
ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, PCB DETECTION TECHNOLOGY, HYBRIZYME DELFIA TM ASSAY
The DELFIA PCB Assay is a solid-phase time-resolved fluoroimmunoassay based on the sequential addition of sample extract and europium-labeled PCB tracer to a monoclonal antibody reagent specific for PCBs. In this assay, the antibody reagent and sample extract are added to a strip...
Astolfi, Maria Luisa; Di Filippo, Patrizia; Gentili, Alessandra; Canepari, Silvia
2017-11-01
We describe the optimization and validation of a sequential extractive method for the determination of polycyclic aromatic hydrocarbons (PAHs) and elements (Al, As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, Se, V and Zn), the latter chemically fractionated into bio-accessible and mineralized residual fractions, on a single particulate matter filter. The extraction is performed by automatic accelerated solvent extraction (ASE); samples are sequentially treated with dichloromethane/acetone (4:1) for PAH extraction and acetate buffer (0.01 M; pH 4.5) for element extraction (bio-accessible fraction). The remaining solid sample is then collected and subjected to acid digestion with HNO3:H2O2 (2:1) to determine the mineralized residual element fraction. We also describe a homemade ASE cell that reduces the blank values for most elements; in this cell, the steel frit was replaced by a pierced Teflon disk and a Teflon cylinder was used as the filler. The performance of the proposed method was evaluated in terms of recovery from standard reference materials (SRM 1648 and SRM 1649a) and repeatability. The equivalence between the new ASE method and conventional methods was verified for PAHs and for bio-accessible and mineralized residual fractions of elements on PM10 twin filters. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Scherer, Aaron M.; Windschitl, Paul D.; O'Rourke, Jillian; Smith, Andrew R.
2012-01-01
People must often engage in sequential sampling in order to make predictions about the relative quantities of two options. We investigated how directional motives influence sampling selections and resulting predictions in such cases. We used a paradigm in which participants had limited time to sample items and make predictions about which side of…
ERIC Educational Resources Information Center
Cizdziel, James V.
2011-01-01
In this laboratory experiment, students quantitatively determine the concentration of an element (mercury) in an environmental or biological sample while comparing and contrasting the fundamental techniques of atomic absorption spectrometry (AAS) and atomic fluorescence spectrometry (AFS). A mercury analyzer based on sample combustion,…
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, compared against three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
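The active-learning step that ASM-IHK builds on can be illustrated with a single-fidelity stand-in. This is not the IHK model itself, just a maximum-variance sequential sampling loop around an off-the-shelf Gaussian process, with a made-up test function in place of the expensive high-fidelity model.

```python
# Not the ASM-IHK algorithm: a minimal single-fidelity sketch of the
# active-learning step it builds on, where the next (expensive) sample is
# placed at the candidate point with the largest predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(8 * x) + x           # stand-in for the high-fidelity model

X = rng.uniform(0, 1, 4).reshape(-1, 1)   # small initial design
y = f(X).ravel()
cand = np.linspace(0, 1, 201).reshape(-1, 1)

gp = GaussianProcessRegressor()
for _ in range(6):                         # sequential refinement loop
    gp.fit(X, y)
    _, std = gp.predict(cand, return_std=True)
    x_new = cand[np.argmax(std)]           # highest predictive variance wins
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new))
print(f"final design size: {len(X)}")
```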
Identifying High-Rate Flows Based on Sequential Sampling
NASA Astrophysics Data System (ADS)
Zhang, Yu; Fang, Binxing; Luo, Hao
We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement and network security such as detection of distributed denial of service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possible millions of flows needs correspondingly large high speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost and processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on fixed sample size test (FSST) which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore the second novel method based on truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove the low-rate flows and identify the high-rate flows at the early stage which can reduce the memory cost and identification time respectively. According to the way to determine the parameters in TSPRT, two versions of TSPRT are proposed: TSPRT-M which is suitable when low memory cost is preferred and TSPRT-T which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement as compared to previously proposed methods.
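A bare-bones Wald SPRT over sampled packets conveys the idea; it omits the paper's truncation and parameter-selection machinery (the TSPRT-M/TSPRT-T versions), and the rates and error targets below are illustrative assumptions.

```python
# Hedged sketch of a Wald SPRT for flagging a high-rate flow from packet
# sampling (not the paper's TSPRT: no truncation, no early flow removal).
# H0: the flow owns fraction p0 of sampled packets; H1: fraction p1 > p0.
# Each observation is 1 if a sampled packet belongs to the flow, else 0.
import math
import random

def sprt(observations, p0=0.001, p1=0.01, alpha=0.01, beta=0.01):
    lower = math.log(beta / (1 - alpha))        # accept-H0 boundary
    upper = math.log((1 - beta) / alpha)        # accept-H1 boundary
    llr, n = 0.0, 0
    for n, x in enumerate(observations, 1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "high-rate", n               # flagged early
        if llr <= lower:
            return "low-rate", n                # removed early, saving memory
    return "undecided", n

# Example: a flow owning ~2% of sampled packets is flagged quickly.
random.seed(1)
print(sprt(random.random() < 0.02 for _ in range(10000)))
```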
Development of sampling plans for cotton bolls injured by stink bugs (Hemiptera: Pentatomidae).
Reay-Jones, F P F; Toews, M D; Greene, J K; Reeves, R B
2010-04-01
Cotton, Gossypium hirsutum L., bolls were sampled in commercial fields for stink bug (Hemiptera: Pentatomidae) injury during 2007 and 2008 in South Carolina and Georgia. Across both years of this study, boll-injury percentages averaged 14.8 +/- 0.3 (SEM). At average boll injury treatment levels of 10, 20, 30, and 50%, the percentage of samples with at least one injured boll was 82, 97, 100, and 100%, respectively. Percentage of field-sampling date combinations with average injury < 10, 20, 30, and 50% was 35, 80, 95, and 99%, respectively. At the average of 14.8% boll injury or 2.9 injured bolls per 20-boll sample, 112 samples at Dx = 0.1 (within 10% of the mean) were required for population estimation, compared with only 15 samples at Dx = 0.3. Using a sample size of 20 bolls, our study indicated that, at the 10% threshold and alpha = beta = 0.2 (with 80% confidence), control was not needed when <1.03 bolls were injured. The sampling plan required continued sampling for a range of 1.03-3.8 injured bolls per 20-boll sample. Only when injury was > 3.8 injured bolls per 20-boll sample was a control measure needed. Sequential sampling plans were also determined for thresholds of 20, 30, and 50% injured bolls. Sample sizes for sequential sampling plans were significantly reduced when compared with a fixed sampling plan (n=10) for all thresholds and error rates.
Pi, Fengmei; Vieweger, Mario; Zhao, Zhengyi; Wang, Shaoying; Guo, Peixuan
2015-01-01
Introduction: Multidrug resistance and the appearance of incurable diseases inspire the quest for potent therapeutics. Areas covered: We review a new methodology for designing potent drugs by targeting multi-subunit homomeric biological motors, machines, or complexes with Z > 1 and K = 1, where Z is the stoichiometry of the target and K is the number of drugged subunits required to block the function of the complex. The condition is similar to a series electrical circuit of Christmas lights: failure of one bulb causes the entire lighting system to lose power. In most multisubunit homomeric biological systems, a sequential coordination or cooperative action mechanism is utilized, so K equals 1. Drug inhibition depends on the ratio of drugged to non-drugged complexes. When K = 1 and Z > 1, the inhibition effect follows a power law with respect to Z, leading to enhanced drug potency. The hypothesis that the potency of drug inhibition depends on the stoichiometry of the targeted biological complexes was recently quantified by Yang Hui's Triangle (the binomial distribution), and proved using a highly sensitive in vitro phi29 viral DNA packaging system. Examples of targeting homomeric bio-complexes with high stoichiometry for potent drug discovery are discussed. Expert opinion: Biomotors with multiple subunits are widespread in viruses, bacteria, and cells, making this approach generally applicable to the development of inhibition drugs with high efficiency. PMID:26307193
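The power law is easy to verify numerically: a K = 1 complex survives only if none of its Z subunits is drugged, which is the k = 0 term of the binomial distribution. The occupancy value below is illustrative.

```python
# Worked check of the power-law potency argument for K = 1 targets:
# a complex with Z subunits stays functional only if none of its subunits
# is drugged, so the active fraction is (1 - p)**Z for subunit occupancy p.
from scipy.stats import binom

p = 0.5                                    # fraction of subunits carrying drug
for Z in (1, 2, 6, 12):                    # stoichiometry of the target complex
    active = binom.pmf(0, Z, p)            # = (1 - p)**Z
    print(f"Z = {Z:2d}: active = {active:.4f}, inhibition = {1 - active:.4f}")
# Z = 12 leaves about 0.02% of complexes active at the same occupancy that
# leaves 50% active for a single-subunit target.
```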
Using beta binomials to estimate classification uncertainty for ensemble models.
Clark, Robert D; Liang, Wenkel; Lee, Adam C; Lawless, Michael S; Fraczkiewicz, Robert; Waldman, Marvin
2014-01-01
Quantitative structure-activity (QSAR) models have enormous potential for reducing drug discovery and development costs as well as the need for animal testing. Great strides have been made in estimating their overall reliability, but to fully realize that potential, researchers and regulators need to know how confident they can be in individual predictions. Submodels in an ensemble model which have been trained on different subsets of a shared training pool represent multiple samples of the model space, and the degree of agreement among them contains information on the reliability of ensemble predictions. For artificial neural network ensembles (ANNEs) using two different methods for determining ensemble classification - one using vote tallies and the other averaging individual network outputs - we have found that the distribution of predictions across positive vote tallies can be reasonably well-modeled as a beta binomial distribution, as can the distribution of errors. Together, these two distributions can be used to estimate the probability that a given predictive classification will be in error. Large data sets comprised of logP, Ames mutagenicity, and CYP2D6 inhibition data are used to illustrate and validate the method. The distributions of predictions and errors for the training pool accurately predicted the distribution of predictions and errors for large external validation sets, even when the number of positive and negative examples in the training pool were not balanced. Moreover, the likelihood of a given compound being prospectively misclassified as a function of the degree of consensus between networks in the ensemble could in most cases be estimated accurately from the fitted beta binomial distributions for the training pool. Confidence in an individual predictive classification by an ensemble model can be accurately assessed by examining the distributions of predictions and errors as a function of the degree of agreement among the constituent submodels. Further, ensemble uncertainty estimation can often be improved by adjusting the voting or classification threshold based on the parameters of the error distribution. Finally, the profiles for models whose predictive uncertainty estimates are not reliable provide clues to that effect without the need for comparison to an external test set.
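The flavor of the approach can be sketched as follows; the beta-binomial parameters and base rate below are assumptions standing in for distributions fitted to a training pool, not values from the paper.

```python
# Illustrative sketch (not the authors' fitting code): model the number of
# positive votes from an ensemble of N networks as beta-binomial, then read
# off how a given vote tally translates into classification confidence.
import numpy as np
from scipy.stats import betabinom

N = 33                                     # networks in the ensemble (assumed)
votes = np.arange(N + 1)
p_pos = betabinom.pmf(votes, N, 6.0, 2.0)  # tally distribution, true positives
p_neg = betabinom.pmf(votes, N, 1.5, 7.0)  # tally distribution, true negatives
prior_pos = 0.4                            # assumed base rate of positives

# Probability a compound with a given tally is truly positive (Bayes' rule):
posterior = p_pos * prior_pos / (p_pos * prior_pos + p_neg * (1 - prior_pos))
for v in (5, 17, 30):
    print(f"{v:2d}/{N} positive votes -> P(positive) = {posterior[v]:.3f}")
# Intermediate tallies carry the highest error risk, which is why shifting
# the voting threshold using the fitted error distribution can help.
```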
Zipkin, Elise F.; Leirness, Jeffery B.; Kinlan, Brian P.; O'Connell, Allan F.; Silverman, Emily D.
2014-01-01
Determining appropriate statistical distributions for modeling animal count data is important for accurate estimation of abundance, distribution, and trends. In the case of sea ducks along the U.S. Atlantic coast, managers want to estimate local and regional abundance to detect and track population declines, to define areas of high and low use, and to predict the impact of future habitat change on populations. In this paper, we used a modified marked point process to model survey data that recorded flock sizes of Common eiders, Long-tailed ducks, and Black, Surf, and White-winged scoters. The data come from an experimental aerial survey, conducted by the United States Fish & Wildlife Service (USFWS) Division of Migratory Bird Management, during which east-west transects were flown along the Atlantic Coast from Maine to Florida during the winters of 2009–2011. To model the number of flocks per transect (the points), we compared the fit of four statistical distributions (zero-inflated Poisson, zero-inflated geometric, zero-inflated negative binomial and negative binomial) to data on the number of species-specific sea duck flocks that were recorded for each transect flown. To model the flock sizes (the marks), we compared the fit of flock size data for each species to seven statistical distributions: positive Poisson, positive negative binomial, positive geometric, logarithmic, discretized lognormal, zeta and Yule–Simon. Akaike’s Information Criterion and Vuong’s closeness tests indicated that the negative binomial and discretized lognormal were the best distributions for all species for the points and marks, respectively. These findings have important implications for estimating sea duck abundances as the discretized lognormal is a more skewed distribution than the Poisson and negative binomial, which are frequently used to model avian counts; the lognormal is also less heavy-tailed than the power law distributions (e.g., zeta and Yule–Simon), which are becoming increasingly popular for group size modeling. Choosing appropriate statistical distributions for modeling flock size data is fundamental to accurately estimating population summaries, determining required survey effort, and assessing and propagating uncertainty through decision-making processes.
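Since the discretized lognormal is the less familiar of the two winning distributions, here is a minimal construction of its PMF by differencing the lognormal CDF at half-integers; the parameters are illustrative, not fitted to the survey data.

```python
# Sketch of a discretized lognormal PMF for flock sizes: P(K = k) is the
# lognormal CDF mass falling in the interval (k - 0.5, k + 0.5].
# Parameters (mu, sigma) are illustrative, not the paper's estimates.
import numpy as np
from scipy.stats import lognorm

def disc_lognorm_pmf(k, mu, sigma):
    """P(K = k) for integer k >= 1 by differencing the lognormal CDF."""
    dist = lognorm(s=sigma, scale=np.exp(mu))
    return dist.cdf(k + 0.5) - dist.cdf(np.maximum(k - 0.5, 0.0))

k = np.arange(1, 200)
pmf = disc_lognorm_pmf(k, mu=1.2, sigma=1.0)
print("P(flock size = 1..5):", pmf[:5].round(4))
print("tail mass beyond 50 birds:", pmf[k > 50].sum().round(4))
# The tail is heavier than Poisson or negative binomial but lighter than the
# power-law (zeta, Yule-Simon) alternatives, the property the authors note.
```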
Simultaneous sequential monitoring of efficacy and safety led to masking of effects.
van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg
2016-08-01
Usually, sequential designs for clinical trials are applied to the primary (efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and influence the decision whether to stop a trial early. The implications of simultaneous monitoring for trial decision making are as yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced the overall type I error as well as the power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Careful consideration of scenarios must be taken into account when designing sequential trials. Simulation results can help guide trial design. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Guo, Yiqing; Jia, Xiuping; Paull, David
2018-06-01
The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, a SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the required number of training samples for the classifier training of an incoming image. For each incoming image, a rough classifier is firstly predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data are insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with those obtained without the assistance from previous images. These results demonstrate that the leverage of a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
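The prediction-then-fine-tuning idea can be caricatured with a warm-started linear classifier: initialize from the previous image's coefficients and update with a few new samples. This is a conceptual stand-in, not the SCT-SVM algorithm, and all data below are synthetic.

```python
# Conceptual sketch of sequential classifier training (not the SCT-SVM code):
# a linear classifier for the newest image starts from the previous image's
# coefficients and is fine-tuned with a handful of new labelled samples.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
prev_coef = np.array([[1.0, -0.5]])        # stand-in for last image's classifier
prev_intercept = np.array([0.1])

# A few new training pixels (2 spectral features, binary crop label), synthetic:
X_new = np.vstack([rng.normal(loc=[1.0, 0.0], size=(10, 2)),
                   rng.normal(loc=[-1.0, 0.0], size=(10, 2))])
y_new = np.array([1] * 10 + [0] * 10)

clf = SGDClassifier(loss="hinge", alpha=1e-3, max_iter=50)
clf.fit(X_new, y_new, coef_init=prev_coef, intercept_init=prev_intercept)
print("updated coefficients:", clf.coef_.ravel().round(3))
```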
Su, Cheng-Kuan; Tseng, Po-Jen; Chiu, Hsien-Ting; Del Vall, Andrea; Huang, Yu-Fen; Sun, Yuh-Chang
2017-03-01
Probing tumor extracellular metabolites is a vitally important issue in current cancer biology. In this study an analytical system was constructed for the in vivo monitoring of mouse tumor extracellular hydrogen peroxide (H2O2), lactate, and glucose by means of microdialysis (MD) sampling and fluorescence determination in conjunction with a smart sequential enzymatic derivatization scheme, involving a loading sequence of fluorogenic reagent/horseradish peroxidase, microdialysate, lactate oxidase, pyruvate, and glucose oxidase, for step-by-step determination of sampled H2O2, lactate, and glucose in mouse tumor microdialysate. After optimization of the overall experimental parameters, the system's detection limits reached as low as 0.002 mM for H2O2, 0.058 mM for lactate, and 0.055 mM for glucose, based on 3 μL of microdialysate, suggesting great potential for determining tumor extracellular concentrations of lactate and glucose. Spike analyses of offline-collected mouse tumor microdialysate and monitoring of the basal concentrations of mouse tumor extracellular H2O2, lactate, and glucose, as well as those after imparting a metabolic disturbance through intra-tumor administration of a glucose solution via a previously implanted cannula, were conducted to demonstrate the system's applicability. Our results clearly indicate that hyphenation of an MD sampling device with an optimized sequential enzymatic derivatization scheme and a fluorescence spectrometer can be used successfully for multi-analyte monitoring of tumor extracellular metabolites in living animals. Copyright © 2016 Elsevier B.V. All rights reserved.
1984-09-01
and these accompanied the sample residue through sieving to avoid sample mix-up. B. Field data sheets required the logger's initials on each page to ensure data completeness. C. Metal trays were placed to catch residue spillage during residue transfer from sieves to sample bottles. D. Sample bottles ... methodologies were comparable for all sample types and consisted of four sequential components: extraction, clean-up, gas chromatographic (GC) analysis, and
Sileshi, G
2006-10-01
Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
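The model-comparison step can be reproduced in miniature: fit Poisson, negative binomial, and zero-inflated Poisson likelihoods to a zero-heavy count sample and rank them by AIC. The counts are simulated; the paper's field data are not used.

```python
# Sketch of the model-selection step: fit three count models to a zero-heavy
# sample by maximum likelihood and rank them with AIC. Simulated data only.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import nbinom, poisson

rng = np.random.default_rng(42)
counts = np.concatenate([np.zeros(60, int), rng.poisson(3.0, 40)])  # ~60% zeros

def aic(nll, n_params):
    return 2 * nll + 2 * n_params

# Poisson: the MLE of the rate is the sample mean.
aic_pois = aic(-poisson.logpmf(counts, counts.mean()).sum(), 1)

# Negative binomial: optimize (n, p) on an unconstrained scale.
def nll_nb(theta):
    n, p = np.exp(theta[0]), 1 / (1 + np.exp(-theta[1]))
    return -nbinom.logpmf(counts, n, p).sum()
aic_nb = aic(minimize(nll_nb, [0.0, 0.0]).fun, 2)

# Zero-inflated Poisson with structural-zero weight pi.
def nll_zip(theta):
    pi, lam = 1 / (1 + np.exp(-theta[0])), np.exp(theta[1])
    p_zero = pi + (1 - pi) * np.exp(-lam)
    ll = np.where(counts == 0, np.log(p_zero),
                  np.log1p(-pi) + poisson.logpmf(counts, lam))
    return -ll.sum()
aic_zip = aic(minimize(nll_zip, [0.0, 1.0]).fun, 2)

print(f"AIC Poisson {aic_pois:.1f} | NB {aic_nb:.1f} | ZIP {aic_zip:.1f}")
# The zero-inflated fit should win here, mirroring the paper's point that
# ignoring excess zeros biases inference.
```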
Wang, Xuezhi; Huang, Xiaotao; Suvorova, Sofia; Moran, Bill
2018-01-01
Golay complementary waveforms can, in theory, yield radar returns of high range resolution with essentially zero sidelobes. In practice, when deployed conventionally, high signal-to-noise ratios can be achieved for static target detection, but significant range sidelobes are generated by target returns of nonzero Doppler, causing unreliable detection. We consider signal processing techniques using Golay complementary waveforms to improve radar detection performance in scenarios involving multiple nonzero-Doppler targets. A signal processing procedure based on the existing, so-called Binomial Design algorithm, which alters the transmission order of Golay complementary waveforms and weights the returns, is proposed in an attempt to achieve enhanced illumination performance. The procedure applies one of three proposed waveform transmission ordering algorithms, followed by a pointwise nonlinear processor combining the outputs of the Binomial Design algorithm and one of the ordering algorithms. The computational complexities of the Binomial Design algorithm and the three ordering algorithms are compared, and a statistical analysis of the performance of the pointwise nonlinear processing is given. Estimation of the areas in the Delay-Doppler map occupied by significant range sidelobes for given targets is also discussed. Numerical simulations comparing the performance of the Binomial Design algorithm and the three ordering algorithms are presented for both fixed and randomized target locations. The simulation results demonstrate that the proposed signal processing procedure has better detection performance, in terms of lower sidelobes and higher Doppler resolution, in the presence of multiple nonzero-Doppler targets compared to existing methods. PMID:29324708
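The complementarity property underlying all of these algorithms is easy to check numerically: for a Golay pair, the two autocorrelations sum to a single zero-lag spike. The append/negate recursion below is one standard construction, chosen for convenience.

```python
# Quick check of the zero-sidelobe property the paper builds on: for a Golay
# complementary pair (a, b), the autocorrelations sum to a delta function.
import numpy as np

def golay_pair(m):
    """Return a length-2**m Golay complementary pair via the standard recursion."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(6)                              # length-64 pair
acorr = np.correlate(a, a, "full") + np.correlate(b, b, "full")
print(np.count_nonzero(np.abs(acorr) > 1e-9))     # -> 1: only the zero-lag peak
print(acorr[len(a) - 1])                          # -> 2N = 128
# Nonzero Doppler breaks this exact cancellation, which is what the
# transmission-ordering designs above are built to mitigate.
```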
Some considerations for excess zeroes in substance abuse research.
Bandyopadhyay, Dipankar; DeSantis, Stacia M; Korte, Jeffrey E; Brady, Kathleen T
2011-09-01
Count data collected in substance abuse research often come with an excess of "zeroes," which are typically handled using zero-inflated regression models. However, there is a need to consider the design aspects of those studies before using such a statistical model to ascertain the sources of zeroes. We sought to illustrate hurdle models as alternatives to zero-inflated models to validate a two-stage decision-making process in situations of "excess zeroes." We use data from a study of 45 cocaine-dependent subjects where the primary scientific question was to evaluate whether study participation influences drug-seeking behavior. The outcome, "the frequency (count) of cocaine use days per week," is bounded (ranging from 0 to 7). We fit and compare binomial, Poisson, negative binomial, and the hurdle version of these models to study the effect of gender, age, time, and study participation on cocaine use. The hurdle binomial model provides the best fit. Gender and time are not predictive of use. Higher odds of use versus no use are associated with age; however once use is experienced, odds of further use decrease with increase in age. Participation was associated with higher odds of no-cocaine use; once there is use, participation reduced the odds of further use. Age and study participation are significantly predictive of cocaine-use behavior. The two-stage decision process as modeled by a hurdle binomial model (appropriate for bounded count data with excess zeroes) provides interesting insights into the study of covariate effects on count responses of substance use, when all enrolled subjects are believed to be "at-risk" of use.
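A minimal hurdle binomial fit looks like this; the weekly counts are toy values, with 7 days per week as the binomial denominator, matching the bounded outcome described above.

```python
# Minimal sketch of the hurdle binomial model: stage 1 models any-use vs
# no-use (the hurdle), stage 2 models use days among users with a
# zero-truncated binomial on 7 trials. Toy data, not the study's.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

days = np.array([0, 0, 0, 0, 2, 3, 7, 5, 0, 1, 4, 0])   # weekly use counts

# Stage 1: probability of crossing the hurdle (any use at all).
p_any = (days > 0).mean()

# Stage 2: fit p for a binomial(7, p) truncated at zero, users only.
users = days[days > 0]
def nll(p):
    logpmf = binom.logpmf(users, 7, p) - np.log1p(-binom.pmf(0, 7, p))
    return -logpmf.sum()
fit = minimize_scalar(nll, bounds=(1e-4, 1 - 1e-4), method="bounded")

print(f"P(any use) = {p_any:.2f}, per-day use prob. among users = {fit.x:.2f}")
# Covariates would enter each stage separately, which is how the model
# distinguishes initiation effects from continuation effects.
```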
Metaprop: a Stata command to perform meta-analysis of binomial data.
Nyaga, Victoria N; Arbyn, Marc; Aerts, Marc
2014-01-01
Meta-analyses have become an essential tool in synthesizing evidence on clinical and epidemiological questions derived from a multitude of similar studies assessing the particular issue. Appropriate and accessible statistical software is needed to produce the summary statistic of interest. Metaprop is a statistical program implemented to perform meta-analyses of proportions in Stata. It builds further on the existing Stata procedure metan which is typically used to pool effects (risk ratios, odds ratios, differences of risks or means) but which is also used to pool proportions. Metaprop implements procedures which are specific to binomial data and allows computation of exact binomial and score test-based confidence intervals. It provides appropriate methods for dealing with proportions close to or at the margins where the normal approximation procedures often break down, by use of the binomial distribution to model the within-study variability or by allowing Freeman-Tukey double arcsine transformation to stabilize the variances. Metaprop was applied on two published meta-analyses: 1) prevalence of HPV-infection in women with a Pap smear showing ASC-US; 2) cure rate after treatment for cervical precancer using cold coagulation. The first meta-analysis showed a pooled HPV-prevalence of 43% (95% CI: 38%-48%). In the second meta-analysis, the pooled percentage of cured women was 94% (95% CI: 86%-97%). By using metaprop, no studies with 0% or 100% proportions were excluded from the meta-analysis. Furthermore, study specific and pooled confidence intervals always were within admissible values, contrary to the original publication, where metan was used.
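Two of the ingredients metaprop relies on translate directly into code: the Clopper-Pearson exact interval (via beta quantiles) and the Freeman-Tukey double arcsine transform. The sketch below is a single-study illustration in Python, not a reimplementation of the Stata command.

```python
# Single-study sketch of two metaprop ingredients: an exact binomial CI and
# the Freeman-Tukey double arcsine variance-stabilizing transform.
import numpy as np
from scipy.stats import beta

def exact_ci(x, n, level=0.95):
    """Clopper-Pearson interval; well behaved even at x = 0 or x = n."""
    a = (1 - level) / 2
    lo = 0.0 if x == 0 else beta.ppf(a, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - a, x + 1, n - x)
    return lo, hi

def freeman_tukey(x, n):
    """Double arcsine transform; variance is ~1/(n + 0.5), independent of p."""
    return np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1)))

print(exact_ci(0, 25))            # a 0% proportion still gets an admissible CI
print(freeman_tukey(24, 25))      # stable even near the 100% margin
```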
Sequential analysis in neonatal research-systematic review.
Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne
2018-05-01
As more new drugs are discovered, traditional designs reach their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review, on the US National Library of Medicine and Excerpta Medica databases, of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In the 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates, by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674), with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in neonatology. They might potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs reach their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns have been infrequent, and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates, by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).
Hansen, John P
2003-01-01
Healthcare quality improvement professionals need to understand and use inferential statistics to interpret sample data from their organizations. In quality improvement and healthcare research studies all the data from a population often are not available, so investigators take samples and make inferences about the population by using inferential statistics. This three-part series will give readers an understanding of the concepts of inferential statistics as well as the specific tools for calculating confidence intervals for samples of data. This article, Part 1, presents basic information about data including a classification system that describes the four major types of variables: continuous quantitative variable, discrete quantitative variable, ordinal categorical variable (including the binomial variable), and nominal categorical variable. A histogram is a graph that displays the frequency distribution for a continuous variable. The article also demonstrates how to calculate the mean, median, standard deviation, and variance for a continuous variable.
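A minimal worked example of the summary statistics the article walks through, using Python's standard library on invented blood pressure readings (the data and variable names are mine, for illustration only):

```python
import statistics

# Hypothetical systolic blood pressure readings (mmHg) from a sample
sample = [118, 124, 131, 119, 142, 127, 135, 122, 129, 138]

mean = statistics.mean(sample)            # arithmetic mean
median = statistics.median(sample)        # middle value when sorted
variance = statistics.variance(sample)    # sample variance (n - 1 denominator)
stdev = statistics.stdev(sample)          # sample standard deviation

print(f"mean={mean:.1f}, median={median:.1f}, "
      f"variance={variance:.1f}, sd={stdev:.1f}")
```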
Movement of particles using sequentially activated dielectrophoretic particle trapping
Miles, Robin R.
2004-02-03
DNA and cells/spores are manipulated using dielectrophoretic (DEP) forces to perform sample preparation protocols for polymerase chain reaction (PCR)-based assays in various applications. This is accomplished by moving particles using sequentially activated dielectrophoretic particle trapping. DEP forces induce a dipole in particles, and these particles can be trapped in non-uniform fields. The particles can be trapped in the high-field-strength region of one set of electrodes. By switching off this field and switching on an adjacent set of electrodes, particles can be moved down a channel with little or no flow.
Distribution pattern of public transport passenger in Yogyakarta, Indonesia
NASA Astrophysics Data System (ADS)
Narendra, Alfa; Malkhamah, Siti; Sopha, Bertha Maya
2018-03-01
The arrival and departure distribution pattern of Trans Jogja bus passengers is one of the fundamental models for simulation. The purpose of this paper is to build models of passenger flows. This research used passenger data from January to May 2014. No policy change to the operation system has since affected the nature of this pattern: the roads, buses, land uses, schedule, and people are still essentially the same. The data were categorized by direction, day, and location, and each category was fitted to several well-known discrete distributions. The fitted distributions were compared by their AIC and BIC values, and the negative binomial distribution had the smallest values of both. Probability mass function (PMF) plots of those models were compared to derive a generic model from the per-category negative binomial models. The accepted generic negative binomial distribution has a dispersion (size) parameter of 0.7064 and a mean (mu) of 1.4504; the minimum and maximum passenger counts under the distribution are 0 and 41.
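To make the model selection step concrete, here is a hedged Python sketch that scores a Poisson against a negative binomial by AIC and BIC, using the dispersion and mean reported above (0.7064 and 1.4504); the toy passenger counts are invented, not the study's data.

```python
import numpy as np
from scipy import stats

# Parameters reported in the abstract (size/dispersion and mean mu);
# scipy's nbinom is parameterized by (n, p) with p = size / (size + mu)
size, mu = 0.7064, 1.4504
p = size / (size + mu)

# Hypothetical observed passenger counts per bus arrival (range 0..41)
counts = np.array([0, 2, 1, 0, 5, 3, 0, 1, 7, 2])

# Model comparison as in the paper: smaller AIC/BIC wins
for name, dist, k in [("poisson", stats.poisson(counts.mean()), 1),
                      ("negbin", stats.nbinom(size, p), 2)]:
    loglik = dist.logpmf(counts).sum()
    aic = 2 * k - 2 * loglik                  # k = number of parameters
    bic = k * np.log(len(counts)) - 2 * loglik
    print(f"{name}: AIC={aic:.1f} BIC={bic:.1f}")
```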
Temporal acceleration of spatially distributed kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatterjee, Abhijit; Vlachos, Dionisios G.
The computational intensity of kinetic Monte Carlo (KMC) simulation is a major impediment to simulating large length and time scales. In recent work, an approximate method for KMC simulation of spatially uniform systems, termed the binomial τ-leap method, was introduced [A. Chatterjee, D.G. Vlachos, M.A. Katsoulakis, Binomial distribution based τ-leap accelerated stochastic simulation, J. Chem. Phys. 122 (2005) 024112], in which bundles of molecular events, instead of individual processes, are executed over coarse-grained time increments. This temporal coarse-graining can lead to significant computational savings, but its generalization to spatially distributed lattice KMC simulation had not yet been realized. Here we extend the binomial τ-leap method to lattice KMC simulations by combining it with spatially adaptive coarse-graining. Absolute stability and computational speed-up analyses for spatial systems, along with simulations, provide insights into the conditions under which accuracy and substantial acceleration of the new spatio-temporal coarse-graining method are ensured. Model systems demonstrate that the r-time increment criterion of Chatterjee et al. obeys the absolute stability limit for values of r up to near 1.
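The core idea of the binomial τ-leap method can be sketched in a few lines of Python: events are executed in bundles drawn from a binomial over a coarse time increment, which, unlike a plain Poisson leap, can never fire more events than there are molecules. This toy first-order decay illustrates only the uniform-system method, not the paper's lattice extension; all parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def binomial_tau_leap_step(n_A, rate, tau):
    """One coarse-grained step: execute a bundle of events at once.

    The number of firings in a window tau is drawn from a binomial,
    so it is bounded by the n_A molecules actually available.
    """
    p = min(1.0, rate * tau)        # per-molecule firing probability in tau
    fired = rng.binomial(n_A, p)    # bundle of events executed together
    return n_A - fired

# Toy first-order decay A -> B over coarse time increments
n_A, rate, tau = 10_000, 0.3, 0.1
for _ in range(5):
    n_A = binomial_tau_leap_step(n_A, rate, tau)
    print(n_A)
```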
O'Brien, Kathryn; Stanton, Naomi; Edwards, Adrian; Hood, Kerenza; Butler, Christopher C
2011-03-01
Due to the non-specific nature of symptoms of UTI in children and low levels of urine sampling, the prevalence of UTI amongst acutely ill children in primary care is unknown. To undertake an exploratory study of acutely ill children consulting in primary care, determine the feasibility of obtaining urine samples, and describe presenting symptoms and signs and the proportion with UTI. Exploratory, observational study. Four general practices in South Wales. A total of 99 sequential attendees with acute illness aged less than five years. UTI defined by >10⁵ organisms/mL on laboratory culture of urine. Urine samples were obtained in 75 (76%) children. Three (4%) met microbiological criteria for UTI. GPs indicated they would not normally have obtained urine samples in any of these three children. However, all had received antibiotics for suspected alternative infections. Urine sample collection is feasible from the majority of acutely ill children in primary care, including infants. Some cases of UTI may be missed if children thought to have an alternative site of infection are excluded from urine sampling. A larger study is needed to more accurately determine the prevalence of UTI in children consulting with acute illness in primary care, and to explore which symptoms and signs might help clinicians effectively target urine sampling.
Method and apparatus for telemetry adaptive bandwidth compression
NASA Technical Reports Server (NTRS)
Graham, Olin L.
1987-01-01
Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field and range rate information from the sensor are hence sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sample, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.
Button, Le; Peter, Beate; Stoel-Gammon, Carol; Raskind, Wendy H
2013-03-01
The purpose of this study was to address the hypothesis that childhood apraxia of speech (CAS) is influenced by an underlying deficit in sequential processing that is also expressed in other modalities. In a sample of 21 adults from five multigenerational families, 11 with histories of various familial speech sound disorders, 3 biologically related adults from a family with familial CAS showed motor sequencing deficits in an alternating motor speech task. Compared with the other adults, these three participants showed deficits in tasks requiring high loads of sequential processing, including nonword imitation, nonword reading and spelling. Qualitative error analyses in real word and nonword imitations revealed group differences in phoneme sequencing errors. Motor sequencing ability was correlated with phoneme sequencing errors during real word and nonword imitation, reading and spelling. Correlations were characterized by extremely high scores in one family and extremely low scores in another. Results are consistent with a central deficit in sequential processing in CAS of familial origin.
Devaluation and sequential decisions: linking goal-directed and model-based behavior
Friedel, Eva; Koch, Stefan P.; Wendt, Jean; Heinz, Andreas; Deserno, Lorenz; Schlagenhauf, Florian
2014-01-01
In experimental psychology, different experiments have been developed to assess goal-directed as compared to habitual control over instrumental decisions. As in animal studies, selective devaluation procedures have been used. More recently, sequential decision-making tasks have been designed to assess the degree of goal-directed vs. habitual choice behavior in terms of an influential computational theory of model-based compared to model-free behavioral control. As recently suggested, the different measurements are thought to reflect the same construct. Yet there has been no attempt to directly assess the construct validity of these different measurements. In the present study, we used a devaluation paradigm and a sequential decision-making task to address this question of construct validity in a sample of 18 healthy male human participants. Correlational analysis revealed a positive association between model-based choices during sequential decisions and goal-directed behavior after devaluation, suggesting a single framework underlying both operationalizations and speaking in favor of the construct validity of both measurement approaches. Up to now, this has been merely assumed but never directly tested in humans. PMID:25136310
The parallel-sequential field subtraction techniques for nonlinear ultrasonic imaging
NASA Astrophysics Data System (ADS)
Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.
2018-04-01
Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage and have sensitivity to particularly closed defects. This study utilizes two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In the parallel focusing, a high intensity ultrasonic beam is formed in the specimen at the focal point. However, in sequential focusing only low intensity signals from individual elements enter the sample and the full matrix of transmit-receive signals is recorded; with elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images formed from the coherent component of the field and use this to characterize nonlinearity of closed fatigue cracks. In particular we monitor the reduction in amplitude at the fundamental frequency at each focal point and use this metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g., back wall or large scatters) and allow damage to be detected at an early stage.
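Schematically, the subtraction step reduces to differencing the two coherent images formed under parallel and sequential firing; for a linear (elastic) medium the images coincide, so any residual flags nonlinear scatterers such as closed cracks. The numpy sketch below uses entirely synthetic arrays to show the bookkeeping only; it is not the authors' processing chain.

```python
import numpy as np

rng = np.random.default_rng(1)

# parallel_img: image with all array elements fired together (high intensity)
# sequential_img: image synthesized from the full matrix of low-intensity
# transmit-receive signals, focused in post-processing
parallel_img = rng.random((64, 64))
sequential_img = parallel_img.copy()
sequential_img[30:34, 30:34] += 0.2   # pretend a closed crack responds nonlinearly

# For a purely linear medium this map would be zero everywhere
nonlinearity_map = np.abs(parallel_img - sequential_img)
print(nonlinearity_map.max())         # peak of the metric marks the defect
```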
Judgments relative to patterns: how temporal sequence patterns affect judgments and memory.
Kusev, Petko; Ayton, Peter; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Stewart, Neil; Chater, Nick
2011-12-01
Six experiments studied relative frequency judgment and recall of sequentially presented items drawn from 2 distinct categories (i.e., city and animal). The experiments show that judged frequencies of categories of sequentially encountered stimuli are affected by certain properties of the sequence configuration. We found (a) a first-run effect whereby people overestimated the frequency of a given category when that category was the first repeated category to occur in the sequence and (b) a dissociation between judgments and recall; respondents may judge 1 event more likely than the other and yet recall more instances of the latter. Specifically, the distribution of recalled items does not correspond to the frequency estimates for the event categories, indicating that participants do not make frequency judgments by sampling their memory for individual items as implied by other accounts such as the availability heuristic (Tversky & Kahneman, 1973) and the availability process model (Hastie & Park, 1986). We interpret these findings as reflecting the operation of a judgment heuristic sensitive to sequential patterns and offer an account for the relationship between memory and judged frequencies of sequentially encountered stimuli.
The Domino Way to Heterocycles
Padwa, Albert; Bur, Scott K.
2007-01-01
Sequential transformations enable the facile synthesis of complex target molecules from simple building blocks in a single preparative step. Their value is amplified if they also create multiple stereogenic centers. In the ongoing search for new domino processes, emphasis is usually placed on sequential reactions which occur cleanly and without forming by-products. As a prerequisite for an ideally proceeding one-pot sequential transformation, the reactivity pattern of all participating components has to be such that each building block gets involved in a reaction only when it is supposed to do so. The development of sequences that combine transformations of fundamentally different mechanisms broadens the scope of such procedures in synthetic chemistry. This mini review contains a representative sampling from the last 15 years on the kinds of reactions that have been sequenced into cascades to produce heterocyclic molecules. PMID:17940591
Rapid Decisions From Experience
Zeigenfuse, Matthew D.; Pleskac, Timothy J.; Liu, Taosheng
2014-01-01
In many everyday decisions, people quickly integrate noisy samples of information to form a preference among alternatives that offer uncertain rewards. Here, we investigated this decision process using the Flash Gambling Task (FGT), in which participants made a series of choices between a certain payoff and an uncertain alternative that produced a normal distribution of payoffs. For each choice, participants experienced the distribution of payoffs via rapid samples updated every 50 ms. We show that people can make these rapid decisions from experience and that the decision process is consistent with a sequential sampling process. Results also reveal a dissociation between these preferential decisions and equivalent perceptual decisions where participants had to determine which alternatives contained more dots on average. To account for this dissociation, we developed a sequential sampling rank-dependent utility model, which showed that participants in the FGT attended more to larger potential payoffs than participants in the perceptual task despite being given equivalent information. We discuss the implications of these findings in terms of computational models of preferential choice and a more complete understanding of experience-based decision making. PMID:24549141
Co-expression of HoxA9 and bcr-abl genes in chronic myeloid leukemia.
Tedeschi, Fabián A; Cardozo, Maria A; Valentini, Rosanna; Zalazar, Fabián E
2010-05-01
We have analyzed the co-expression of the bcr-abl and HoxA9 genes in the follow-up of patients with chronic myeloid leukemia (CML). In the present work we measured the HoxA9 and bcr-abl gene expression in sequential samples. In all patients, bcr-abl and HoxA9 were expressed at detectable levels in every sample. When the results were expressed in relation to abl, two different situations were found: (a) patients clinically stable at second sampling, with low relative risk at diagnosis (low Sokal's score), did not show significant differences in both bcr-abl and HoxA9 levels in the sequential samples analyzed, and (b) patients with poor prognosis (showing intermediate or high Sokal's score at diagnosis) had increased expression of bcr-abl as well as HoxA9 genes (p < 0.05). Since HoxA9 gene expression remains at relatively constant levels throughout adult life, our results could reflect actual changes in the expression rate of this gene associated with bcr-abl during the progression of CML.
NASA Astrophysics Data System (ADS)
Pragourpun, Kraivinee; Sakee, Uthai; Fernandez, Carlos; Kruanetr, Senee
2015-05-01
We present for the first time the use of deferiprone as a non-toxic complexing agent for the determination of iron by sequential injection analysis in pharmaceuticals and food samples. The method was based on the reaction of Fe(III) and deferiprone in phosphate buffer at pH 7.5 to give a Fe(III)-deferiprone complex, which showed a maximum absorption at 460 nm. Under the optimum conditions, the linearity range for iron determination was found over the range of 0.05-3.0 μg mL⁻¹ with a correlation coefficient (r²) of 0.9993. The limit of detection and limit of quantitation were 0.032 μg mL⁻¹ and 0.055 μg mL⁻¹, respectively. The relative standard deviation (%RSD) of the method was less than 5.0% (n = 11), and the percentage recovery was found in the range of 96.0-104.0%. The proposed method was satisfactorily applied for the determination of Fe(III) in pharmaceuticals, water and food samples with a sampling rate of 60 h⁻¹.
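For readers who want to reproduce such figures of merit, the sketch below fits a straight-line calibration and applies the common 3.3σ/slope and 10σ/slope conventions for the detection and quantitation limits. The standards and blank noise are hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical calibration: Fe(III) standards (ug/mL) vs absorbance at 460 nm
conc = np.array([0.05, 0.5, 1.0, 1.5, 2.0, 3.0])
absorb = np.array([0.008, 0.081, 0.158, 0.243, 0.322, 0.484])

slope, intercept = np.polyfit(conc, absorb, 1)   # least-squares line
pred = slope * conc + intercept
r2 = 1 - np.sum((absorb - pred) ** 2) / np.sum((absorb - absorb.mean()) ** 2)

# Common conventions: LOD = 3.3*sigma_blank/slope, LOQ = 10*sigma_blank/slope
sigma_blank = 0.0016                 # sd of repeated blank signals (assumed)
lod = 3.3 * sigma_blank / slope
loq = 10 * sigma_blank / slope
print(f"r2={r2:.4f}, LOD={lod:.3f} ug/mL, LOQ={loq:.3f} ug/mL")
```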
Modeling number of claims and prediction of total claim amount
NASA Astrophysics Data System (ADS)
Acar, Aslıhan Şentürk; Karabey, Uǧur
2017-07-01
In this study we focus on annual number of claims of a private health insurance data set which belongs to a local insurance company in Turkey. In addition to Poisson model and negative binomial model, zero-inflated Poisson model and zero-inflated negative binomial model are used to model the number of claims in order to take into account excess zeros. To investigate the impact of different distributional assumptions for the number of claims on the prediction of total claim amount, predictive performances of candidate models are compared by using root mean square error (RMSE) and mean absolute error (MAE) criteria.
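A hedged Python sketch of this model-comparison workflow: fit Poisson and negative binomial GLMs to claim counts and score them by RMSE and MAE. The data are synthetic and the rating factor is invented; statsmodels also provides zero-inflated Poisson and negative binomial models in its discrete-model module for the excess-zeros case the abstract mentions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Hypothetical policyholder data: age as a single rating factor,
# overdispersed annual claim counts with many zeros
n = 500
age = rng.uniform(20, 70, n)
X = sm.add_constant(age)
y = rng.negative_binomial(1, 0.6, n)    # stand-in for real claim counts

poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
nb_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()

for name, fit in [("Poisson", poisson_fit), ("NegBin", nb_fit)]:
    mu = fit.predict(X)
    rmse = np.sqrt(np.mean((y - mu) ** 2))   # root mean square error
    mae = np.mean(np.abs(y - mu))            # mean absolute error
    print(f"{name}: AIC={fit.aic:.1f} RMSE={rmse:.3f} MAE={mae:.3f}")
```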
Binomial tree method for pricing regime-switching volatility stock loans
NASA Astrophysics Data System (ADS)
Putri, Endah R. M.; Zamani, Muhammad S.; Utomo, Daryono B.
2018-03-01
A binomial model with regime switching may represent the price of a stock loan, which follows a stochastic process. A stock loan is one alternative that appeals to investors seeking liquidity without selling their stock. The stock loan mechanism resembles that of an American call option, in that the holder can exercise at any time during the contract period. Because the mechanisms coincide, the price of a stock loan can be derived from the model for an American call option. The simulation results show the behavior of the stock loan price under regime-switching volatility with respect to various interest rates and maturities.
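Since the abstract prices the stock loan off the American call analogy, a single-regime Cox-Ross-Rubinstein tree is the natural building block. The sketch below is that building block only, with invented parameters; the paper's regime-switching layer (coupled trees, one per volatility regime) is not reproduced here.

```python
import numpy as np

def american_call_binomial(S0, K, r, sigma, T, steps):
    """Cox-Ross-Rubinstein binomial tree for an American-style call,
    allowing exercise at every node, as a stock loan allows repayment
    at any time during the contract."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))         # up factor
    d = 1.0 / u                             # down factor
    q = (np.exp(r * dt) - d) / (u - d)      # risk-neutral up probability
    disc = np.exp(-r * dt)

    # terminal stock prices and payoffs (index j = number of down moves)
    S = S0 * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    V = np.maximum(S - K, 0.0)

    # backward induction with an early-exercise check at each node
    for i in range(steps - 1, -1, -1):
        S = S0 * u ** np.arange(i, -1, -1) * d ** np.arange(0, i + 1)
        V = np.maximum(disc * (q * V[:-1] + (1 - q) * V[1:]), S - K)
    return V[0]

print(american_call_binomial(S0=100, K=90, r=0.05, sigma=0.3, T=1.0, steps=200))
```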
Umbral Calculus and Holonomic Modules in Positive Characteristic
NASA Astrophysics Data System (ADS)
Kochubei, Anatoly N.
2006-03-01
In the framework of analysis over local fields of positive characteristic, we develop algebraic tools for introducing and investigating various polynomial systems. In this survey paper we describe a function field version of umbral calculus developed on the basis of a relation of binomial type satisfied by the Carlitz polynomials. We consider modules over the Weyl-Carlitz ring, a function field counterpart of the Weyl algebra. It is shown that some basic objects of function field arithmetic, like the Carlitz module, Thakur's hypergeometric polynomials, and analogs of binomial coefficients arising in the positive characteristic version of umbral calculus, generate holonomic modules.
Modelling parasite aggregation: disentangling statistical and ecological approaches.
Yakob, Laith; Soares Magalhães, Ricardo J; Gray, Darren J; Milinovich, Gabriel; Wardrop, Nicola; Dunning, Rebecca; Barendregt, Jan; Bieri, Franziska; Williams, Gail M; Clements, Archie C A
2014-05-01
The overdispersion in macroparasite infection intensity among host populations is commonly simulated using a constant negative binomial aggregation parameter. We describe an alternative to utilising the negative binomial approach and demonstrate important disparities in intervention efficacy projections that can come about from opting for pattern-fitting models that are not process-explicit. We present model output in the context of the epidemiology and control of soil-transmitted helminths due to the significant public health burden imposed by these parasites, but our methods are applicable to other infections with demonstrable aggregation in parasite numbers among hosts. Copyright © 2014. Published by Elsevier Ltd.
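For reference, the negative binomial parameterization usual in this literature, with mean m and aggregation parameter k (smaller k meaning stronger aggregation); the notation here is mine, not the paper's:

```latex
P(X = x) \;=\; \frac{\Gamma(k + x)}{x!\,\Gamma(k)}
\left(\frac{k}{k + m}\right)^{k}
\left(\frac{m}{k + m}\right)^{x},
\qquad
\operatorname{Var}(X) \;=\; m + \frac{m^{2}}{k} .
```

As k → ∞ this recovers the Poisson case; holding k constant across host populations is precisely the pattern-fitting shortcut the authors caution against.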
FluBreaks: early epidemic detection from Google flu trends.
Pervaiz, Fahad; Pervaiz, Mansoor; Abdur Rehman, Nabeel; Saif, Umar
2012-10-04
The Google Flu Trends service was launched in 2008 to track changes in the volume of online search queries related to flu-like symptoms. Over the last few years, the trend data produced by this service has shown a consistent relationship with the actual number of flu reports collected by the US Centers for Disease Control and Prevention (CDC), often identifying increases in flu cases weeks in advance of CDC records. However, contrary to popular belief, Google Flu Trends is not an early epidemic detection system. Instead, it is designed as a baseline indicator of the trend, or changes, in the number of disease cases. To evaluate whether these trends can be used as a basis for an early warning system for epidemics. We present the first detailed algorithmic analysis of how Google Flu Trends can be used as a basis for building a fully automated system for early warning of epidemics in advance of methods used by the CDC. Based on our work, we present a novel early epidemic detection system, called FluBreaks (dritte.org/flubreaks), based on Google Flu Trends data. We compared the accuracy and practicality of three types of algorithms: normal distribution algorithms, Poisson distribution algorithms, and negative binomial distribution algorithms. We explored the relative merits of these methods, and related our findings to changes in Internet penetration and population size for the regions in Google Flu Trends providing data. Across our performance metrics of percentage true-positives (RTP), percentage false-positives (RFP), percentage overlap (OT), and percentage early alarms (EA), Poisson- and negative binomial-based algorithms performed better in all except RFP. Poisson-based algorithms had average values of 99%, 28%, 71%, and 76% for RTP, RFP, OT, and EA, respectively, whereas negative binomial-based algorithms had average values of 97.8%, 17.8%, 60%, and 55% for RTP, RFP, OT, and EA, respectively. Moreover, the EA was also affected by the region's population size. Regions with larger populations (regions 4 and 6) had higher values of EA than region 10 (which had the smallest population) for negative binomial- and Poisson-based algorithms. The difference was 12.5% and 13.5% on average in negative binomial- and Poisson-based algorithms, respectively. We present the first detailed comparative analysis of popular early epidemic detection algorithms on Google Flu Trends data. We note that realizing this opportunity requires moving beyond the cumulative sum and historical limits method-based normal distribution approaches, traditionally employed by the CDC, to negative binomial- and Poisson-based algorithms to deal with potentially noisy search query data from regions with varying population and Internet penetrations. Based on our work, we have developed FluBreaks, an early warning system for flu epidemics using Google Flu Trends.
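The alarm logic being compared can be reduced to a quantile-threshold test against a fitted baseline distribution. This Python sketch (baseline counts invented, negative binomial fitted by the method of moments) shows one plausible Poisson and negative binomial version of the idea, not FluBreaks' exact algorithms.

```python
import numpy as np
from scipy import stats

def poisson_alarm(history, current, q=0.95):
    """Alarm when the current count exceeds the q-quantile of a Poisson
    fitted to the recent non-epidemic baseline."""
    lam = np.mean(history)
    return current > stats.poisson.ppf(q, lam)

def negbin_alarm(history, current, q=0.95):
    """Same idea with a negative binomial, which tolerates the extra
    variance of noisy search-query counts (method-of-moments fit)."""
    m, v = np.mean(history), np.var(history, ddof=1)
    if v <= m:                       # no overdispersion: fall back to Poisson
        return poisson_alarm(history, current, q)
    p = m / v                        # moment matching: mean m, variance v
    n = m * p / (1 - p)
    return current > stats.nbinom.ppf(q, n, p)

baseline = [210, 198, 225, 240, 205, 231, 219]   # hypothetical weekly counts
print(poisson_alarm(baseline, 310), negbin_alarm(baseline, 310))
```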
Verification of hypergraph states
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Takeuchi, Yuki; Hayashi, Masahito
2017-12-01
Hypergraph states are generalizations of graph states where controlled-Z gates on edges are replaced with generalized controlled-Z gates on hyperedges. Hypergraph states have several advantages over graph states. For example, certain hypergraph states, such as the Union Jack states, are universal resource states for measurement-based quantum computing with only Pauli measurements, while graph state measurement-based quantum computing needs non-Clifford basis measurements. Furthermore, it is impossible to classically efficiently sample measurement results on hypergraph states unless the polynomial hierarchy collapses to the third level. Although several protocols have been proposed to verify graph states with only sequential single-qubit Pauli measurements, there was no verification method for hypergraph states. In this paper, we propose a method for verifying a certain class of hypergraph states with only sequential single-qubit Pauli measurements. Importantly, no i.i.d. property of samples is assumed in our protocol: any artificial entanglement among samples cannot fool the verifier. As applications of our protocol, we consider verified blind quantum computing with hypergraph states, and quantum computational supremacy demonstrations with hypergraph states.
de Oliveira, Fabio Santos; Korn, Mauro
2006-01-15
A sensitive SIA method was developed for sulphate determination in automotive fuel ethanol. The method was based on the reaction of sulphate with barium-dimethylsulphonazo(III), leading to a decrease in the magnitude of the analytical signal monitored at 665 nm. Alcohol fuel samples were burned beforehand to avoid matrix effects in the sulphate determinations. Binary sampling and stop-flow strategies were used to increase the sensitivity of the method. The analytical parameters were optimized by the response surface method using Box-Behnken and central composite designs. The proposed sequential flow procedure permits the determination of up to 10.0 mg SO₄²⁻ L⁻¹ with R.S.D. < 2.5% and a limit of detection of 0.27 mg L⁻¹. The method has been successfully applied to sulphate determination in automotive fuel alcohol, and the results agreed with the reference volumetric method. Under the optimized conditions, the SIA system processed 27 samples per hour.
Alonso, E; Aparicio, I; Santos, J L; Villar, P; Santos, A
2009-01-01
The content of heavy metals is the major limitation to the application of sewage sludge in soil. However, assessment of the pollution by total metal determination does not reveal the true environmental impact. It is necessary to apply sequential extraction techniques to obtain suitable information about their bioavailability or toxicity. In this paper, sequential extraction of metals was applied to sludge sampled before and after aerobic digestion at five WWTPs in southern Spain, to assess the influence of the digestion treatment on metal concentrations. The percentage of each metal in residual, oxidizable, reducible and exchangeable forms was calculated. For this purpose, sludge samples were collected from two different points of the plants, namely, sludge from the mixture (primary and secondary sludge) tank (mixed sludge, MS) and the digested-dewatered sludge (final sludge, FS). Heavy metals, Al, Cd, Co, Cr, Cu, Fe, Hg, Mn, Mo, Ni, Pb, Ti and Zn, were extracted following the sequential extraction scheme proposed by the Standards, Measurements and Testing Programme of the European Commission and determined by inductively-coupled plasma atomic emission spectrometry. The total concentration of heavy metals in the measured sludge samples did not exceed the limits set out by European legislation and the metals were mainly associated with the two less-available fractions (27-28% as oxidizable metal and 44-50% as residual metal). However, metals such as Co (64% in MS and 52% in FS samples), Mn (82% in MS and 79% in FS), Ni (32% in MS and 26% in FS) and Zn (79% in MS and 62% in FS) were present at important percentages in available forms. In addition, results showed a clear increase, after sludge treatment, in the proportion of metals in the two less-available fractions (oxidizable and residual metal).
Assessment of DSM-5 Section II Personality Disorders With the MMPI-2-RF in a Nonclinical Sample.
Sellbom, Martin; Smith, Alexander
2017-01-01
The Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008/2011) is frequently used in clinical practice. However, there has been a dearth of literature on how well this instrument can assess symptoms associated with personality disorders (PDs). This investigation examined a range of hypothesized MMPI-2-RF scales in predicting PD symptoms. We evaluated these associations in a sample of 397 university students who had been administered the MMPI-2-RF and the Structured Clinical Interview for DSM-IV Axis II Disorders-Personality Questionnaire (First, Gibbon, Spitzer, Williams, & Benjamin, 1997). Zero-order correlation analyses and negative binomial regression models indicated that a wide range of MMPI-2-RF scale hypotheses were supported; however, the least support was available for predicting schizoid and obsessive-compulsive PDs. Implications for MMPI-2-RF interpretation and PD diagnosis are discussed.
[Parenting styles and their relationship with hyperactivity].
Raya Trenas, Antonio Félix; Herreruzo Cabrera, Javier; Pino Osuna, María José
2008-11-01
The present study aims to determine the relationship among factors that make up the parenting styles according to the PCRI (Parent-Child Relationship Inventory) and hyperactivity reported by parents through the BASC (Behaviour Assessment System for Children). We selected a sample of 32 children between 3 and 14 years old (23 male and 9 female) with risk scores in hyperactivity and another similar group with low scores in hyperactivity. After administering both instruments to the parents, we carried out a binomial logistic regression analysis which resulted in a prediction model for 84.4% of the sample, made up of the PCRI factors: fathers' involvement, communication and role orientation, mothers' parental support, and both parents' limit-setting and autonomy. Moreover, our analysis of the variance produced significant differences in the support perceived by the fathers and mothers of both groups. Lastly, the utility of results to propose intervention strategies within the family based on an authoritative style is discussed.
Gastrointestinal parasite egg excretion in young calves in periurban livestock production in Mali.
Wymann, Monica Natalie; Traore, Koniba; Bonfoh, Bassirou; Tembely, Saïdou; Tembely, Sékouba; Zinsstag, Jakob
2008-04-01
To acquire the information needed to improve parasite control in periurban cattle production in Mali, repeated sampling of faeces of 694 calves kept around Bamako was done in 2003/2004. The effects of season, age, breed, management type, parasite control and presence of sheep on egg and oocyst counts were determined. A Bayesian model was used with a negative binomial distribution and herd and individual effects, to account for the clustering of calves in herds and the repeated sampling. Interviews were conducted to report the current control strategies. We found eggs of Strongyloides papillosus (Age class 0-1 month: prevalence 39%, 2-3 months: 59%, 5-6 months: 42%), strongyles (14%, 24%, 36%), coccidian oocysts (37%, 68%, 64%) and at low prevalence eggs of Toxocara vitulorum, Moniezia sp., Trichuris sp. and Paramphistomum sp. Season and age effects occurred. Reported utilisation of parasite control was high (92%) but monthly recorded use was significantly lower (61%).
A Nationwide Study of Discrimination and Chronic Health Conditions Among Asian Americans
Gee, Gilbert C.; Spencer, Michael S.; Chen, Juan; Takeuchi, David
2007-01-01
Objectives. We examined whether self-reported everyday discrimination was associated with chronic health conditions among a nationally representative sample of Asian Americans. Methods. Data were from the Asian American subsample (n = 2095) of the National Latino and Asian American Study conducted in 2002 and 2003. Regression techniques (negative binomial and logistic) were used to examine the association between discrimination and chronic health conditions. Analyses were conducted for the entire sample and 3 Asian subgroups (Chinese, Vietnamese, and Filipino). Results. Reports of everyday discrimination were associated with many chronic conditions, after we controlled for age, gender, region, per capita income, education, employment, and social desirability bias. Discrimination was also associated with indicators of heart disease, pain, and respiratory illnesses. There were some differences by Asian subgroup. Conclusions. Everyday discrimination may contribute to stress experienced by racial/ethnic minorities and could lead to chronic illness. PMID:17538055
Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions
Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.
2017-01-09
We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations w.r.t. probability laws associated to a discretization. For instance, in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61-123). The assumptions are verified for an example.
Nyman, U; Lundberg, I; Hedfors, E; Wahren, M; Pettersson, I
1992-01-01
Sequentially obtained serum samples from 30 patients with connective tissue disease positive for antibody to ribonucleoprotein (RNP) were examined to determine the specificities of IgG and IgM antibodies to snRNP during the disease course using immunoblotting of nuclear extracts. The antibody patterns were correlated with disease activity. The patterns of antibody to snRNP of individual patients were mainly stable during the study but changes in levels of antibody to snRNP were seen corresponding to changes in clinical activity. These results indicate that increased reactivity of serum IgM antibodies against the B/B' proteins seems to precede a clinically evident exacerbation of disease whereas IgG antibody reactivity to the 70 K protein peaks at the time of a disease flare. PMID:1485812
Alhusban, Ala A; Gaudry, Adam J; Breadmore, Michael C; Gueven, Nuri; Guijt, Rosanne M
2014-01-03
Cell culture has replaced many in vivo studies because of ethical and regulatory measures as well as the possibility of increased throughput. Analytical assays to determine (bio)chemical changes are often based on end-point measurements rather than on a series of sequential determinations. The purpose of this work is to develop an analytical system for monitoring cell culture based on sequential injection-capillary electrophoresis (SI-CE) with capacitively coupled contactless conductivity detection (C4D). The system was applied for monitoring lactate production, an important metabolic indicator, during mammalian cell culture. Using a background electrolyte consisting of 25 mM tris(hydroxymethyl)aminomethane and 35 mM cyclohexyl-2-aminoethanesulfonic acid with 0.02% poly(ethyleneimine) (PEI) at pH 8.65, and a multilayer polymer-coated capillary, lactate could be resolved from other compounds present in media, with relative standard deviations of 0.07% for intraday electrophoretic mobility and an analysis time of less than 10 min. Using the human embryonic kidney cell line HEK293, lactate concentrations in the cell culture medium were measured every 20 min over 3 days, requiring only 8.73 μL of sample per run. Combining simplicity, portability, automation, high sample throughput, low limits of detection, low sample consumption and the ability to up- and outscale, this new methodology represents a promising technique for near real-time monitoring of chemical changes in diverse cell culture applications. Copyright © 2013 Elsevier B.V. All rights reserved.
Khongpet, Wanpen; Pencharee, Somkid; Puangpila, Chanida; Kradtap Hartwell, Supaporn; Lapanantnoppakhun, Somchai; Jakmunee, Jaroon
2018-01-15
A microfluidic hydrodynamic sequential injection (μHSI) spectrophotometric system was designed and fabricated. The system was built by laser-engraving a manifold pattern on an acrylic block and sealing it with another flat acrylic plate to form a microfluidic channel platform. The platform was incorporated with small solenoid valves to obtain a portable setup for programmable control of the liquid flow into the channel according to the HSI principle. The system was demonstrated for the determination of phosphate using a molybdenum blue method. Ascorbic acid, standard or sample, and acidic molybdate solutions were sequentially aspirated to fill the channel, forming a stacked zone before flowing to the detector. Under the optimum conditions, a linear calibration graph in the range of 0.1-6 mg P L⁻¹ was obtained. The detection limit was 0.1 mg L⁻¹. The system is compact (5.0 mm thick, 80 mm wide × 140 mm long), durable, portable, cost-effective, and consumes small amounts of chemicals (83 μL each of molybdate and ascorbic acid, 133 μL of the sample solution and 1.7 mL of water carrier per run). It was applied for the determination of phosphate content in extracted soil samples. The percent recoveries of the analysis were in the range of 91.2-107.3%. The results obtained agreed well with those of the batch spectrophotometric method. Copyright © 2017 Elsevier B.V. All rights reserved.
Void probability as a function of the void's shape and scale-invariant models
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1991-01-01
The dependence of counts in cells on the shape of the cell is studied for the large-scale galaxy distribution. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is larger for certain elongated cells. A phenomenological scale-invariant model for the observed distribution of the counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
Adams, Rachel Sayko; Larson, Mary Jo; Corrigan, John D.; Ritter, Grant A.; Williams, Thomas V.
2013-01-01
This study used the 2008 Department of Defense Survey of Health Related Behaviors among Active Duty Military Personnel to determine whether traumatic brain injury (TBI) is associated with past year drinking-related consequences. The study sample included currently-drinking personnel who had a combat deployment in the past year and were home for ≥6 months (N = 3,350). Negative binomial regression models were used to assess the incidence rate ratios of consequences, by TBI-level. Experiencing a TBI with a loss of consciousness >20 minutes was significantly associated with consequences independent of demographics, combat exposure, posttraumatic stress disorder, and binge drinking. The study’s limitations are noted. PMID:23869456
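The incidence-rate-ratio readout from such a model is just the exponentiated coefficients. Below is a hedged statsmodels sketch on synthetic data, with invented variable names and effect sizes; it mirrors the shape of the analysis, not the study's actual data or model specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Hypothetical analogue: counts of drinking-related consequences
# regressed on a TBI indicator plus one covariate (binge drinking)
n = 1000
tbi = rng.integers(0, 2, n)
binge = rng.integers(0, 2, n)
mu = np.exp(-0.5 + 0.4 * tbi + 0.6 * binge)
y = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # overdispersed counts

X = sm.add_constant(np.column_stack([tbi, binge]))
fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

irr = np.exp(fit.params)        # incidence rate ratios
ci = np.exp(fit.conf_int())     # 95% CIs on the IRR scale
print(irr, ci, sep="\n")
```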
ERIC Educational Resources Information Center
Hofer, Scott M.; Flaherty, Brian P.; Hoffman, Lesa
2006-01-01
The effect of time-related mean differences on estimates of association in cross-sectional studies has not been widely recognized in developmental and aging research. Cross-sectional studies of samples varying in age have found moderate to high levels of shared age-related variance among diverse age-related measures. These findings may be…
Ng, Ding-Quan; Liu, Shu-Wei; Lin, Yi-Pin
2018-09-15
In this study, a sampling campaign with a total of nine sampling events investigating lead in drinking water was conducted at 7 sampling locations in an old building with lead pipes in service in part of the building on the National Taiwan University campus. This study aims to assess the effectiveness of four different sampling methods, namely first draw sampling, sequential sampling, random daytime sampling and flush sampling, in lead contamination detection. In 3 out of the 7 sampling locations without lead pipe, lead could not be detected (<1.1 μg/L) in most samples regardless of the sampling methods. On the other hand, in the 4 sampling locations where lead pipes still existed, total lead concentrations >10 μg/L were consistently observed in 3 locations using any of the four sampling methods while the remaining location was identified to be contaminated using sequential sampling. High lead levels were consistently measured by the four sampling methods in the 3 locations in which particulate lead was either predominant or comparable to soluble lead. Compared to first draw and random daytime samplings, although flush sampling had a high tendency to reduce total lead in samples in lead-contaminated sites, the extent of lead reduction was location-dependent and not dependent on flush durations between 5 and 10 min. Overall, first draw sampling and random daytime sampling were reliable and effective in determining lead contamination in this study. Flush sampling could reveal the contamination if the extent is severe but tends to underestimate lead exposure risk. Copyright © 2018 Elsevier B.V. All rights reserved.
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of ∑(d′)², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
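In symbols, with n total samples shared among m display items (the notation is mine, sketching the two models the abstract contrasts):

```latex
% Sample-size model: a pool of n noisy samples is divided among the
% m items, and squared sensitivity scales with samples per item:
d_i'^2 \propto \frac{n}{m}
\quad\Longrightarrow\quad
\sum_{i=1}^{m} d_i'^2 \;=\; \text{constant in } m .

% Attention-weighted variant: the attended item takes a share w > 1/m
% of the pool and the remaining items split the rest:
d_{\mathrm{att}}'^2 \propto w\,n ,
\qquad
d_{\mathrm{other}}'^2 \propto \frac{(1 - w)\,n}{m - 1} .
```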
An examination of sources of sensitivity of consumer surplus estimates in travel cost models.
Blaine, Thomas W; Lichtkoppler, Frank R; Bader, Timothy J; Hartman, Travis J; Lucente, Joseph E
2015-03-15
We examine sensitivity of estimates of recreation demand using the Travel Cost Method (TCM) to four factors. Three of the four have been routinely and widely discussed in the TCM literature: a) Poisson versus negative binomial regression; b) application of the Englin correction to account for endogenous stratification; c) truncation of the data set to eliminate outliers. A fourth issue we address has not been widely modeled: the potential effect on recreation demand of the interaction between income and travel cost. We provide a straightforward comparison of all four factors, analyzing the impact of each on regression parameters and consumer surplus estimates. Truncation has a modest effect on estimates obtained from the Poisson models but a radical effect on the estimates obtained by way of the negative binomial. Inclusion of an income-travel cost interaction term generally produces a more conservative but not a statistically significantly different estimate of consumer surplus in both Poisson and negative binomial models. It also generates broader confidence intervals. Application of truncation, the Englin correction and the income-travel cost interaction produced the most conservative estimates of consumer surplus and eliminated the statistical difference between the Poisson and the negative binomial. Use of the income-travel cost interaction term reveals that for visitors who face relatively low travel costs, the relationship between income and travel demand is negative, while it is positive for those who face high travel costs. This provides an explanation of the ambiguities in the findings regarding the role of income widely observed in the TCM literature. Our results suggest that policies that reduce access to publicly owned resources inordinately impact local low-income recreationists and are contrary to environmental justice. Copyright © 2014 Elsevier Ltd. All rights reserved.
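The income effect the authors describe falls directly out of the semilog count-data TCM; in the usual notation (mine, not the paper's):

```latex
E[\mathrm{trips}] \;=\;
\exp\!\left(\beta_0 + \beta_{tc}\,TC + \beta_{y}\,Y
            + \beta_{tc \times y}\,TC \cdot Y\right)
\quad\Longrightarrow\quad
CS_{\text{per trip}} \;=\; \frac{-1}{\beta_{tc} + \beta_{tc \times y}\,Y} .
```

Once the interaction term is included, the travel-cost response, and with it the consumer surplus estimate, varies with income, which is one way to see why the estimated role of income can differ across visitors facing different travel costs.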
A big data approach to the development of mixed-effects models for seizure count data.
Tharayil, Joseph J; Chiang, Sharon; Moss, Robert; Stern, John M; Theodore, William H; Goldenholz, Daniel M
2017-05-01
Our objective was to develop a generalized linear mixed model for predicting seizure count that is useful in the design and analysis of clinical trials. This model also may benefit the design and interpretation of seizure-recording paradigms. Most existing seizure count models do not include children, and there is currently no consensus regarding the most suitable model that can be applied to children and adults. Therefore, an additional objective was to develop a model that accounts for both adult and pediatric epilepsy. Using data from SeizureTracker.com, a patient-reported seizure diary tool with >1.2 million recorded seizures across 8 years, we evaluated the appropriateness of Poisson, negative binomial, zero-inflated negative binomial, and modified negative binomial models for seizure count data based on minimization of the Bayesian information criterion. Generalized linear mixed-effects models were used to account for demographic and etiologic covariates and for autocorrelation structure. Holdout cross-validation was used to evaluate predictive accuracy in simulating seizure frequencies. For both adults and children, we found that a negative binomial model with autocorrelation over 1 day was optimal. Using holdout cross-validation, the proposed model was found to provide accurate simulation of seizure counts for patients with up to four seizures per day. The optimal model can be used to generate more realistic simulated patient data with very few input parameters. The availability of a parsimonious, realistic virtual patient model can be of great utility in simulations of phase II/III clinical trials, epilepsy monitoring units, outpatient biosensors, and mobile Health (mHealth) applications. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
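A hedged sketch of what such a "virtual patient" generator can look like: negative binomial daily seizure counts whose latent rate carries one-day autocorrelation, here implemented as an AR(1) process on the log-rate with a gamma-Poisson mixture. All parameter values are invented, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(11)

days, base_rate = 90, 0.8
rho, sigma, shape = 0.5, 0.4, 1.5   # AR(1) coefficient, noise sd, NB dispersion

# latent log-rate with one-day autocorrelation
log_rate = np.zeros(days)
for t in range(1, days):
    log_rate[t] = rho * log_rate[t - 1] + rng.normal(0.0, sigma)

rate = base_rate * np.exp(log_rate)
# gamma-Poisson mixture: negative binomial counts given the daily rate
counts = rng.poisson(rng.gamma(shape, rate / shape))
print(counts[:14])                  # two weeks of simulated seizure counts
```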
Silbergleit, Alice K; Cook, Diana; Kienzle, Scott; Boettcher, Erica; Myers, Daniel; Collins, Denise; Peterson, Edward; Silbergleit, Matthew A; Silbergleit, Richard
2018-04-04
Formal agreement studies on interpretation of the videofluoroscopic swallowing study (VFSS) procedure among speech-language pathologists, radiology house officers, and staff radiologists have not been pursued. Each of these professions participates in the procedure, interprets the examination, and writes a separate report on the findings. The aim of this study was to determine reliability of interpretation between and within the disciplines and to determine whether structured training improved reliability. Thirteen speech-language pathologists (SLPs), ten diagnostic radiologists (RADs) and twenty-one diagnostic radiology house officers (HOs) participated in this study. Each group viewed 24 VFSS samples and rated the presence or absence of seven aberrant swallowing features as well as the presence of dysphagia and identification of oral dysphagia, pharyngeal dysphagia, or both. During part two, the groups were provided with a training session on normal and abnormal swallowing, using different VFSS samples from those in part one, followed by re-rating of the original 24 VFSS samples. A generalized estimating equations (GEE) approach with a binomial link function was used to examine each question separately. For each cluster of tests, for example all pairwise comparisons between the three groups in the pretraining period, a Hochberg correction for multiple testing was used to determine significance. A GEE approach with a binomial link function was used to compare the pre-training measure to the post-training measure for each of the three groups of raters, stratified by experience. The primary result revealed that the HO group scored significantly lower than the SLP and RAD groups on identification of the presence of dysphagia (p = 0.008; p = 0.001, respectively), identification of oral phase dysphagia (p = 0.003; p = 0.001, respectively), and identification of both oral and pharyngeal phase dysphagia (p = 0.014, p = 0.001, respectively) pre-training. Post-training, there was no statistically significant difference between the three groups on identification of dysphagia and identification of combined oral and pharyngeal dysphagia. Formal training to identify oropharyngeal dysphagia characteristics appears to improve accuracy of interpretation of the VFSS procedure for radiology house officers. Consideration of including formal training in this area in radiology residency training programs is recommended.
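A minimal statsmodels sketch of a GEE of this shape: binary ratings clustered within raters, a binomial family, and an exchangeable working correlation. The data, group sizes, and effect size are invented stand-ins for the study's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical analogue: binary "dysphagia identified" ratings clustered
# within raters, compared across rater groups (SLP / RAD / HO)
n_raters, n_cases = 44, 24
rater_id = np.repeat(np.arange(n_raters), n_cases)
group = np.repeat(rng.integers(0, 3, n_raters), n_cases)
X = sm.add_constant((group == 2).astype(float))   # HO indicator
y = rng.binomial(1, 0.8 - 0.15 * X[:, 1])         # HOs score lower on average

# GEE with a binomial family and exchangeable within-rater correlation
model = sm.GEE(y, X, groups=rater_id,
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```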
Evaluating Multiple Imputation Models for the Southern Annual Forest Inventory
Gregory A. Reams; Joseph M. McCollum
1999-01-01
The USDA Forest Service's Southern Research Station is implementing an annualized forest survey in thirteen states. The sample design is a systematic sample of five interpenetrating grids (panels), where each panel is measured sequentially. For example, panel one information is collected in year one, and panel five in year five. The area representative and time...
Environmental persistence of the nucleopolyhedrosis virus of the gypsy moth, Lymantria dispar L.
J.D. Podgwaite; Kathleen Stone Shields; R.T. Zerillo; R.B. Bruen
1979-01-01
A bioassay technique was used to estimate the concentrations of infectious gypsy moth nucleopolyhedrosis virus (NPV) that occur naturally in leaf, bark, litter, and soil samples taken from woodland plots in Connecticut and Pennsylvania. These concentrations were then compared to those in samples taken sequentially after treatment of these plots with NPV. Results...
ERIC Educational Resources Information Center
Chan, Wai
2007-01-01
In social science research, an indirect effect occurs when the influence of an antecedent variable on the effect variable is mediated by an intervening variable. To compare indirect effects within a sample or across different samples, structural equation modeling (SEM) can be used if the computer program supports model fitting with nonlinear…
How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation
ERIC Educational Resources Information Center
Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard
2006-01-01
Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…
40 CFR 761.302 - Proportion of the total surface area to sample.
Code of Federal Regulations, 2011 CFR
2011-07-01
... surface into approximately 1 meter square portions and mark the portions so that they are clearly... surfaces contaminated by a single source of PCBs with a uniform concentration, assign each 1 meter square surface a unique sequential number. (i) For three or fewer 1 meter square areas, sample all of the areas...
Teachers' Adoption Level of Student Centered Education Approach
ERIC Educational Resources Information Center
Arseven, Zeynep; Sahin, Seyma; Kiliç, Abdurrahman
2016-01-01
The aim of this study is to identify how far the student-centered education approach is applied in primary, middle and high schools in Düzce. An explanatory design, one type of mixed-methods research, and "sequential mixed methods sampling" were used in the study. 685 teachers constitute the research sample of the quantitative…
The Cerebellar Deficit Hypothesis and Dyslexic Tendencies in a Non-Clinical Sample
ERIC Educational Resources Information Center
Brookes, Rebecca L.; Stirling, John
2005-01-01
In order to assess the relationship between cerebellar deficits and dyslexic tendencies in a non-clinical sample, 27 primary school children aged 8-9 completed a cerebellar soft signs battery and were additionally assessed for reading age, sequential memory, picture arrangement and knowledge of common sequences. An average measure of the soft…
Di, Yanming; Schafer, Daniel W.; Wilhelm, Larry J.; Fox, Samuel E.; Sullivan, Christopher M.; Curzon, Aron D.; Carrington, James C.; Mockler, Todd C.; Chang, Jeff H.
2011-01-01
GENE-counter is a complete Perl-based computational pipeline for analyzing RNA-Sequencing (RNA-Seq) data for differential gene expression. In addition to its use in studying transcriptomes of eukaryotic model organisms, GENE-counter is applicable for prokaryotes and non-model organisms without an available genome reference sequence. For alignments, GENE-counter is configured for CASHX, Bowtie, and BWA, but an end user can use any Sequence Alignment/Map (SAM)-compliant program of preference. To analyze data for differential gene expression, GENE-counter can be run with any one of three statistics packages that are based on variations of the negative binomial distribution. The default method is a new and simple statistical test we developed based on an over-parameterized version of the negative binomial distribution. GENE-counter also includes three different methods for assessing differentially expressed features for enriched gene ontology (GO) terms. Results are transparent and data are systematically stored in a MySQL relational database to facilitate additional analyses as well as quality assessment. We used next generation sequencing to generate a small-scale RNA-Seq dataset derived from the heavily studied defense response of Arabidopsis thaliana and used GENE-counter to process the data. Collectively, the support from analysis of microarrays as well as the observed and substantial overlap in results from each of the three statistics packages demonstrates that GENE-counter is well suited for handling the unique characteristics of small sample sizes and high variability in gene counts. PMID:21998647
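GENE-counter's statistics packages are built on negative binomial models; as a hedged, generic illustration of that family of tests (not GENE-counter's own over-parameterized test), the sketch below fits a negative binomial GLM to one gene's read counts across two conditions, with invented counts and a fixed dispersion.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical read counts for one gene: three control, three treated.
counts = np.array([250, 310, 280, 620, 540, 700])
condition = np.array([0, 0, 0, 1, 1, 1])
X = sm.add_constant(condition)

# alpha is the NB dispersion; real pipelines estimate it per gene.
model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.1))
fit = model.fit()
print(fit.params)       # intercept and log fold-change
print(fit.pvalues[1])   # p-value for the condition effect
```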
Matong, Joseph M; Nyaba, Luthando; Nomngongo, Philiswa N
2016-07-01
The main objectives of this study were to determine the concentration of fourteen trace elements and to investigate their distribution as well as contamination levels in selected agricultural soils. An ultrasonic-assisted sequential extraction procedure derived from the three-step BCR method was used for fractionation of trace elements. The total concentration of trace elements was obtained by total digestion of the soil samples with aqua regia. The results of the extractable fractions revealed that most of the target trace elements can be transferred to humans through the food chain, thus posing serious human health risks. Enrichment factor (EF), geo-accumulation index (Igeo), contamination factor (CF), risk assessment code (RAC) and individual contamination factors (ICF) were used to assess the environmental impacts of trace metals in the soil samples. The EF revealed that Cd was enriched by 3.1-7.2 (except in Soil 1). The Igeo results showed that the soils in the study area were moderately contaminated with Fe, and heavily to extremely polluted with Cd. The soil samples from the unplanted field were found to have the highest contamination factor for Cd and the lowest for Pb. Soil 3 showed a high risk for Tl and Cd, with RAC values greater than or equal to 50%. In addition, Fe, Ni, Cu, V, As, Mo (except Soil 2), Sb and Pb posed low environmental risk. The modified BCR sequential extraction method provided more information about the mobility and environmental implications of the studied trace elements in the study area. Copyright © 2016 Elsevier Ltd. All rights reserved.
Davletbaeva, Polina; Chocholouš, Petr; Bulatov, Andrey; Šatínský, Dalibor; Solich, Petr
2017-09-05
Sequential Injection Chromatography (SIC) evolved from fast, automated, non-separation Sequential Injection Analysis (SIA) into a chromatographic separation method for multi-analyte analysis. However, the measurement speed (sample throughput) is significantly reduced by the chromatographic step. In this paper, a sub-1 min separation using a medium-polar cyano monolithic column (5 mm × 4.6 mm) resulted in a fast and green separation with sample throughput comparable to non-separation flow methods. The separation of three synthetic water-soluble dyes (sunset yellow FCF, carmoisine and green S) was performed in gradient elution mode (0.02% ammonium acetate, pH 6.7 - water) with a flow rate of 3.0 mL min⁻¹, corresponding to a sample throughput of 30 h⁻¹. Spectrophotometric detection wavelengths were set to 480, 516 and 630 nm with a 10 Hz data collection rate. The performance of the separation was described and discussed (peak capacities 3.48-7.67, peak symmetries 1.72-1.84, and resolutions 1.42-1.88). The method was characterized by validation parameters: LODs of 0.15-0.35 mg L⁻¹, LOQs of 0.50-1.25 mg L⁻¹, calibration ranges of 0.50-150.00 mg L⁻¹ (r > 0.998), and repeatability at 10.0 mg L⁻¹ of RSD ≤ 0.98% (n = 6). The method was used for determination of the dyes in a "forest berries" colored pharmaceutical cough-cold formulation. The sample matrix (pharmaceuticals and excipients) did not interfere with the Vis determination because it showed no retention on the separation column and was colorless. The results proved the concept of a fast and green chromatography approach using a very short medium-polar monolithic column in SIC. Copyright © 2017 Elsevier B.V. All rights reserved.
Meyer, Marjolaine D; Terry, Leon A
2008-08-27
Methods devised for oil extraction from avocado (Persea americana Mill.) mesocarp (e.g., Soxhlet) are usually lengthy and require operation at high temperature. Moreover, methods for extracting sugars from avocado tissue (e.g., 80% ethanol, v/v) do not allow for lipids to be easily measured from the same sample. This study describes a new simple method that enabled sequential extraction and subsequent quantification of both fatty acids and sugars from the same avocado mesocarp tissue sample. Freeze-dried mesocarp samples of avocado cv. Hass fruit of different ripening stages were extracted by homogenization with hexane and the oil extracts quantified for fatty acid composition by GC. The resulting filter residues were readily usable for sugar extraction with methanol (62.5%, v/v). For comparison, oil was also extracted using the standard Soxhlet technique and the resulting thimble residue extracted for sugars as before. An additional experiment was carried out whereby filter residues were also extracted using ethanol. Average oil yield using the Soxhlet technique was significantly (P < 0.05) higher than that obtained by homogenization with hexane, although the difference remained very slight, and fatty acid profiles of the oil extracts following both methods were very similar. Oil recovery improved with increasing ripeness of the fruit with minor differences observed in the fatty acid composition during postharvest ripening. After lipid removal, methanolic extraction was superior in recovering sucrose and perseitol as compared to 80% ethanol (v/v), whereas mannoheptulose recovery was not affected by solvent used. The method presented has the benefits of shorter extraction time, lower extraction temperature, and reduced amount of solvent and can be used for sequential extraction of fatty acids and sugars from the same sample.
Sampling Error in Relation to Cyst Nematode Population Density Estimation in Small Field Plots.
Župunski, Vesna; Jevtić, Radivoje; Jokić, Vesna Spasić; Župunski, Ljubica; Lalošević, Mirjana; Ćirić, Mihajlo; Ćurčić, Živko
2017-06-01
Cyst nematodes are serious plant-parasitic pests which can cause severe yield losses and extensive damage. Since there is still very little information about the error of population density estimation in small field plots, this study contributes to the broad issue of population density assessment. It was shown that there was no significant difference between cyst counts of five or seven bulk samples taken per 1-m² plot if the average cyst count per examined plot exceeded 75 cysts per 100 g of soil. Goodness of fit of the data to probability distributions, tested with the χ² test, confirmed a negative binomial distribution of cyst counts for 21 out of 23 plots. The recommended measure of sampling precision of 17%, expressed through the coefficient of variation (cv), was achieved if plots of 1 m² contaminated with more than 90 cysts per 100 g of soil were sampled with 10-core bulk samples taken in five repetitions. If plots were contaminated with less than 75 cysts per 100 g of soil, 10-core bulk samples taken in seven repetitions gave a cv higher than 23%. This study indicates that more attention should be paid to the estimation of sampling error in experimental field plots to ensure more reliable estimation of the population density of cyst nematodes.
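The two quantities the abstract leans on, the coefficient of variation of repeated bulk samples and the negative binomial's aggregation parameter, are quick to compute; a minimal sketch with invented counts:

```python
import numpy as np

# Hypothetical cyst counts (per 100 g soil) from five bulk samples of one plot.
counts = np.array([88, 102, 75, 130, 95])

# Sampling precision of the plot estimate, as a percentage.
cv = 100 * counts.std(ddof=1) / counts.mean()

# Method-of-moments negative binomial fit: variance exceeding the mean
# indicates aggregated (clustered) cyst counts; k is the NB parameter.
m, v = counts.mean(), counts.var(ddof=1)
k = m**2 / (v - m) if v > m else float("inf")
print(f"cv = {cv:.1f}%, mean = {m:.1f}, k = {k:.2f}")
```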
Particle filters, a quasi-Monte-Carlo-solution for segmentation of coronaries.
Florin, Charles; Paragios, Nikos; Williams, Jim
2005-01-01
In this paper we propose a Particle Filter-based approach for the segmentation of coronary arteries. To this end, successive planes of the vessel are modeled as unknown states of a sequential process. Such states consist of the orientation, position, shape model and appearance (in statistical terms) of the vessel that are recovered in an incremental fashion, using a sequential Bayesian filter (Particle Filter). In order to account for bifurcations and branchings, we consider a Monte Carlo sampling rule that propagates in parallel multiple hypotheses. Promising results on the segmentation of coronary arteries demonstrate the potential of the proposed approach.
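For readers unfamiliar with the machinery, a minimal bootstrap particle filter on a one-dimensional state (the paper's states additionally carry orientation, shape and appearance) looks roughly like this; all values are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
particles = rng.normal(0.0, 1.0, N)        # initial state hypotheses
weights = np.full(N, 1.0 / N)

def step(particles, weights, z, proc_std=0.2, obs_std=0.5):
    # Propagate hypotheses through the state-transition model.
    particles = particles + rng.normal(0.0, proc_std, particles.size)
    # Re-weight by the observation likelihood (appearance model).
    weights = weights * np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    weights /= weights.sum()
    # Resample; multiple modes (e.g. bifurcating branches) survive as
    # distinct clusters of particles propagated in parallel.
    idx = rng.choice(particles.size, particles.size, p=weights)
    return particles[idx], np.full(particles.size, 1.0 / particles.size)

for z in [0.1, 0.3, 0.2, 0.5]:             # hypothetical observations
    particles, weights = step(particles, weights, z)
print(particles.mean())                     # state estimate after 4 steps
```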
Paoletti, Claudia; Esbensen, Kim H
2015-01-01
Material heterogeneity influences the effectiveness of sampling procedures. Most sampling guidelines used for assessment of food and/or feed commodities are based on classical statistical distribution requirements (the normal, binomial, and Poisson distributions) and almost universally rely on the assumption of randomness. However, this is unrealistic. The scientific food and feed community recognizes a strong preponderance of non-random distribution within commodity lots, which should be a more realistic prerequisite for the definition of effective sampling protocols. Nevertheless, these heterogeneity issues are overlooked, as the prime focus is often placed only on financial, time, equipment, and personnel constraints instead of mandating acquisition of documented representative samples under realistic heterogeneity conditions. This study shows how the principles promulgated in the Theory of Sampling (TOS) and practically tested over 60 years provide an effective framework for dealing with the complete set of adverse aspects of both compositional and distributional heterogeneity (material sampling errors), as well as with the errors incurred by the sampling process itself. The results of an empirical European Union study on genetically modified soybean heterogeneity, Kernel Lot Distribution Assessment, are summarized, as they have a strong bearing on the issue of proper sampling protocol development. TOS principles apply universally in the food and feed realm and must therefore be considered the only basis for development of valid sampling protocols free from distributional constraints.
Liu, Yang; Luo, Zhi-Qiang; Lv, Bei-Ran; Zhao, Hai-Yu; Dong, Ling
2016-04-01
The multiple components in Chinese herbal medicines (CHMs) undergo complex absorption and metabolism before entering the blood system. Previous studies have often focused on the components found in blood. However, the dynamic and sequential absorption and metabolism process following multi-component oral administration has not been studied. In this study, the in situ closed-loop method combined with LC-MS techniques was employed to study the sequential process of Chuanxiong Rhizoma decoction (RCD). A total of 14 major components were identified in RCD. Among them, ferulic acid, senkyunolide J, senkyunolide I, senkyunolide F, senkyunolide G, and butylidenephthalide were detected in all of the samples, indicating that these six components could be absorbed into the blood in their prototype forms. Butylphthalide, E-ligustilide, Z-ligustilide, cnidilide, senkyunolide A and senkyunolide Q were not detected in any of the samples, suggesting that these six components either are not absorbed or are metabolized before entering the hepatic portal vein. Senkyunolide H could be metabolized by the liver, while senkyunolide M could be metabolized by both the liver and the intestinal flora. This study clearly demonstrated the changes in the absorption and metabolism process following multi-component oral administration of RCD, converting the static picture of multi-component absorption into a comprehensive dynamic and continuous absorption and metabolism process. Copyright© by the Chinese Pharmaceutical Association.
Simultaneous capture and sequential detection of two malarial biomarkers on magnetic microparticles.
Markwalter, Christine F; Ricks, Keersten M; Bitting, Anna L; Mudenda, Lwiindi; Wright, David W
2016-12-01
We have developed a rapid magnetic microparticle-based detection strategy for the malarial biomarkers Plasmodium lactate dehydrogenase (pLDH) and Plasmodium falciparum histidine-rich protein II (PfHRPII). In this assay, magnetic particles functionalized with antibodies specific for pLDH and PfHRPII, as well as detection antibodies with distinct enzymes for each biomarker, are added to parasitized lysed blood samples. Sandwich complexes for pLDH and PfHRPII form on the surface of the magnetic beads, which are washed and sequentially re-suspended in detection enzyme substrate for each antigen. The developed simultaneous capture and sequential detection (SCSD) assay detects both biomarkers in samples as low as 2.0 parasites/µL, an order of magnitude below commercially available ELISA kits, has a total incubation time of 35 min, and was found to be reproducible between users over time. This assay provides a simple and efficient alternative to traditional 96-well plate ELISAs, which take 5-8 h to complete and are limited to one analyte. Further, the modularity of the magnetic bead-based SCSD ELISA format could serve as a platform for application to other diseases for which multi-biomarker detection is advantageous. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Use of negative binomial distribution to describe the presence of Anisakis in Thyrsites atun.
Peña-Rehbein, Patricio; De los Ríos-Escalante, Patricio
2012-01-01
Nematodes of the genus Anisakis have marine fishes as intermediate hosts. One of these hosts is Thyrsites atun, an important fishery resource in Chile between 38 and 41° S. This paper describes the frequency and number of Anisakis nematodes in the internal organs of Thyrsites atun. An analysis based on spatial distribution models showed that the parasites tend to be clustered. The variation in the number of parasites per host could be described by the negative binomial distribution. The maximum observed number of parasites was nine parasites per host. The environmental and zoonotic aspects of the study are also discussed.
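A hedged sketch of how one checks that per-host parasite counts follow a negative binomial: moment-match the distribution, then compare observed and expected frequencies. The counts below are invented (capped at nine per host, as in the abstract):

```python
import numpy as np
from scipy import stats

# Hypothetical Anisakis counts per host for 80 fish.
counts = np.array([0]*40 + [1]*18 + [2]*10 + [3]*6 + [4]*3 + [6]*2 + [9]*1)

m, v = counts.mean(), counts.var(ddof=1)
k = m**2 / (v - m)        # aggregation parameter; v > m means clustered
p = k / (k + m)           # scipy's (n, p) parameterization of the NB

observed = np.bincount(counts, minlength=10)
expected = stats.nbinom.pmf(np.arange(10), k, p) * counts.size
print(observed)
print(np.round(expected, 1))
```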
Kadam, Shantanu; Vanka, Kumar
2013-02-15
Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. The methods which make use of binomial variables, in place of Poisson random numbers, have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), has been discussed. The new methods endeavor to solve the problem of negative numbers, by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations, in resolving the problem of negative populations. Copyright © 2012 Wiley Periodicals, Inc.
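The core trick the binomial methods exploit is easy to state: for a first-order channel A -> B, a Poisson leap draws an unbounded number of firings, while a binomial leap is capped by the available population. A minimal sketch (illustrating the general idea only, not the authors' RRA-based algorithms):

```python
import numpy as np

rng = np.random.default_rng(1)
A, c, tau = 5, 2.0, 1.0    # molecules of A, rate constant, leap length

# Poisson leap: mean c*A*tau firings, unbounded above, so A can go negative.
poisson_firings = rng.poisson(c * A * tau)

# Binomial leap: each molecule reacts within tau with probability
# 1 - exp(-c*tau); the draw can never exceed the A molecules present.
binomial_firings = rng.binomial(A, 1.0 - np.exp(-c * tau))

print(A - poisson_firings, A - binomial_firings)  # first can be negative
```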
Dorazio, Robert M; Martin, Julien; Edwards, Holly H
2013-07-01
The class of N-mixture models allows abundance to be estimated from repeated, point count surveys while adjusting for imperfect detection of individuals. We developed an extension of N-mixture models to account for two commonly observed phenomena in point count surveys: rarity and lack of independence induced by unmeasurable sources of variation in the detectability of individuals. Rarity increases the number of locations with zero detections in excess of those expected under simple models of abundance (e.g., Poisson or negative binomial). Correlated behavior of individuals and other phenomena, though difficult to measure, increases the variation in detection probabilities among surveys. Our extension of N-mixture models includes a hurdle model of abundance and a beta-binomial model of detectability that accounts for additional (extra-binomial) sources of variation in detections among surveys. As an illustration, we fit this model to repeated point counts of the West Indian manatee, which was observed in a pilot study using aerial surveys. Our extension of N-mixture models provides increased flexibility. The effects of different sets of covariates may be estimated for the probability of occurrence of a species, for its mean abundance at occupied locations, and for its detectability.
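The extra-binomial detection model is the distinctive ingredient here; a hedged sketch of how a beta-binomial count likelihood compares with a plain binomial, with invented abundance and detection parameters:

```python
from scipy import stats

def beta_binom_pmf(y, N, p, rho):
    # Mean detection p and overdispersion rho mapped to beta shape parameters.
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    return stats.betabinom.pmf(y, N, a, b)

# Probability of counting y = 3 of N = 10 individuals at one survey.
print(beta_binom_pmf(3, N=10, p=0.4, rho=0.2))   # extra-binomial variation
print(stats.binom.pmf(3, 10, 0.4))               # independent detections
```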
Rogers, Angela; Nesbit, M. Andrew; Hannan, Fadil M.; Howles, Sarah A.; Gorvin, Caroline M.; Cranston, Treena; Allgrove, Jeremy; Bevan, John S.; Bano, Gul; Brain, Caroline; Datta, Vipan; Grossman, Ashley B.; Hodgson, Shirley V.; Izatt, Louise; Millar-Jones, Lynne; Pearce, Simon H.; Robertson, Lisa; Selby, Peter L.; Shine, Brian; Snape, Katie; Warner, Justin
2014-01-01
Context: Autosomal dominant hypocalcemia (ADH) types 1 and 2 are due to calcium-sensing receptor (CASR) and G-protein subunit-α11 (GNA11) gain-of-function mutations, respectively, whereas CASR and GNA11 loss-of-function mutations result in familial hypocalciuric hypercalcemia (FHH) types 1 and 2, respectively. Loss-of-function mutations of the adaptor protein-2 sigma subunit (AP2σ2), encoded by AP2S1, cause FHH3, and we therefore sought gain-of-function AP2S1 mutations that may cause an additional form of ADH, which we designated ADH3. Objective: The objective of the study was to investigate the hypothesis that gain-of-function AP2S1 mutations may cause ADH3. Design: The sample size required for the detection of at least one mutation with a greater than 95% likelihood was determined by binomial probability analysis. Nineteen patients (including six familial cases) with hypocalcemia in association with low or normal serum PTH concentrations, consistent with ADH, but who did not have CASR or GNA11 mutations, were ascertained. Leukocyte DNA was used for sequence and copy number variation analysis of AP2S1. Results: Binomial probability analysis, using the assumption that AP2S1 mutations would occur in hypocalcemic patients at a prevalence of 20%, which is observed in FHH patients without CASR or GNA11 mutations, indicated that the likelihood of detecting at least one AP2S1 mutation was greater than 95% and greater than 98% in sample sizes of 14 and 19 hypocalcemic patients, respectively. AP2S1 mutations and copy number variations were not detected in the 19 hypocalcemic patients. Conclusion: The absence of AP2S1 abnormalities in hypocalcemic patients suggests that ADH3 may not occur or otherwise represents a rare hypocalcemic disorder. PMID:24708097
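The sample-size statement follows directly from the binomial probability of seeing no mutations in n patients at the assumed 20% prevalence; as a quick check:

```latex
P(\text{at least one mutation}) = 1 - (1 - 0.2)^{n}, \qquad
1 - 0.8^{14} \approx 0.956 > 0.95, \qquad
1 - 0.8^{19} \approx 0.986 > 0.98.
```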
The Predisposing Factors between Dental Caries and Deviations from Normal Weight.
Chopra, Amandeep; Rao, Nanak Chand; Gupta, Nidhi; Vashisth, Shelja; Lakhanpal, Manav
2015-04-01
Dental caries and deviations from normal weight are two conditions which share several broadly predisposing factors, so it is important to understand any relationship between dental state and body weight if either is to be managed appropriately. The study was done to find the correlation between body mass index (BMI), diet, and dental caries among 12-15-year-old schoolgoing children in Panchkula District. A multistage sample of 12-15-year-old school children (n = 810) in Panchkula district, Haryana was considered. Child demographic details and diet history for 5 days were recorded. Data regarding dental caries status were collected using the World Health Organization (1997) format. BMI was calculated and categorized according to the World Health Organization classification system for BMI. The data were subjected to statistical analysis using the chi-square test and binomial regression in the Statistical Package for Social Sciences (SPSS) 20.0. The mean Decayed Missing Filled Teeth (DMFT) score was found to be 1.72, with decayed, missing, and filled teeth at 1.22, 0.04, and 0.44, respectively. When the sample was assessed by type of diet, vegetarians had a higher mean DMFT (1.72) than children with a mixed diet. Overweight children had the highest DMFT (3.21), followed by underweight (2.31) and obese children (2.23). Binomial regression revealed that females were at 1.293 times the risk of developing caries compared to males. Fair and poor Simplified Oral Hygiene Index (OHI-S) scores carried 3.920 and 4.297 times the risk of developing caries compared to good oral hygiene, respectively. Children of upper-high socioeconomic status (SES) were at the greatest risk of developing caries. Underweight, overweight, and obese children were at 2.7, 2.5, and 3 times the risk of developing caries compared to children with normal BMI, respectively. Dental caries and deviations from normal weight share several broadly predisposing factors such as diet, SES, lifestyle, and other environmental factors.
Developing a Systematic Corrosion Control Evaluation Approach in Flint
This presentation covers the projects recommended by the Flint Safe Drinking Water Task Force for corrosion control assessment in Flint, focusing on the sequential sampling project, the pipe rigs, and the pipe scale analyses.
A common mechanism underlies changes of mind about decisions and confidence.
van den Berg, Ronald; Anandalingam, Kavitha; Zylberberg, Ariel; Kiani, Roozbeh; Shadlen, Michael N; Wolpert, Daniel M
2016-02-01
Decisions are accompanied by a degree of confidence that a selected option is correct. A sequential sampling framework explains the speed and accuracy of decisions and extends naturally to the confidence that the decision rendered is likely to be correct. However, discrepancies between confidence and accuracy suggest that confidence might be supported by mechanisms dissociated from the decision process. Here we show that this discrepancy can arise naturally because of simple processing delays. When participants were asked to report choice and confidence simultaneously, their confidence, reaction time and a perceptual decision about motion were explained by bounded evidence accumulation. However, we also observed revisions of the initial choice and/or confidence. These changes of mind were explained by a continuation of the mechanism that led to the initial choice. Our findings extend the sequential sampling framework to vacillation about confidence and invite caution in interpreting dissociations between confidence and accuracy.
Bramley, Kyle; Pisani, Margaret A; Murphy, Terrence E; Araujo, Katy L; Homer, Robert J; Puchalski, Jonathan T
2016-05-01
Endobronchial ultrasound (EBUS)-guided transbronchial needle aspiration (TBNA) is important in the evaluation of thoracic lymphadenopathy. Reliably providing excellent diagnostic yield for malignancy, its diagnosis of sarcoidosis is inconsistent. Furthermore, TBNA may not suffice when larger "core biopsy" samples of malignant tissue are required. The primary objective of this study was to determine if the sequential use of TBNA and a novel technique called cautery-assisted transbronchial forceps biopsy (ca-TBFB) was safe. Secondary outcomes included sensitivity and successful acquisition of tissue. The study prospectively enrolled 50 unselected patients undergoing convex-probe EBUS. All lymph nodes exceeding 1 cm were sequentially biopsied under EBUS guidance using TBNA and ca-TBFB. Safety and sensitivity were assessed at the nodal level for 111 nodes. Results of each technique were also reported for each patient. There were no significant adverse events. In nodes determined to be malignant, TBNA provided higher sensitivity (100%) than ca-TBFB (78%). However, among nodes with granulomatous inflammation, ca-TBFB exhibited higher sensitivity (90%) than TBNA (33%). On the one hand, for analysis based on patients rather than nodes, 6 of the 31 patients with malignancy would have been missed or understaged if the diagnosis were based on samples obtained by ca-TBFB. On the other hand, 3 of 8 patients with sarcoidosis would have been missed if analysis were based only on TBNA samples. In some patients, only ca-TBFB acquired sufficient tissue for the core samples needed in clinical trials of malignancy. The sequential use of TBNA and ca-TBFB appears to be safe. The larger samples obtained from ca-TBFB increased its sensitivity to detect granulomatous disease and provided adequate specimens for clinical trials of malignancy when specimens from needle biopsies were insufficient. For thoracic surgeons and advanced bronchoscopists, we advocate ca-TBFB as an alternative to TBNA in select clinical scenarios. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Elban, Mehmet
2017-01-01
The purpose of this research is to evaluate the teaching and educational activities in the civilization history lesson. The model of the research is the exploratory sequential design from mixed research patterns. The appropriate sampling method was used in the research. The qualitative data of the research were collected from 26 students through a…
Jose F. Negron; Willis C. Schaupp; Erik Johnson
2000-01-01
The Douglas-fir beetle, Dendroctonus pseudotsugae Hopkins, attacks Douglas-fir, Pseudotsuga menziesii (Mirb.) Franco (Pinaceae), throughout western North America. Periodic outbreaks cause increased mortality of its host. Land managers and forest health specialists often need to determine population trends of this insect. Bark samples were obtained from 326 trees...
XANES Spectroscopic Analysis of Phosphorus Speciation in Alum-Amended Poultry Litter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seiter,J.; Staats-Borda, K.; Ginder-Vogel, M.
2008-01-01
Aluminum sulfate (alum; Al2(SO4)3·14H2O) is used as a chemical treatment of poultry litter to reduce the solubility and release of phosphate, thereby minimizing the impacts on adjacent aquatic ecosystems when poultry litter is land applied as a crop fertilizer. The objective of this study was to determine, through the use of X-ray absorption near edge structure (XANES) spectroscopy and sequential extraction, how alum amendments alter P distribution and solid-state speciation within the poultry litter system. Our results indicate that traditional sequential fractionation procedures may not account for variability in P speciation in heterogeneous animal manures. Analysis shows that NaOH-extracted P in alum-amended litters is predominantly organic (~80%), whereas in the control samples, >60% of NaOH-extracted P was inorganic P. Linear least squares fitting (LLSF) analysis of spectra collected of sequentially extracted litters showed that the P is present in inorganic (P sorbed on Al oxides, calcium phosphates) and organic forms (phytic acid, polyphosphates, and monoesters) in alum- and non-alum-amended poultry litter. When determining land application rates of poultry litter, all of these compounds must be considered, especially organic P. Results of the sequential extractions in conjunction with LLSF suggest that no P species is completely removed by a single extractant. Rather, there is a continuum of removal as extractant strength increases. Overall, alum-amended litters exhibited higher proportions of Al-bound P species and phytic acid, whereas untreated samples contained Ca-P minerals and organic P compounds. This study provides in situ information about P speciation in the poultry litter solid and about P availability in alum- and non-alum-treated poultry litter that will dictate P losses to ground and surface water systems.
NASA Technical Reports Server (NTRS)
Chapman, G. M. (Principal Investigator); Carnes, J. G.
1981-01-01
Several techniques which use clusters generated by a new clustering algorithm, CLASSY, are proposed as alternatives to random sampling to obtain greater precision in crop proportion estimation: (1) the proportional allocation/relative count estimator (PA/RCE), which uses proportional allocation of dots to clusters on the basis of cluster size and a relative-count cluster-level estimate; (2) the proportional allocation/Bayes estimator (PA/BE), which uses proportional allocation of dots to clusters and a Bayesian cluster-level estimate; and (3) the Bayes sequential allocation/Bayesian estimator (BSA/BE), which uses sequential allocation of dots to clusters and a Bayesian cluster-level estimate. Clustering is an effective method for making proportion estimates. It is estimated that, to obtain the same precision with random sampling as obtained by the proportional sampling of 50 dots with an unbiased estimator, samples of 85 or 166 would need to be taken if dot sets with AI labels (integrated procedure) or ground truth labels, respectively, were input. Dot reallocation provides dot sets that are unbiased. It is recommended that these proportion estimation techniques be maintained, particularly the PA/BE, because it provides the greatest precision.
NASA Astrophysics Data System (ADS)
Menegário, Amauri A.; Giné, Maria Fernanda
2000-04-01
A synchronised flow system with hydride generation coupled to ICP-MS is proposed for the sequential determination of As and Se in natural waters and plant digests. The alternated mixing of the sample solution with thiourea or HCl for the determination of As or Se under optimized conditions was achieved using a flow commutator before the reaction with NaBH₄. The on-line addition of thiourea promoted the quantitative reduction of As(V) to As(III), thus enhancing sensitivity and precision. The selenium pre-reduction from Se(VI) to Se(IV) was produced by heating the sample with HCl, and the hydride generation was performed in 4 mol l⁻¹ HCl, thus avoiding interference from thiourea. The system allowed the analysis of 20 samples h⁻¹ with LOD values of 0.02 μg l⁻¹ As and 0.03 μg l⁻¹ Se. Results were in agreement with the certified values at the 95% confidence level for reference waters from the Canadian National Water Research Institute and plant samples from the National Institute of Standards and Technology (NIST).
Kim, C.S.; Bloom, N.S.; Rytuba, J.J.; Brown, Gordon E.
2003-01-01
Determining the chemical speciation of mercury in contaminated mining and industrial environments is essential for predicting its solubility, transport behavior, and potential bioavailability, as well as for designing effective remediation strategies. In this study, two techniques for determining Hg speciation, X-ray absorption fine structure (XAFS) spectroscopy and sequential chemical extractions (SCE), are independently applied to a set of samples with Hg concentrations ranging from 132 to 7539 mg/kg to determine whether the two techniques provide comparable Hg speciation results. Generally, the proportions of insoluble HgS (cinnabar, metacinnabar) and HgSe identified by XAFS correlate well with the proportion of Hg removed in the aqua regia extraction demonstrated to remove HgS and HgSe. Statistically significant (>10%) differences are observed, however, in samples containing more soluble Hg-containing phases (HgCl2, HgO, Hg3S2O4). Such differences may be related to matrix, particle size, or crystallinity effects, which could affect the apparent solubility of the Hg phases present. In more highly concentrated samples, microscopy techniques can help characterize the Hg-bearing species in complex multiphase natural samples.
Differential expression analysis for RNAseq using Poisson mixed models
Sun, Shiquan; Hood, Michelle; Scott, Laura; Peng, Qinke; Mukherjee, Sayan; Tung, Jenny
2017-01-01
Identifying differentially expressed (DE) genes from RNA sequencing (RNAseq) studies is among the most common analyses in genomics. However, RNAseq DE analysis presents several statistical and computational challenges, including over-dispersed read counts and, in some settings, sample non-independence. Previous count-based methods rely on simple hierarchical Poisson models (e.g. negative binomial) to model independent over-dispersion, but do not account for sample non-independence due to relatedness, population structure and/or hidden confounders. Here, we present a Poisson mixed model with two random effects terms that account for both independent over-dispersion and sample non-independence. We also develop a scalable sampling-based inference algorithm using a latent variable representation of the Poisson distribution. With simulations, we show that our method properly controls for type I error and is generally more powerful than other widely used approaches, except in small samples (n < 15) with other unfavorable properties (e.g. small effect sizes). We also apply our method to three real datasets that contain related individuals, population stratification or hidden confounders. Our results show that our method increases power in all three datasets compared to other approaches, though the power gain is smallest in the smallest sample (n = 6). Our method is implemented in MACAU, freely available at www.xzlab.org/software.html. PMID:28369632
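The modeling idea, a Poisson likelihood with random effects absorbing over-dispersion, can be sketched with statsmodels' variational-Bayes mixed GLM; this is a hedged toy version (it captures only the independent over-dispersion term, not MACAU's genetic relatedness effect), and all data and names are invented:

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

rng = np.random.default_rng(2)
n = 30
df = pd.DataFrame({
    "count": rng.poisson(rng.gamma(2.0, 2.0, n)),   # over-dispersed counts
    "condition": rng.integers(0, 2, n),
    "sample_id": [str(i) for i in range(n)],
})

# Per-sample random intercept soaks up extra-Poisson variation.
model = PoissonBayesMixedGLM.from_formula(
    "count ~ condition", {"sample": "0 + C(sample_id)"}, df)
fit = model.fit_vb()
print(fit.summary())
```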
MSeq-CNV: accurate detection of Copy Number Variation from Sequencing of Multiple samples.
Malekpour, Seyed Amir; Pezeshk, Hamid; Sadeghi, Mehdi
2018-03-05
Currently, a few tools are capable of detecting genome-wide Copy Number Variations (CNVs) based on sequencing of multiple samples. Although aberrations in mate-pair insertion sizes provide additional hints for CNV detection based on multiple samples, the majority of current tools rely only on the depth of coverage. Here, we propose a new algorithm (MSeq-CNV) for detecting common CNVs across multiple samples. MSeq-CNV applies a mixture density for modeling aberrations in depth of coverage and abnormalities in mate-pair insertion sizes. Each component in this mixture density applies a binomial distribution for modeling the number of mate pairs with an aberration in insertion size and a Poisson distribution for emitting the read counts at each genomic position. MSeq-CNV is applied to simulated data and also to real data of six HapMap individuals with high-coverage sequencing in the 1000 Genomes Project. These individuals include a CEU trio of European ancestry and a YRI trio of Nigerian ethnicity. The ancestry of these individuals is studied by clustering the identified CNVs. MSeq-CNV is also applied for detecting CNVs in two samples with low-coverage sequencing in the 1000 Genomes Project and six samples from the Simons Genome Diversity Project.
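A hedged sketch of the per-position mixture likelihood described above: each copy-number state scores read depth with a Poisson and the number of insertion-size-aberrant mate pairs with a binomial. All parameters, weights and state labels are invented for illustration; this is not MSeq-CNV's code.

```python
import numpy as np
from scipy import stats

def component_loglik(depth, aberrant, total_pairs, lam, q):
    # lam: expected read count under this copy-number state;
    # q: probability that a mate pair shows an aberrant insert size.
    return (stats.poisson.logpmf(depth, lam)
            + stats.binom.logpmf(aberrant, total_pairs, q))

states = [  # (label, lam, q, mixture weight) - hypothetical values
    ("deletion", 15, 0.6, 0.1),
    ("normal", 30, 0.05, 0.8),
    ("duplication", 45, 0.4, 0.1),
]
depth, aberrant, total = 18, 12, 20
post = np.array([np.log(w) + component_loglik(depth, aberrant, total, lam, q)
                 for _, lam, q, w in states])
post = np.exp(post - post.max())
post /= post.sum()
print(dict(zip([s[0] for s in states], post.round(3))))
```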
Hee, Siew Wan; Parsons, Nicholas; Stallard, Nigel
2018-03-01
The motivation for the work in this article is the setting in which a number of treatments are available for evaluation in phase II clinical trials and where it may be infeasible to try them concurrently because the intended population is small. This paper introduces an extension of previous work on decision-theoretic designs for a series of phase II trials. The program encompasses a series of sequential phase II trials with interim decision making and a single two-arm phase III trial. The design is based on a hybrid approach where the final analysis of the phase III data is based on a classical frequentist hypothesis test, whereas the trials are designed using a Bayesian decision-theoretic approach in which the unknown treatment effect is assumed to follow a known prior distribution. In addition, as treatments are intended for the same population it is not unrealistic to consider treatment effects to be correlated. Thus, the prior distribution will reflect this. Data from a randomized trial of severe arthritis of the hip are used to test the application of the design. We show that the design on average requires fewer patients in phase II than when the correlation is ignored. Correspondingly, the time required to recommend an efficacious treatment for phase III is quicker. © 2017 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Sahlstedt, Elina; Arppe, Laura
2017-04-01
The stable isotope composition of bones, analysed either from the mineral phase (hydroxyapatite) or from the organic phase (mainly collagen), carries important climatological and ecological information and is therefore widely used in paleontological and archaeological research. For stable isotope analysis, both phases, hydroxyapatite and collagen, have more or less well-established separation and analytical techniques. Recent developments in IRMS and wet-chemical extraction methods have facilitated the analysis of very small bone fractions (500 μg or less of starting material) for the phosphate oxygen isotope composition. However, the uniqueness and (pre)historical value of each archaeological and paleontological find leave precious little material available for stable isotope analyses, encouraging further development of microanalytical methods. Here we present the first results in developing extraction methods that combine collagen C- and N-isotope analyses with phosphate O-isotope analyses on a single bone sample fraction. We tested sequential extraction starting with dilute acid demineralization and collection of both the collagen and phosphate fractions, followed by a further purification step with H2O2 (phosphate fraction). First results show that bone sample separates as small as 2 mg may be analysed for their δ15N, δ13C and δ18O-PO4 values. The method may be incorporated into detailed investigation of sequentially developing skeletal material such as teeth, potentially allowing investigation of interannual variability in climatological/environmental signals or of the early life history of an individual.
Vuckovic, Anita; Kwantes, Peter J; Neal, Andrew
2013-09-01
Research has identified a wide range of factors that influence performance in relative judgment tasks. However, the findings from this research have been inconsistent. Studies have varied with respect to the identification of causal variables and the perceptual and decision-making mechanisms underlying performance. Drawing on the ecological rationality approach, we present a theory of the judgment and decision-making processes involved in a relative judgment task that explains how people judge a stimulus and adapt their decision process to accommodate their own uncertainty associated with those judgments. Undergraduate participants performed a simulated air traffic control conflict detection task. Across two experiments, we systematically manipulated variables known to affect performance. In the first experiment, we manipulated the relative distances of aircraft to a common destination while holding aircraft speeds constant. In a follow-up experiment, we introduced a direct manipulation of relative speed. We then fit a sequential sampling model to the data, and used the best fitting parameters to infer the decision-making processes responsible for performance. Findings were consistent with the theory that people adapt to their own uncertainty by adjusting their criterion and the amount of time they take to collect evidence in order to make a more accurate decision. From a practical perspective, the paper demonstrates that one can use a sequential sampling model to understand performance in a dynamic environment, allowing one to make sense of and interpret complex patterns of empirical findings that would otherwise be difficult to interpret using standard statistical analyses. PsycINFO Database Record (c) 2013 APA, all rights reserved.
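A sequential sampling model of the kind fitted here can be simulated in a few lines: noisy evidence accumulates toward one of two bounds, the bound that is hit gives the judgment, and raising the bound trades speed for accuracy. A hedged toy sketch with invented parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

def trial(drift=0.5, noise=1.0, bound=1.0, dt=0.01):
    # Accumulate evidence until one of the two decision bounds is crossed.
    x, t = 0.0, 0.0
    while abs(x) < bound:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return ("conflict" if x > 0 else "no conflict"), t

choices, rts = zip(*(trial() for _ in range(1000)))
accuracy = np.mean([c == "conflict" for c in choices])  # drift > 0 => correct
print(f"accuracy = {accuracy:.2f}, mean decision time = {np.mean(rts):.2f} s")
```

Raising `bound` in this sketch mimics participants collecting more evidence when uncertain, the criterion adjustment the authors infer from their model fits.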
Lemons, B; Khaing, H; Ward, A; Thakur, P
2018-06-01
A new sequential separation method for the determination of polonium and actinides (Pu, Am and U) in drinking water samples has been developed that can be used for emergency response or routine water analyses. For the first time, the application of a TEVA chromatography column to the sequential separation of polonium and plutonium has been studied. This method utilizes a rapid Fe³⁺ co-precipitation step to remove matrix interferences, followed by plutonium oxidation state adjustment to Pu⁴⁺ and an incubation period of ~1 h at 50-60 °C to allow Po²⁺ to oxidize to Po⁴⁺. The polonium and plutonium were then separated on a TEVA column, while separation of americium from uranium was performed on a TRU column. After separation, polonium was micro-precipitated with copper sulfide (CuS), while actinides were micro-co-precipitated using neodymium fluoride (NdF₃) for counting by alpha spectrometry. The method is simple, robust and can be performed quickly with excellent removal of interferences, high chemical recovery and very good alpha peak resolution. The efficiency and reliability of the procedures were tested by using spiked samples. The effects of several transition metals (Cu²⁺, Pb²⁺, Fe³⁺, Fe²⁺, and Ni²⁺) on the performance of this method were also assessed to evaluate potential matrix effects. Studies indicate that the presence of up to 25 mg of these cations in the samples had no adverse effect on the recovery or the resolution of the polonium alpha peaks. Copyright © 2018 Elsevier Ltd. All rights reserved.
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Technical Reports Server (NTRS)
Galen, T. J. (Inventor)
1986-01-01
A fluid sampler for collecting a plurality of discrete samples over separate time intervals is described. The sampler comprises a sample assembly having an inlet and a plurality of discrete sample tubes, each of which has inlet and outlet sides. A multiport dual-acting valve is provided in the sampler in order to sequentially pass air from the sample inlet into the selected sample tubes. The sample tubes extend longitudinally of the housing and are located about its outer periphery so that, upon removal of an enclosure cover, they are readily accessible for operation of the sampler in an analysis mode.
Pistón, Mariela; Mollo, Alicia; Knochen, Moisés
2011-01-01
A fast and efficient automated method using a sequential injection analysis (SIA) system, based on the Griess reaction, was developed for the determination of nitrate and nitrite in infant formulas and milk powder. The system mixes a measured amount of sample (previously reconstituted in liquid form and deproteinized) with the chromogenic reagent to produce a colored compound whose absorbance is recorded. For nitrate determination, an on-line pre-reduction step was added by passing the sample through a Cd minicolumn. The system was controlled from a PC by means of a user-friendly program. Figures of merit include linearity (r² > 0.999 for both analytes), limits of detection (0.32 mg kg⁻¹ NO3-N and 0.05 mg kg⁻¹ NO2-N), and precision (sr%) of 0.8–3.0. Results were statistically in good agreement with those obtained with the reference ISO-IDF method. The sampling frequency was 30 h⁻¹ (nitrate) and 80 h⁻¹ (nitrite) when performed separately. PMID:21960750
An apparatus for sequentially combining microvolumes of reagents by infrasonic mixing.
Camien, M N; Warner, R C
1984-05-01
A method employing high-speed infrasonic mixing for obtaining timed samples to follow the progress of a moderately rapid chemical reaction is described. Drops of 10 to 50 microliters each of two reagents are mixed to initiate the reaction, followed, after a measured time interval, by mixing with a drop of a third reagent to quench the reaction. The method was developed for measuring the rate of denaturation of covalently closed, circular DNA in NaOH at several temperatures. For this purpose the timed samples were analyzed by analytical ultracentrifugation. The apparatus was tested by determination of the rate of hydrolysis of 2,4-dinitrophenyl acetate in an alkaline buffer. The important characteristics of the method are (i) it requires very small volumes of sample and reagents; (ii) the components of the reaction mixture are pre-equilibrated and mixed with no transfer outside the prescribed constant-temperature environment; (iii) the mixing is very rapid; and (iv) satisfactorily precise measurements of relatively short time intervals (approximately 2 sec minimum) between sequential mixings of the components are readily obtainable.
Furukawa, Makoto; Takagai, Yoshitaka
2016-10-04
Online solid-phase extraction (SPE) coupled with inductively coupled plasma mass spectrometry (ICP-MS) is a useful tool for automatic sequential analysis. However, it cannot simultaneously quantify the analytical targets and their recovery percentages (R%) in one-shot samples. We propose a system that acquires both data in a single sample injection. The main flow line of the online solid-phase extraction is divided into main and split flows. The split flow line (i.e., a bypass line), which circumvents the SPE column, was placed on the main flow line. Under program-controlled switching of the automatic valve, the ICP-MS sequentially measures the targets in a sample before and after column preconcentration and determines the target concentrations and the R% on the SPE column. This paper describes the system development and two demonstrations of its analytical significance: determination of ultratrace amounts of radioactive strontium (⁹⁰Sr) using a commercial Sr-trap resin, and multielement adsorbability on the SPE column. This system is applicable to other flow analyses and detectors in online solid-phase extraction.
NASA Astrophysics Data System (ADS)
Gonderman, S.; Tripathi, J. K.; Sizyuk, T.; Hassanein, A.
2017-08-01
Tungsten (W) has been selected as the divertor material in ITER based on its promising thermal and mechanical properties. Despite these advantages, continued investigation has revealed that W undergoes extreme surface morphology evolution in response to relevant fusion operating conditions. These complications spur the need for further exploration of W and other innovative plasma-facing components (PFCs) for future fusion devices. Recent literature has shown that alloying W with other refractory metals, such as tantalum (Ta), enhances key PFC properties including, but not limited to, ductility, hydrogen isotope retention, and helium ion (He⁺) radiation tolerance. In the present study, pure W and W-Ta alloys are exposed to simultaneous and sequential low-energy He⁺ and deuterium (D⁺) ion beam irradiations at high (1223 K) and low (523 K) temperatures. The goal of this study is to cultivate a complete understanding of the synergistic effects induced by dual and sequential ion irradiation on W and W-Ta alloy surface morphology evolution. For the dual ion beam experiments, W and W-Ta samples were subjected to four different He⁺:D⁺ ion ratios (100% He⁺, 60% D⁺ + 40% He⁺, 90% D⁺ + 10% He⁺ and 100% D⁺) with a constant total He⁺ fluence of 6 × 10²⁴ ions m⁻². The W and W-Ta samples both exhibit the expected damaged surfaces under the 100% He⁺ irradiation, but as the ratio of D⁺/He⁺ ions increases there is a clear suppression of the surface morphology at high temperatures. This observation is supported by the sequential experiments, which show a similar suppression of surface morphology when W and W-Ta samples are first exposed to low-energy He⁺ irradiation and then to subsequent low-energy D⁺ irradiation at high temperatures. Interestingly, this morphology suppression is not observed at low temperatures, implying that a temperature-dependent D-W interaction mechanism is driving the suppression of microstructure evolution in both pure W and the W-Ta alloys. Minor irradiation tolerance enhancement in the performance of the W-Ta samples is also observed.
Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao
2018-01-01
After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress. PMID:29719522
Dave, Hreem; Phoenix, Vidya; Becker, Edmund R; Lambert, Scott R
2010-08-01
To compare the incidence of adverse events, visual outcomes, and economic costs of sequential vs simultaneous bilateral cataract surgery for infants with congenital cataracts. We retrospectively reviewed infants with congenital cataracts who underwent sequential or simultaneous bilateral cataract surgery at age 6 months or younger at our institution. Records were available for 10 children who underwent sequential surgery at a mean age of 49 days for the first eye and 17 children who underwent simultaneous surgery at a mean age of 68 days (P = .25). We found a similar incidence of adverse events in the 2 treatment groups. Intraoperative or postoperative complications occurred in 14 eyes. The most common postoperative complication was glaucoma. No eyes developed endophthalmitis. The mean (SD) absolute interocular difference in logMAR visual acuities was 0.47 (0.76) for the sequential group and 0.44 (0.40) for the simultaneous group (P = .92). Payments for the hospital, drugs, supplies, and professional services were on average 21.9% lower per patient in the simultaneous group. Simultaneous bilateral cataract surgery for infants with congenital cataracts is associated with a 21.9% reduction in medical payments and no discernible difference in the incidence of adverse events or visual outcomes. However, our small sample size limits our ability to make meaningful comparisons of the relative risks and visual benefits of the 2 procedures.
Richards, Tara N; Branch, Kathryn A; Ray, Katherine
2014-01-01
Little is known about the role social support may play in reducing the risk of adolescent dating violence perpetration and victimization. This study is a longitudinal analysis of the independent impact of social support from friends and parents on the risk of emotional and physical dating violence perpetration and victimization among a large sample of female youth (n = 346). Findings indicate that 22% of the sample reported perpetrating physical dating violence against a partner, whereas almost 16% reported being the victim of physical dating violence; 34% reported perpetrating emotional dating violence against a partner, whereas almost 39% reported being the victim of emotional dating violence. Negative binomial regression models indicated that increased levels of support from friends at Time 1 were associated with significantly less physical and emotional dating violence perpetration and emotional (but not physical) dating violence victimization at Time 2. Parental support was not significantly related to dating violence in any model. Implications for dating violence curricula and future research are addressed.
The Association between Romantic Relationships and Delinquency in Adolescence and Young Adulthood
Cui, Ming; Ueno, Koji; Fincham, Frank D.; Donnellan, M. Brent; Wickrama, K. A. S.
2011-01-01
This study examined the association between romantic relationships and delinquency in adolescence and young adulthood. Using a large, longitudinal, nationally representative sample, negative binomial regressions showed a positive association between romantic involvement and delinquency in adolescence. Further, the cumulative number of romantic relationships from adolescence to young adulthood was positively related to delinquency in young adulthood, even after controlling for earlier delinquency in adolescence. These analyses also controlled for the effects of participant gender, age at initial assessment, puberty, race/ethnicity, and other demographic characteristics (e.g., family structure and parents' education). Findings are discussed in terms of their implications for understanding the role of romantic relationships in the development of young people and for stimulating future research questions. PMID:22984343
NASA Technical Reports Server (NTRS)
Elizalde, E.; Gaztanaga, E.
1992-01-01
The dependence of counts in cells on the shape of the cell is studied for the large-scale galaxy distribution. A very concrete prediction can be made concerning the void distribution for scale-invariant models. The prediction is tested on a sample of the CfA catalog, and good agreement is found. It is observed that the probability of a cell being occupied is higher for some elongated cells. A phenomenological scale-invariant model for the observed distribution of counts in cells, an extension of the negative binomial distribution, is presented in order to illustrate how this dependence can be quantitatively determined. An original, intuitive derivation of this model is presented.
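For background, the void probability P(0) of a plain negative binomial counts-in-cells distribution follows directly from its probability mass function. The block below uses the standard parametrization with mean count \bar{N} and clustering parameter k; it is offered as orientation only, not as the paper's scale-invariant extension.

```latex
% Standard negative binomial counts-in-cells model (not the paper's
% extended version): pmf and the resulting void probability.
P(N) \;=\; \frac{\Gamma(N+k)}{\Gamma(k)\,N!}
\left(\frac{\bar{N}}{\bar{N}+k}\right)^{N}
\left(1+\frac{\bar{N}}{k}\right)^{-k},
\qquad
P(0) \;=\; \left(1 + \frac{\bar{N}}{k}\right)^{-k}.
```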
Spahr, N.E.; Boulger, R.W.
1997-01-01
Quality-control samples provide part of the information needed to estimate the bias and variability that result from sample collection, processing, and analysis. Quality-control samples of surface water collected for the Upper Colorado River National Water-Quality Assessment study unit for water years 1995–96 are presented and analyzed in this report. The types of quality-control samples collected include pre-processing split replicates, concurrent replicates, sequential replicates, post-processing split replicates, and field blanks. Analysis of the pre-processing split replicates, concurrent replicates, sequential replicates, and post-processing split replicates is based on differences between analytical results of the environmental samples and analytical results of the quality-control samples. Results of these comparisons indicate that variability introduced by sample collection, processing, and handling is low and will not affect interpretation of the environmental data. The differences for most water-quality constituents are on the order of plus or minus 1 or 2 lowest rounding units. A lowest rounding unit is equivalent to the magnitude of the least significant figure reported for analytical results. The use of lowest rounding units avoids some of the difficulty in comparing differences between pairs of samples when concentrations span orders of magnitude, and provides a measure of the practical significance of the effect of variability. Analysis of field-blank quality-control samples indicates that, with the exception of chloride and silica, no systematic contamination of samples is apparent. Chloride contamination probably was the result of incomplete rinsing of the dilute cleaning solution from the outlet ports of the decaport sample splitter. Silica contamination seems to have been introduced by the blank water. Sampling and processing procedures for water year 1997 have been modified as a result of these analyses.
Generalization of multifractal theory within quantum calculus
NASA Astrophysics Data System (ADS)
Olemskoi, A.; Shuda, I.; Borisyuk, V.
2010-03-01
On the basis of the deformed series in quantum calculus, we generalize the partition function and the mass exponent of a multifractal, as well as the average of a random variable distributed over a self-similar set. For the partition function, such expansion is shown to be determined by binomial-type combinations of the Tsallis entropies related to manifold deformations, while the mass exponent expansion generalizes the known relation τ_q = D_q(q-1). We find the equation for the set of averages related to ordinary, escort, and generalized probabilities in terms of the deformed expansion as well. Multifractals related to the Cantor binomial set, exchange currency series, and porous-surface condensates are considered as examples.
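For orientation, the undeformed relation that the paper generalizes, together with the standard Legendre transform to the singularity spectrum, reads as follows (standard multifractal formalism; the q-deformed version in the paper differs):

```latex
% Undeformed mass-exponent relation and the Legendre transform to the
% singularity spectrum f(alpha) of the standard multifractal formalism.
\tau_q = D_q\,(q-1), \qquad
\alpha(q) = \frac{d\tau_q}{dq}, \qquad
f(\alpha) = q\,\alpha(q) - \tau_q .
```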
Using real options analysis to support strategic management decisions
NASA Astrophysics Data System (ADS)
Kabaivanov, Stanimir; Markovska, Veneta; Milev, Mariyan
2013-12-01
Decision making is a complex process that requires taking into consideration multiple heterogeneous sources of uncertainty. Standard valuation and financial analysis techniques often fail to properly account for all these sources of risk, as well as for all sources of additional flexibility. In this paper we explore applications of a modified binomial tree method for real options analysis (ROA) in an effort to improve the decision-making process. Typical use cases of real options are analyzed, with an elaborate study of the applications and the advantages that company management can derive from them. Numerical results based on extending the simple binomial tree approach to multiple sources of uncertainty are provided to demonstrate the improvement effects on management decisions.
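To make the underlying machinery concrete, here is a minimal sketch of a plain Cox-Ross-Rubinstein binomial tree valuing a real option to invest as an American call on the project value. All parameter values are illustrative assumptions, and the paper's modified tree for multiple sources of uncertainty is more elaborate than this single-factor version.

```python
# A minimal Cox-Ross-Rubinstein (CRR) binomial tree sketch: a real
# option to invest, valued as an American call on project value V0 with
# investment cost K. Parameter values are illustrative assumptions.
import numpy as np

def crr_american_call(V0, K, r, sigma, T, steps):
    """Value an American call (option to invest) via backward induction."""
    dt = T / steps
    u = np.exp(sigma * np.sqrt(dt))        # up factor
    d = 1.0 / u                            # down factor
    p = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = np.exp(-r * dt)
    # option values at maturity (nodes ordered from most up-moves down)
    value = np.maximum(V0 * u ** np.arange(steps, -1, -1)
                       * d ** np.arange(0, steps + 1) - K, 0.0)
    for i in range(steps - 1, -1, -1):
        V = V0 * u ** np.arange(i, -1, -1) * d ** np.arange(0, i + 1)
        value = disc * (p * value[:-1] + (1 - p) * value[1:])
        value = np.maximum(value, V - K)   # early-exercise check: invest now?
    return value[0]

# Waiting has value: the option is worth more than max(V0 - K, 0) = 0 here.
print(crr_american_call(V0=100.0, K=110.0, r=0.05, sigma=0.30, T=2.0, steps=200))
```

The early-exercise check at every node is what distinguishes the option to wait from a simple net-present-value rule, which is the flexibility the abstract refers to.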
RnaSeqSampleSize: real data based sample size estimation for RNA sequencing.
Zhao, Shilin; Li, Chung-I; Guo, Yan; Sheng, Quanhu; Shyr, Yu
2018-05-30
One of the most important and often neglected components of a successful RNA sequencing (RNA-Seq) experiment is sample size estimation. A few negative binomial model-based methods have been developed to estimate sample size based on the parameters of a single gene. However, thousands of genes are quantified and tested for differential expression simultaneously in RNA-Seq experiments. Thus, additional issues should be carefully addressed, including the false discovery rate for multiple statistical tests and the widely distributed read counts and dispersions across genes. To solve these issues, we developed a sample size and power estimation method named RnaSeqSampleSize, based on the distributions of gene average read counts and dispersions estimated from real RNA-Seq data. Datasets from previous similar experiments, such as The Cancer Genome Atlas (TCGA), can be used as a point of reference. Read counts and their dispersions were estimated from the reference's distribution; using that information, we estimated and summarized the power and sample size. RnaSeqSampleSize is implemented in the R language and can be installed from the Bioconductor website. A user-friendly web graphical interface is provided at http://cqs.mc.vanderbilt.edu/shiny/RnaSeqSampleSize/ . RnaSeqSampleSize provides a convenient and powerful way to estimate power and sample size for an RNA-Seq experiment. It is also equipped with several unique features, including estimation for genes or pathways of interest, power curve visualization, and parameter optimization.
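To make the single-gene building block concrete, here is a rough simulation sketch of power under a negative binomial read-count model. It does not use the RnaSeqSampleSize package or reproduce its method; the crude t-test on log counts and the fixed alpha (standing in for an FDR-adjusted threshold) are simplifying assumptions.

```python
# A rough per-gene power simulation under a negative binomial read-count
# model (illustrative only; not the RnaSeqSampleSize method).
import numpy as np
from scipy import stats

def nb_draws(mean, dispersion, size, rng):
    """NB counts parameterized by mean and dispersion (var = m + d*m^2)."""
    r = 1.0 / dispersion                   # numpy's 'n' parameter
    return rng.negative_binomial(r, r / (r + mean), size=size)

def power_one_gene(mean, dispersion, fold_change, n_per_group,
                   alpha=0.001, n_sim=2000, seed=1):
    """Estimate power with a crude two-sample t-test on log counts."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        a = nb_draws(mean, dispersion, n_per_group, rng)
        b = nb_draws(mean * fold_change, dispersion, n_per_group, rng)
        hits += stats.ttest_ind(np.log1p(a), np.log1p(b)).pvalue < alpha
    return hits / n_sim

print(power_one_gene(mean=50, dispersion=0.2, fold_change=2.0, n_per_group=5))
```

Repeating such a calculation over the empirical distribution of read counts and dispersions from a reference dataset is the general idea behind data-based power summaries.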
Silva, Alisson R; Rodrigues-Silva, Nilson; Pereira, Poliana S; Sarmento, Renato A; Costa, Thiago L; Galdino, Tarcísio V S; Picanço, Marcelo C
2017-12-05
The common blossom thrips, Frankliniella schultzei Trybom (Thysanoptera: Thripidae), is an important lettuce pest worldwide. Conventional sampling plans are the first step in implementing decision-making systems into integrated pest management programs. However, this tool is not available for F. schultzei infesting lettuce crops. Thus, the objective of this work was to develop a conventional sampling plan for F. schultzei in lettuce crops. Two sampling techniques (direct counting and leaf beating on a white plastic tray) were compared in crisphead, looseleaf, and Boston lettuce varieties before and during head formation. The frequency distributions of F. schultzei densities in lettuce crops were assessed, and the number of samples required to compose the sampling plan was determined. Leaf beating on a white plastic tray was the best sampling technique. F. schultzei densities obtained with this technique were fitted to the negative binomial distribution with a common aggregation parameter (common K = 0.3143). The developed sampling plan is composed of 91 samples per field and presents low errors in its estimates (up to 20%), fast execution time (up to 47 min), and low cost (up to US $1.67 per sampling area). This sampling plan can be used as a tool for integrated pest management in lettuce crops, assisting with reliable decision making in different lettuce varieties before and during head formation. © The Author(s) 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
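For intuition about how the aggregation parameter K drives the number of required samples, here is a sketch using the standard fixed-precision (Karandinos-type) sample-size formula for negative binomial counts, n = (1/D^2)(1/m + 1/k). This is a textbook formula offered as an assumption; it is not necessarily how the 91-sample plan above was derived.

```python
# Fixed-precision sample size for negative binomial counts:
# n = (1/D^2) * (1/m + 1/k), where D is the desired standard error of
# the mean expressed as a fraction of the mean m, and k is the
# aggregation parameter. A standard formula, not the paper's exact plan.
def nb_sample_size(mean_density, k, rel_precision):
    """Samples needed so SE(mean) = rel_precision * mean."""
    return (1.0 / rel_precision**2) * (1.0 / mean_density + 1.0 / k)

K = 0.3143  # common aggregation parameter reported in the abstract
for m in (0.5, 1.0, 2.0):
    print(f"mean density {m}: n ≈ {nb_sample_size(m, K, rel_precision=0.25):.0f}")
```

Small K (strong aggregation) inflates the 1/k term, which is why heavily clumped pests such as thrips demand comparatively large sample numbers.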
Passive Baited Sequential Fly Trap
USDA-ARS?s Scientific Manuscript database
Sampling fly populations associated with human populations is needed to understand diel behavior and to monitor population densities before and after control operations. Population control measures are dependent on the results of monitoring efforts as they may provide insight into the fly behavior ...
P-Hacking in Orthopaedic Literature: A Twist to the Tail.
Bin Abd Razak, Hamid Rahmatullah; Ang, Jin-Guang Ernest; Attal, Hersh; Howe, Tet-Sen; Allen, John Carson
2016-10-19
"P-hacking" occurs when researchers preferentially select data or statistical analyses until nonsignificant results become significant. We wanted to evaluate if the phenomenon of p-hacking was evident in orthopaedic literature. We text-mined through all articles published in three top orthopaedic journals in 2015. For anonymity, we cipher-coded the three journals. We included all studies that reported a single p value to answer their main hypothesis. These p values were then charted and frequency graphs were generated to illustrate any evidence of p-hacking. Binomial tests were employed to look for evidence of evidential value and significance of p-hacking. Frequency plots for all three journals revealed evidence of p-hacking. Binomial tests for all three journals were significant for evidence of evidential value (p < 0.0001 for all). However, the binomial test for p-hacking was significant only for one journal (p = 0.0092). P-hacking is an evolving phenomenon that threatens to jeopardize the evidence-based practice of medicine. Although our results show that there is good evidential value for orthopaedic literature published in our top journals, there is some evidence of p-hacking of which authors and readers should be wary. Copyright © 2016 by The Journal of Bone and Joint Surgery, Incorporated.
Martina, R; Kay, R; van Maanen, R; Ridder, A
2015-01-01
Clinical studies in overactive bladder have traditionally used analysis of covariance or nonparametric methods to analyse the number of incontinence episodes and other count data. It is known that if the underlying distributional assumptions of a particular parametric method do not hold, an alternative parametric method may be more efficient than a nonparametric one, which makes no assumptions regarding the underlying distribution of the data. Therefore, there are advantages in using methods based on the Poisson distribution or extensions of that method, which incorporate specific features that provide a modelling framework for count data. One challenge with count data is overdispersion, but methods are available that can account for this through the introduction of random effect terms in the modelling, and it is this modelling framework that leads to the negative binomial distribution. These models can also provide clinicians with a clearer and more appropriate interpretation of treatment effects in terms of rate ratios. In this paper, the previously used parametric and non-parametric approaches are contrasted with those based on Poisson regression and various extensions in trials evaluating solifenacin and mirabegron in patients with overactive bladder. In these applications, negative binomial models are seen to fit the data well. Copyright © 2014 John Wiley & Sons, Ltd.
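A minimal sketch of the modelling framework described above: a negative binomial regression on simulated incontinence-episode counts, with the treatment effect read off as a rate ratio. Variable names and parameter values are hypothetical.

```python
# Negative binomial regression for overdispersed episode counts, with
# the treatment effect reported as a rate ratio (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 400
treat = rng.integers(0, 2, n)                       # 0 = placebo, 1 = active
mu = np.exp(1.2 - 0.4 * treat)                      # true rate ratio exp(-0.4)
episodes = rng.negative_binomial(1.5, 1.5 / (1.5 + mu))   # overdispersed counts

X = sm.add_constant(treat.astype(float))
fit = sm.GLM(episodes, X,
             family=sm.families.NegativeBinomial(alpha=1 / 1.5)).fit()
rr = np.exp(fit.params[1])
lo, hi = np.exp(fit.conf_int()[1])
print(f"rate ratio {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Exponentiating the coefficient turns it into the rate ratio the abstract highlights as the clinically interpretable treatment summary.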
Statistical tests to compare motif count exceptionalities
Robin, Stéphane; Schbath, Sophie; Vandewalle, Vincent
2007-01-01
Background: Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results: We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion: The exact binomial test is particularly adapted for small counts. For large counts, we advise using the likelihood ratio test, which is asymptotic but strongly correlated with the exact binomial test and very simple to use. PMID:17346349
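The conditional construction behind such an exact binomial comparison is compact: under Poisson count models, given the total count of the motif in the two sequences, the count in the first sequence is binomial with success probability fixed by the two expected counts. A sketch with made-up numbers (this follows the standard conditional argument; details of the paper's test, such as overlap corrections, are omitted):

```python
# Exact binomial comparison of one motif's counts in two sequences:
# conditional on the total, the first count is Binomial(n1 + n2, p) with
# p = e1 / (e1 + e2) under the null of equal exceptionality.
from scipy.stats import binomtest

n1, n2 = 54, 31          # observed motif occurrences (hypothetical)
e1, e2 = 40.0, 38.0      # expected occurrences under each sequence model

res = binomtest(n1, n1 + n2, p=e1 / (e1 + e2), alternative="two-sided")
print(f"p value for 'equally exceptional': {res.pvalue:.4f}")
```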
NASA Technical Reports Server (NTRS)
Generazio, Edward R.
2014-01-01
Unknown risks are introduced into failure critical systems when probability of detection (POD) capabilities are accepted without a complete understanding of the statistical method applied and the interpretation of the statistical results. The presence of this risk in the nondestructive evaluation (NDE) community is revealed in common statements about POD. These statements are often interpreted in a variety of ways and therefore, the very existence of the statements identifies the need for a more comprehensive understanding of POD methodologies. Statistical methodologies have data requirements to be met, procedures to be followed, and requirements for validation or demonstration of adequacy of the POD estimates. Risks are further enhanced due to the wide range of statistical methodologies used for determining the POD capability. Receiver/Relative Operating Characteristics (ROC) Display, simple binomial, logistic regression, and Bayes' rule POD methodologies are widely used in determining POD capability. This work focuses on Hit-Miss data to reveal the framework of the interrelationships between Receiver/Relative Operating Characteristics Display, simple binomial, logistic regression, and Bayes' Rule methodologies as they are applied to POD. Knowledge of these interrelationships leads to an intuitive and global understanding of the statistical data, procedural and validation requirements for establishing credible POD estimates.
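As one concrete example of the methodologies named above, here is a minimal hit/miss POD fit via logistic regression on log flaw size, with the 90% POD size (a90) recovered by inverting the fitted logit. Data are simulated, and a credible POD demonstration would add confidence bounds and validation checks beyond this sketch.

```python
# Hit/miss POD sketch: logistic regression of detection outcome on log
# flaw size, then the flaw size at 90% POD (a90). Simulated data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
size = rng.uniform(0.5, 5.0, 200)                        # flaw sizes (mm)
true_pod = 1 / (1 + np.exp(-(-4.0 + 2.5 * np.log(size))))
hit = rng.random(200) < true_pod                         # hit/miss outcomes

X = sm.add_constant(np.log(size))
fit = sm.GLM(hit.astype(float), X, family=sm.families.Binomial()).fit()
b0, b1 = fit.params
a90 = np.exp((np.log(0.90 / 0.10) - b0) / b1)            # invert logit at POD=0.9
print(f"a90 ≈ {a90:.2f} mm")
```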
Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, where particle sampling, weighting, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture, targeting a balance between hardware resources and speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
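For readers unfamiliar with the SIRF steps being mapped to hardware, here is a software sketch of the sample/weight/output/resample loop on a 1-D toy tracking problem. It illustrates the algorithm only, not the proposed architectures; the model parameters are arbitrary.

```python
# A minimal sequential importance resampling (SIR) particle filter on a
# 1-D toy tracking problem, showing the steps the paper implements in
# hardware. Model and noise parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, T = 500, 50
x_true = 0.0
particles = rng.normal(0.0, 1.0, N)

for t in range(T):
    x_true = 0.95 * x_true + rng.normal(0.0, 0.5)        # hidden state
    z = x_true + rng.normal(0.0, 1.0)                    # noisy measurement
    # 1) sample: propagate particles through the motion model
    particles = 0.95 * particles + rng.normal(0.0, 0.5, N)
    # 2) weight: Gaussian likelihood (the paper replaces this exponential
    #    with a piecewise linear function to simplify the hardware)
    w = np.exp(-0.5 * (z - particles) ** 2)
    w /= w.sum()
    # 3) output: weighted mean state estimate
    estimate = np.dot(w, particles)
    # 4) resample: systematic (inverse-CDF) resampling
    positions = (rng.random() + np.arange(N)) / N
    idx = np.searchsorted(np.cumsum(w), positions)
    particles = particles[np.minimum(idx, N - 1)]

print(f"final estimate {estimate:.2f}, truth {x_true:.2f}")
```

The serial dependency in step 4 is exactly why the resampling stage dominates the latency analysis and motivates the parallel resampling architecture.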
[Sequential monitoring of renal transplant with aspiration cytology].
Manfro, R C; Gonçalves, L F; de Moura, L A
1998-01-01
To evaluate the utility of kidney aspiration cytology in the sequential monitoring of acute rejection in renal transplant patients. Thirty patients underwent 376 aspirations. The clinical diagnoses were established independently. The representativeness of the samples reached 82.7%. The total corrected increment index and the number of immunoactivated cells were higher during acute rejection than during normal allograft function, acute tubular necrosis, and cyclosporine nephrotoxicity. The parameters for the diagnosis of acute rejection were sensitivity 71.8%, specificity 87.3%, positive predictive value 50.9%, negative predictive value 94.9%, and accuracy 84.9%. The false positive results were mainly related to cytomegalovirus infection or to the administration of OKT3. In 10 of 11 false negative results, incipient immunoactivation was present, alerting to the possibility of acute rejection. Kidney aspiration cytology is a useful tool for the sequential monitoring of acute rejection in renal transplant patients. The best results are obtained when the results of aspiration cytology are analyzed together with the clinical data.
Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing
2017-11-10
The speckle pattern (line-by-line) sequential extraction (SPSE) metric is proposed on the basis of one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulations, we discuss the SPSE metric's range of application under theoretical conditions and show that the aperture size of the observation system affects the metric's performance. The results of the analyses are verified by experiment. The method is applied to the detection of relatively static targets (speckle jitter frequency less than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance and, moreover, can estimate the spot size under certain conditions. Therefore, monitoring and feedback of the far-field spot can be implemented in laser focusing system applications, helping such systems optimize their focusing performance.
Zou, Wei; Sissons, Mike; Gidley, Michael J; Gilbert, Robert G; Warren, Frederick J
2015-12-01
The aim of the present study is to characterise the influence of gluten structure on the kinetics of starch hydrolysis in pasta. Spaghetti and powdered pasta were prepared from three different cultivars of durum semolina, and starch was also purified from each cultivar. Digestion kinetic parameters were obtained through logarithm-of-slope analysis, allowing identification of sequential digestion steps. Purified starch and semolina were digested following a single first-order rate constant, while pasta and powdered pasta followed two sequential first-order rate constants. Rate coefficients were altered by pepsin hydrolysis. Confocal microscopy revealed that, following cooking, starch granules were completely swollen for starch, semolina and pasta powder samples. In pasta, they were completely swollen in the external regions, partially swollen in the intermediate region and almost intact in the pasta strand centre. Gluten entrapment accounts for sequential kinetic steps in starch digestion of pasta; the compact microstructure of pasta also reduces digestion rates. Copyright © 2015 Elsevier Ltd. All rights reserved.
Lopes, G; Costa, E T S; Penido, E S; Sparks, D L; Guilherme, L R G
2015-09-01
Mining and smelting activities are potential sources of heavy metal contamination, which poses a threat to human health and ecological systems. This study investigated single and sequential extractions of Zn, Pb, and Cd in Brazilian soils affected by mining and smelting activities. Soils from a Zn mining area (soils A, B, C, D, E, and the control soil) and a tailing from a smelting area were collected in Minas Gerais state, Brazil. The samples were subjected to single (using Mehlich I solution) and sequential extractions. The risk assessment code (RAC), the redistribution index (U_ts), and the reduced partition index (I_R) were applied to the sequential extraction data. Zinc and Cd in soil samples from the mining area were found mainly associated with carbonate forms. The same pattern did not occur for Pb. Moreover, the Fe-Mn oxide and residual fractions made important contributions for Zn and Pb in those soils. For the tailing, more than 70% of the Zn and Cd were released in the exchangeable fraction, showing a much higher mobility and availability of these metals at this site, which was also supported by the results of RAC and I_R. These differences in mobility might be due to different chemical forms of the metals at the two sites, attributable to natural occurrence as well as ore processing.
Santos, A J G; Mazzilli, B P; Fávaro, D I T; Silva, P S C
2006-01-01
Phosphogypsum is a waste produced by the phosphate fertilizer industry. Although phosphogypsum is mainly calcium sulphate dihydrate, it contains elevated levels of impurities, which originate from the source phosphate rock used in the phosphoric acid production. Among these impurities, radionuclides from 238U and 232Th decay series are of most concern due to their radiotoxicity. Other elements, such as rare earth elements (REE) and Ba are also enriched in the phosphogypsum. The bioavailability of radionuclides (226Ra, 210Pb and 232Th), rare earth elements and Ba to the surrounding aquatic system was evaluated by the application of sequential leaching of the phosphogypsum samples from the Brazilian phosphoric acid producers. The sequential extraction results show that most of the radium and lead are located in the "iron oxide" (non-CaSO4) fraction, and that only 13-18% of these radionuclides are distributed in the most labile fraction. Th, REE and Ba were found predominantly in the residual phase, which corresponds to a small fraction of the phosphate rock or monazite that did not react and to insoluble compounds such as sulphates, phosphates and silicates. It can be concluded that although all these elements are enriched in the phosphogypsum samples they are not associated with CaSO4 itself and therefore do not represent a threat to the surrounding aquatic environment.
Lee, Chang-Hyung; Derby, Richard; Choi, Hyun-Seok; Lee, Sang-Heon; Kim, Se Hoon; Kang, Yoon Kyu
2010-01-01
One technique in radiofrequency neurotomy uses 2 electrodes placed simultaneously to lie parallel to one another. Comparing lesions in cadaveric interspinous ligament tissue and measuring the temperature change in egg white allows the lesion area to be measured quantitatively. Fresh cadaveric spinal tissue and egg white were used. A series of samples was prepared with the electrodes placed 1 to 7 mm apart. Using radiofrequency, the needle electrodes were heated sequentially or simultaneously, and the extent of the escaped lesion area and the temperature were measured. Samples of cadaveric interspinous ligament showed that sequential heating limits effective electrode spacing to 2 mm apart, versus up to 4 mm apart when the electrodes are heated simultaneously. In the egg white, the temperature at the escaped lesion area decreased with distance. There was a significant difference in temperature at the escaped lesion area up to 6 mm apart, and the temperature remained above 50 degrees Celsius up to 5 mm apart for the simultaneous lesion and up to 3 mm apart for the sequential lesion. The limitations of this study include cadaveric experimentation and the use of interspinous ligament rather than the medial branch of the dorsal ramus, which is difficult to identify. Heating the 2 electrodes simultaneously appears to coagulate a wider area and potentially produce better results in less time.
Impact of cigarette smoking on utilization of nursing home services.
Warner, Kenneth E; McCammon, Ryan J; Fries, Brant E; Langa, Kenneth M
2013-11-01
Few studies have examined the effects of smoking on nursing home utilization, generally using poor data on smoking status. No previous study has distinguished utilization for recent from long-term quitters. Using the Health and Retirement Study, we assessed nursing home utilization by never-smokers, long-term quitters (quit >3 years), recent quitters (quit ≤3 years), and current smokers. We used logistic regression to evaluate the likelihood of a nursing home admission. For those with an admission, we used negative binomial regression on the number of nursing home nights. Finally, we employed zero-inflated negative binomial regression to estimate nights for the full sample. Controlling for other variables, compared with never-smokers, long-term quitters have an odds ratio (OR) for nursing home admission of 1.18 (95% CI: 1.07-1.2), current smokers 1.39 (1.23-1.57), and recent quitters 1.55 (1.29-1.87). The probability of admission rises rapidly with age and is lower for African Americans and Hispanics, more affluent respondents, respondents with a spouse present in the home, and respondents with a living child. Given admission, smoking status is not associated with length of stay (LOS). LOS is longer for older respondents and women and shorter for more affluent respondents and those with spouses present. Compared with otherwise identical never-smokers, former and current smokers have a significantly increased risk of nursing home admission. That recent quitters are at greatest risk of admission is consistent with evidence that many stop smoking because they are sick, often due to smoking.
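A minimal sketch of the zero-inflated negative binomial pattern used for the full-sample analysis above, on simulated data with hypothetical variable names: a logit component models the structural zeros (never admitted) and an NB component models nights among potential users. statsmodels' ZeroInflatedNegativeBinomialP is one available implementation, not necessarily the authors' software.

```python
# Zero-inflated negative binomial for "nursing home nights": logit part
# for structural zeros, NB part for nights among users. Simulated data.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

rng = np.random.default_rng(3)
n = 1000
smoker = rng.integers(0, 2, n).astype(float)
p_never = 1.0 / (1.0 + np.exp(-(1.0 - 0.5 * smoker)))    # structural zeros
never_admitted = rng.random(n) < p_never
mu = np.exp(2.0 + 0.2 * smoker)                          # NB mean for users
nights = np.where(never_admitted, 0,
                  rng.negative_binomial(1.0, 1.0 / (1.0 + mu)))

X = sm.add_constant(smoker)
fit = ZeroInflatedNegativeBinomialP(nights, X, exog_infl=X, p=2).fit(
    method="bfgs", maxiter=500, disp=False)
print(fit.summary())
```

Separating the admission (zero) process from the length-of-stay (count) process mirrors the abstract's finding that smoking predicts admission but not length of stay.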
Dental Caries and Enamel Defects in Very Low Birth Weight Adolescents
Nelson, S.; Albert, J.M.; Lombardi, G.; Wishnek, S.; Asaad, G.; Kirchner, H.L.; Singer, L.T.
2011-01-01
Objectives The purpose of this study was to examine developmental enamel defects and dental caries in very low birth weight adolescents with high risk (HR-VLBW) and low risk (LR-VLBW) compared to full-term (term) adolescents. Methods The sample consisted of 224 subjects (80 HR-VLBW, 59 LR-VLBW, 85 term adolescents) recruited from an ongoing longitudinal study. Sociodemographic and medical information was available from birth. Dental examination of the adolescent at the 14-year visit included: enamel defects (opacity and hypoplasia); decayed, missing, filled teeth of incisors and molars (DMFT-IM) and of overall permanent teeth (DMFT); Simplified Oral Hygiene Index for debris/calculus on teeth, and sealant presence. A caregiver questionnaire completed simultaneously assessed dental behavior, access, insurance status and prevention factors. Hierarchical analysis utilized the zero-inflated negative binomial model and zero-inflated Poisson model. Results The zero-inflated negative binomial model controlling for sociodemographic variables indicated that the LR-VLBW group had an estimated 75% increase (p < 0.05) in number of demarcated opacities in the incisors and first molar teeth compared to the term group. Hierarchical modeling indicated that demarcated opacities were a significant predictor of DMFT-IM after control for relevant covariates. The term adolescents had significantly increased DMFT-IM and DMFT scores compared to the LR-VLBW adolescents. Conclusion LR-VLBW was a significant risk factor for increased enamel defects in the permanent incisors and first molars. Term children had increased caries compared to the LR-VLBW group. The effect of birth group and enamel defects on caries has to be investigated longitudinally from birth. PMID:20975268
Hopelessness as a Predictor of Suicide Ideation in Depressed Male and Female Adolescent Youth.
Wolfe, Kristin L; Nakonezny, Paul A; Owen, Victoria J; Rial, Katherine V; Moorehead, Alexandra P; Kennard, Beth D; Emslie, Graham J
2017-12-21
We examined hopelessness as a predictor of suicide ideation in depressed youth after acute medication treatment. A total of 158 depressed adolescents were administered the Children's Depression Rating Scale-Revised (CDRS-R) and Columbia Suicide Severity Rating Scale (C-SSRS) as part of a larger battery at baseline and at weekly visits across 6 weeks of acute fluoxetine treatment. The Beck Hopelessness Scale (BHS) was administered at baseline and week 6. A negative binomial regression model via a generalized estimating equation analysis of repeated measures was used to estimate suicide ideation over the 6 weeks of acute treatment from baseline measure of hopelessness. Depression severity and gender were included as covariates in the model. The negative binomial analysis was also conducted separately for the sample of males and females (in a gender-stratified analysis). Mean CDRS-R total scores were 60.30 ± 8.93 at baseline and 34.65 ± 10.41 at week 6. Mean baseline and week 6 BHS scores were 9.57 ± 5.51 and 5.59 ± 5.38, respectively. Per the C-SSRS, 43.04% and 83.54% reported having no suicide ideation at baseline and at week 6, respectively. The analyses revealed that baseline hopelessness was positively related to suicide ideation over treatment (p = .0027), independent of changes in depression severity. This significant finding persisted only for females (p = .0024). These results indicate the importance of early identification of hopelessness. © 2017 The American Association of Suicidology.
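A minimal sketch of the analysis pattern described: repeated weekly ideation counts modeled with a negative binomial GEE, with baseline hopelessness as the predictor. Data are simulated, variable names are hypothetical, and the study's covariates (depression severity, gender) are omitted for brevity.

```python
# Negative binomial GEE for repeated weekly suicide-ideation counts,
# predicted by baseline hopelessness (simulated data, hypothetical names).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
n_subj, n_weeks = 158, 6
subj = np.repeat(np.arange(n_subj), n_weeks)
bhs = np.repeat(rng.uniform(0, 20, n_subj), n_weeks)   # baseline hopelessness
mu = np.exp(-1.0 + 0.06 * bhs)
ideation = rng.negative_binomial(1.0, 1.0 / (1.0 + mu))

X = sm.add_constant(bhs)
fit = sm.GEE(ideation, X, groups=subj,
             family=sm.families.NegativeBinomial(alpha=1.0),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print(fit.summary())
```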
Drivers of multidimensional eco-innovation: empirical evidence from the Brazilian industry.
da Silva Rabêlo, Olivan; de Azevedo Melo, Andrea Sales Soares
2018-03-08
The study analyses the relationships between the main drivers of eco-innovation introduced by innovative industries, focusing on cooperation strategy. Eco-innovation is analysed by means of a multidimensional identification strategy, showing the relationships between the independent variables and the variable of interest. The literature discussing environmental innovation differs from the literature discussing other types of innovation inasmuch as it seeks to grasp its determinants and mostly highlights the relevance of environmental regulation. The key feature of this paper is that it ascribes special relevance to cooperation with external partners and to the propensity of innovative industry to introduce eco-innovation. A sample of 35,060 Brazilian industries was analysed, between 2003 and 2011, by means of binomial, multinomial, and ordinal logistic regressions on microdata from the innovation survey (PINTEC) of the Brazilian Institute of Geography and Statistics (Instituto Brasileiro de Geografia e Estatística). The econometric results estimated by the multinomial logit method suggest that cooperation with external partners facilitates the adoption of eco-innovation, with probabilities of 64.59% in dimension 01, 57.63% in dimension 02, and 81.02% in dimension 03. The data reveal that the higher the degree of eco-innovation complexity, the more intensively industries seek cooperation with external partners. Under the ordinal and binomial logit models, cooperation is associated with eco-innovation probabilities of 65.09% and 89.34%, respectively. Environmental regulation and innovation in product and information management were also positively correlated as drivers of eco-innovation.
Impact of early childhood caries on oral health-related quality of life of preschool children.
Li, M Y; Zhi, Q H; Zhou, Y; Qiu, R M; Lin, H C
2015-03-01
Child oral health-related quality of life (COHRQoL) has been assessed in developed areas; however, it remains unstudied in mainland China. Studies on COHRQoL would benefit a large number of children in China suffering from oral health problems such as dental caries. This study explored the relationship between COHRQoL and early childhood caries, adjusted for socioeconomic factors, in 3- to 4-year-old children in a region of southern China. In this study, 1062 children aged 3-4 years were recruited by cluster sampling, and their oral health status was examined by a trained dentist. The Chinese version of the Early Childhood Oral Health Impact Scale (ECOHIS) and questions about the children's socioeconomic conditions were completed by the children's parents. A negative binomial regression analysis was used to assess the prevalence of early childhood caries among the children and its influence on COHRQoL. The total ECOHIS scores of the returned scale sets ranged from 0 to 31, and the average score was 3.1 ± 5.1. The negative binomial analysis showed that the dmfs indices were significantly associated with the ECOHIS score and subscale scores (P < 0.05). The multivariate adjusted model showed that a higher dmft index was associated with a greater negative impact on COHRQoL (RR = 1.10; 95% CI = 1.07, 1.13; P < 0.05). However, demographic and socioeconomic factors were not associated with COHRQoL (P > 0.05). The severity of early childhood caries has a negative impact on the oral health-related quality of life of preschool children and their parents.