Sample records for sequential importance sampling

  1. Inter-relationships of Salmonella status of flock and grow-out environment at sequential segments in broiler production and processing

    USDA-ARS's Scientific Manuscript database

    Effective Salmonella control in broilers is important from the standpoint of both consumer protection and industry viability. We investigated associations between Salmonella recovery from different sample types collected at sequential stages of one grow-out from the broiler flock and production env...

  2. Propagating probability distributions of stand variables using sequential Monte Carlo methods

    Treesearch

    Jeffrey H. Gove

    2009-01-01

    A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
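
The predictor-corrector cycle of the SIR filter can be sketched for a toy model; the 1-D random-walk state and the noise levels below are illustrative assumptions, not Gove's stand-growth model:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_filter(observations, n_particles=500):
    """Sampling importance resampling (SIR) particle filter for a toy 1-D
    model: x_t = x_{t-1} + N(0, 0.5^2), y_t = x_t + N(0, 0.8^2).
    Returns the filtered posterior-mean estimate after each observation."""
    particles = rng.normal(0.0, 1.0, n_particles)   # draw from the prior
    estimates = []
    for y in observations:
        # Predictor: propagate every particle through the state equation
        particles = particles + rng.normal(0.0, 0.5, n_particles)
        # Corrector: importance weights proportional to p(y | particle)
        weights = np.exp(-0.5 * ((y - particles) / 0.8) ** 2)
        weights /= weights.sum()
        estimates.append(float(np.dot(weights, particles)))
        # Resample to curb weight degeneracy (the "R" in SIR)
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
    return estimates
```

Fed a slowly drifting series, the filtered means track the observations while smoothing the observation noise.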

  3. Type I error probability spending for post-market drug and vaccine safety surveillance with binomial data.

    PubMed

    Silva, Ivair R

    2018-01-15

    Type I error probability spending functions are commonly used for designing sequential analyses of binomial data in clinical trials, and they are also quickly being adopted for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, it is still important to minimize the sample size when the null hypothesis is not rejected. In post-market safety surveillance that is not the case: especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is better suited to post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.

  4. RACE/A: an architectural account of the interactions between learning, task control, and retrieval dynamics.

    PubMed

    van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels

    2012-01-01

    This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use RACE/A to model data from two variants of a picture-word interference task in a psychological refractory period design. These models will demonstrate how RACE/A enables interactions between sequential sampling and long-term declarative learning, and between sequential sampling and task control. In a traditional sequential sampling model, the onset of the process within the task is unclear, as is the number of sampling processes. RACE/A provides a theoretical basis for estimating the onset of sequential sampling processes during task execution and allows for easy modeling of multiple sequential sampling processes within a task. Copyright © 2011 Cognitive Science Society, Inc.

  5. Sequential sampling: a novel method in farm animal welfare assessment.

    PubMed

    Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J

    2016-02-01

    Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. 
For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
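
The 'basic' two-stage scheme lends itself to a short simulation sketch. The pass/fail threshold, the early-stopping margin, and the Bernoulli model of lameness below are illustrative assumptions, not values from the study:

```python
import random

def two_stage_classify(true_prevalence, n_wq, threshold=0.20, margin=0.05, seed=42):
    """Sketch of the 'basic' scheme: score half the Welfare Quality sample
    size; if the observed lameness prevalence is clearly away from the
    pass/fail threshold, stop early, otherwise score the second half.
    Cows are simulated as independent Bernoulli draws."""
    rng = random.Random(seed)
    n1 = n_wq // 2
    lame = sum(rng.random() < true_prevalence for _ in range(n1))
    if abs(lame / n1 - threshold) > margin:          # clear-cut: stop early
        return ("fail" if lame / n1 > threshold else "pass"), n1
    # Borderline: sample the same number of animals again and decide
    lame += sum(rng.random() < true_prevalence for _ in range(n_wq - n1))
    return ("fail" if lame / n_wq > threshold else "pass"), n_wq
```

A clearly healthy herd stops after the first half-sample, which is where the average-sample-size savings reported above come from.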

  6. THE LIBERATION OF ARSENOSUGARS FROM MATRIX COMPONENTS IN DIFFICULT TO EXTRACT SEAFOOD SAMPLES UTILIZING TMAOH/ACETIC ACID SEQUENTIALLY IN A TWO-STAGE EXTRACTION PROCESS

    EPA Science Inventory

    Sample extraction is one of the most important steps in arsenic speciation analysis of solid dietary samples. One of the problem areas in this analysis is the partial extraction of arsenicals from seafood samples. The partial extraction allows the toxicity of the extracted arse...

  7. Performance review using sequential sampling and a practice computer.

    PubMed

    Difford, F

    1988-06-01

    The use of sequential sample analysis for repeated performance review is described with examples from several areas of practice. The value of a practice computer in providing a random sample from a complete population, evaluating the parameters of a sequential procedure, and producing a structured worksheet is discussed. It is suggested that sequential analysis has advantages over conventional sampling in the area of performance review in general practice.

  8. Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.

    PubMed

    Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C

    2017-04-01

    The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, in two seasons with cotton cultivar FM 993. A 10,000-m2 area of cotton was subdivided into 100 plots of 10 by 10 m, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of squares attacked. A 5% security level was adopted for the elaboration of the sequential sampling plan. The type I and type II error rates used were 0.05, as recommended for studies with insects. The fitting of frequency distributions was divided into two phases: the model that best fit the data was the negative binomial distribution up to 85 DAE (Phase I), and thereafter the best fit was the Poisson distribution (Phase II). The equations that define the decision-making for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicated that the maximum number of sample units expected for decision-making is ∼39 and 31 samples for Phases I and II, respectively. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
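
The decision lines reported in the abstract translate directly into a stop/continue rule; the helper below simply evaluates S0 and S1 for the cumulative number of attacked squares after N sample units:

```python
def sprt_decision(cum_attacked, n_samples, phase=1):
    """Stop/continue rule from the boll weevil plan's published decision
    lines (Phase I up to 85 DAE, Phase II afterwards)."""
    a, b = (5.1743, 0.5730) if phase == 1 else (4.2479, 0.5771)
    lower = -a + b * n_samples   # S0: stop, infestation below threshold
    upper = a + b * n_samples    # S1: stop, infestation above threshold
    if cum_attacked <= lower:
        return "below threshold"
    if cum_attacked >= upper:
        return "above threshold"
    return "continue sampling"
```

Sampling proceeds plant by plant until the cumulative count leaves the band between S0 and S1, or the maximum expected sample size (∼39 or 31 units) is reached.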

  9. Estimation After a Group Sequential Trial.

    PubMed

    Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert

    2015-10-01

    Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n_1, n_2, …, n_L}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased.
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

  10. Monitoring lipase/esterase activity by stopped flow in a sequential injection analysis system using p-nitrophenyl butyrate.

    PubMed

    Pliego, Jorge; Mateos, Juan Carlos; Rodriguez, Jorge; Valero, Francisco; Baeza, Mireia; Femat, Ricardo; Camacho, Rosa; Sandoval, Georgina; Herrera-López, Enrique J

    2015-01-27

    Lipases and esterases are biocatalysts used at the laboratory and industrial level. To obtain the maximum yield in a bioprocess, it is important to measure key variables, such as enzymatic activity. The conventional method for monitoring hydrolytic activity is to take out a sample from the bioreactor to be analyzed off-line at the laboratory. The disadvantage of this approach is the long time required to recover the information from the process, hindering the possibility to develop control systems. New strategies to monitor lipase/esterase activity are necessary. In this context and in the first approach, we proposed a lab-made sequential injection analysis system to analyze off-line samples from shake flasks. Lipase/esterase activity was determined using p-nitrophenyl butyrate as the substrate. The sequential injection analysis allowed us to measure the hydrolytic activity from a sample without dilution in a linear range from 0.05-1.60 U/mL, with the capability to reach sample dilutions up to 1000 times, a sampling frequency of five samples/h, with a kinetic reaction of 5 min and a relative standard deviation of 8.75%. The results are promising to monitor lipase/esterase activity in real time, in which optimization and control strategies can be designed.

  11. Monitoring Lipase/Esterase Activity by Stopped Flow in a Sequential Injection Analysis System Using p-Nitrophenyl Butyrate

    PubMed Central

    Pliego, Jorge; Mateos, Juan Carlos; Rodriguez, Jorge; Valero, Francisco; Baeza, Mireia; Femat, Ricardo; Camacho, Rosa; Sandoval, Georgina; Herrera-López, Enrique J.

    2015-01-01

    Lipases and esterases are biocatalysts used at the laboratory and industrial level. To obtain the maximum yield in a bioprocess, it is important to measure key variables, such as enzymatic activity. The conventional method for monitoring hydrolytic activity is to take out a sample from the bioreactor to be analyzed off-line at the laboratory. The disadvantage of this approach is the long time required to recover the information from the process, hindering the possibility to develop control systems. New strategies to monitor lipase/esterase activity are necessary. In this context and in the first approach, we proposed a lab-made sequential injection analysis system to analyze off-line samples from shake flasks. Lipase/esterase activity was determined using p-nitrophenyl butyrate as the substrate. The sequential injection analysis allowed us to measure the hydrolytic activity from a sample without dilution in a linear range from 0.05–1.60 U/mL, with the capability to reach sample dilutions up to 1000 times, a sampling frequency of five samples/h, with a kinetic reaction of 5 min and a relative standard deviation of 8.75%. The results are promising to monitor lipase/esterase activity in real time, in which optimization and control strategies can be designed. PMID:25633600

  12. Multi-point objective-oriented sequential sampling strategy for constrained robust design

    NASA Astrophysics Data System (ADS)

    Zhu, Ping; Zhang, Siliang; Chen, Wei

    2015-03-01

    Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.

  13. Sequential biases in accumulating evidence

    PubMed Central

    Huggins, Richard; Dogo, Samson Henry

    2015-01-01

    Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562

  14. Multi-species attributes as the condition for adaptive sampling of rare species using two-stage sequential sampling with an auxiliary variable

    USGS Publications Warehouse

    Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.

    2011-01-01

    Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part, because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling, but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV compared to SRS, and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). 
We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend that simulations tailored to the application of interest are highly useful for evaluating designs in preparation for sampling rare and clustered populations.

  15. How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation

    ERIC Educational Resources Information Center

    Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard

    2006-01-01

    Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…

  16. One-sided truncated sequential t-test: application to natural resource sampling

    Treesearch

    Gary W. Fowler; William G. O'Regan

    1974-01-01

    A new procedure for constructing one-sided truncated sequential t-tests and its application to natural resource sampling are described. Monte Carlo procedures were used to develop a series of one-sided truncated sequential t-tests and the associated approximations to the operating characteristic and average sample number functions. Different truncation points and...

  17. Identifying High-Rate Flows Based on Sequential Sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Yu; Fang, Binxing; Luo, Hao

    We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement and network security such as detection of distributed denial of service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possible millions of flows needs correspondingly large high speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost and processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on fixed sample size test (FSST) which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore the second novel method based on truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove the low-rate flows and identify the high-rate flows at the early stage which can reduce the memory cost and identification time respectively. According to the way to determine the parameters in TSPRT, two versions of TSPRT are proposed: TSPRT-M which is suitable when low memory cost is preferred and TSPRT-T which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement as compared to previously proposed methods.
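
The core sequential test behind TSPRT is Wald's SPRT on Bernoulli observations, truncated at a maximum sample size. The sketch below illustrates that test only; the flow rates, error levels, and truncation point are illustrative assumptions, and the paper's memory-management variants (TSPRT-M, TSPRT-T) are not modeled:

```python
import math

def sprt_bernoulli(samples, p0=0.01, p1=0.05, alpha=0.01, beta=0.01, max_n=1000):
    """Truncated SPRT: decide whether a flow's per-packet hit rate looks
    low (p0) or high (p1). `samples` is an iterable of 0/1 indicators
    (was this sampled packet from the flow of interest?)."""
    A = math.log((1.0 - beta) / alpha)   # upper boundary -> accept high-rate
    B = math.log(beta / (1.0 - alpha))   # lower boundary -> accept low-rate
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        llr += math.log(p1 / p0) if x else math.log((1.0 - p1) / (1.0 - p0))
        if llr >= A:
            return "high-rate", n
        if llr <= B:
            return "low-rate", n
        if n >= max_n:
            break
    return "undecided", n    # truncation: no boundary crossed
```

As the abstract notes, low-rate flows hit the lower boundary early and can be evicted, which is what saves memory relative to the fixed sample size test.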

  18. [Sequential sampling plans to Orthezia praelonga Douglas (Hemiptera: Sternorrhyncha, Ortheziidae) in citrus].

    PubMed

    Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T

    2007-01-01

    Sequential sampling is characterized by using samples of variable sizes, and has the advantage of reducing sampling time and costs compared to fixed-size sampling. To support adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio', at five, nine and 15 years of age. Twenty samplings were performed in the whole area of each stand by observing the presence or absence of scales on plants, with plots comprising ten plants. After observing that in all three stands the scale population was distributed according to the contagious model, fitting the Negative Binomial Distribution in most samplings, two sequential sampling plans were constructed according to the Sequential Likelihood Ratio Test (SLRT). To construct these plans an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to determine the need for control were 172 and 76 samples for stands with low and high infestation, respectively.

  19. A Fixed-Precision Sequential Sampling Plan for the Potato Tuberworm Moth, Phthorimaea operculella Zeller (Lepidoptera: Gelechidae), on Potato Cultivars.

    PubMed

    Shahbi, M; Rajabpour, A

    2017-08-01

    Phthorimaea operculella Zeller is an important pest of potato in Iran. Spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated by Taylor's power law and Iwao's patchiness. Results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed with Green's model at two precision levels, 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes varied from 1054 to 5083 and 1050 to 5100 leaves for the Arinda® and Sante® cultivars, respectively. Therefore, the optimum sample sizes for the cultivars, despite their different resistance levels, were not significantly different. According to the calculated stop lines, sampling must continue until the cumulative number of eggs + larvae reaches 15-16 or 96-101 individuals at the 0.25 or 0.1 precision levels, respectively. The performance of the sampling plan was validated by resampling analysis using Resampling for Validation of Sampling Plans software. The sampling plan provided in this study can be used to obtain a rapid estimate of the pest density with minimal effort.
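
Green's fixed-precision stop line follows from Taylor's power law s² = a·mᵇ and the target precision D (standard error over mean): stop once the cumulative count T_n satisfies T_n ≥ [(D²/a)·n^(b-1)]^(1/(b-2)). The sketch below uses illustrative Taylor coefficients, not the ones fitted in the study:

```python
def green_stop_line(n, a, b, precision=0.25):
    """Green's fixed-precision stop line from Taylor's power law
    s^2 = a * m^b: for b < 2, sampling stops once the cumulative count
    T_n meets or exceeds this value after n sample units."""
    return ((precision ** 2 / a) * n ** (b - 1)) ** (1.0 / (b - 2))

def sample_until_stop(counts, a=1.5, b=1.3, precision=0.25):
    """Walk through per-leaf counts and stop at Green's line; returns
    (sample size, cumulative count). Coefficients are illustrative."""
    total = 0
    for n, c in enumerate(counts, start=1):
        total += c
        if total >= green_stop_line(n, a, b, precision):
            return n, total
    return len(counts), total   # precision target never reached
```

The stop line decreases as n grows (for b < 2), so denser infestations cross it sooner, exactly the behavior behind the cumulative stop counts quoted in the abstract.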

  20. Sequential sampling and biorational chemistries for management of lepidopteran pests of vegetable amaranth in the Caribbean.

    PubMed

    Clarke-Harris, Dionne; Fleischer, Shelby J

    2003-06-01

    Although the production and economic importance of vegetable amaranth, Amaranthus viridis L. and A. dubius Mart. ex Thell., are increasing in diversified peri-urban farms in Jamaica, lepidopteran herbivory is common even during weekly pyrethroid applications. We developed and validated a sampling plan, and investigated insecticides with new modes of action, for a complex of five species (Pyralidae: Spoladea recurvalis (F.), Herpetogramma bipunctalis (F.); Noctuidae: Spodoptera exigua (Hubner), S. frugiperda (J. E. Smith), and S. eridania Stoll). Significant within-plant variation occurred with H. bipunctalis, and a six-leaf sample unit including leaves from the inner and outer whorl was selected to sample all species. Larval counts best fit a negative binomial distribution. We developed a sequential sampling plan using a threshold of one larva per sample unit and the fitted distribution with a k(c) of 0.645. When compared with a fixed plan of 25 plants on 32 farms, sequential sampling recommended the same management decision on 87.5%, additional samples on 9.4%, and inaccurate recommendations on 3.1%, while reducing sample size by 46%. Insecticide application frequency was reduced 33-60% when management decisions were based on sampled data compared with grower standards, with no effect on crop damage. Damage remained high or variable (10-46%) with pyrethroid applications. Lepidopteran control was dramatically improved with ecdysone agonists (tebufenozide) or microbial metabolites (spinosyns and emamectin benzoate). This work facilitates resistance management efforts concurrent with the introduction of newer modes of action for lepidopteran control in leafy vegetable production in the Caribbean.

  21. Multilevel Mixture Kalman Filter

    NASA Astrophysics Data System (ADS)

    Guo, Dong; Wang, Xiaodong; Chen, Rong

    2004-12-01

    The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with sampling from the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in a delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.
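
Plain sequential importance sampling, the building block the mixture Kalman filter marginalizes around, can be sketched on a toy model. The 1-D Gaussian state-space model and noise levels are illustrative assumptions; note that without resampling the weights degenerate over long runs, which is what resampling-based variants address:

```python
import numpy as np

rng = np.random.default_rng(1)

def sis(observations, n_particles=1000):
    """Plain sequential importance sampling (SIS) for a toy 1-D Gaussian
    model: weights are accumulated recursively and never resampled. With
    the state transition as proposal, the incremental weight is the
    observation likelihood p(y_t | x_t)."""
    x = rng.normal(0.0, 1.0, n_particles)
    log_w = np.zeros(n_particles)
    means = []
    for y in observations:
        x = x + rng.normal(0.0, 0.5, n_particles)    # propose from the prior
        log_w += -0.5 * ((y - x) / 0.8) ** 2         # accumulate log-weights
        w = np.exp(log_w - log_w.max())              # normalize stably
        w /= w.sum()
        means.append(float(np.dot(w, x)))
    return means
```

Working in log-weights and subtracting the maximum before exponentiating keeps the recursion numerically stable.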

  22. An adaptive two-stage sequential design for sampling rare and clustered populations

    USGS Publications Warehouse

    Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.

    2008-01-01

    How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.

  23. Evaluation of sequential extraction procedures for soluble and insoluble hexavalent chromium compounds in workplace air samples.

    PubMed

    Ashley, Kevin; Applegate, Gregory T; Marcy, A Dale; Drake, Pamela L; Pierce, Paul A; Carabin, Nathalie; Demange, Martine

    2009-02-01

    Because toxicities may differ for Cr(VI) compounds of varying solubility, some countries and organizations have promulgated different occupational exposure limits (OELs) for soluble and insoluble hexavalent chromium (Cr(VI)) compounds, and analytical methods are needed to determine these species in workplace air samples. To address this need, international standard methods ASTM D6832 and ISO 16740 have been published that describe sequential extraction techniques for soluble and insoluble Cr(VI) in samples collected from occupational settings. However, no published performance data were previously available for these Cr(VI) sequential extraction procedures. In this work, the sequential extraction methods outlined in the relevant international standards were investigated. The procedures tested involved the use of either deionized water or an ammonium sulfate/ammonium hydroxide buffer solution to target soluble Cr(VI) species. This was followed by extraction in a sodium carbonate/sodium hydroxide buffer solution to dissolve insoluble Cr(VI) compounds. Three-step sequential extraction with (1) water, (2) sulfate buffer and (3) carbonate buffer was also investigated. Sequential extractions were carried out on spiked samples of soluble, sparingly soluble and insoluble Cr(VI) compounds, and analyses were then generally carried out by using the diphenylcarbazide method. Similar experiments were performed on paint pigment samples and on airborne particulate filter samples collected from stainless steel welding. Potential interferences from soluble and insoluble Cr(III) compounds, as well as from Fe(II), were investigated. Interferences from Cr(III) species were generally absent, while the presence of Fe(II) resulted in low Cr(VI) recoveries. Two-step sequential extraction of spiked samples with (first) either water or sulfate buffer, and then carbonate buffer, yielded quantitative recoveries of soluble Cr(VI) and insoluble Cr(VI), respectively. 
Three-step sequential extraction gave excessively high recoveries of soluble Cr(VI), low recoveries of sparingly soluble Cr(VI), and quantitative recoveries of insoluble Cr(VI). Experiments on paint pigment samples using two-step extraction with water and carbonate buffer yielded varying percentages of relative fractions of soluble and insoluble Cr(VI). Sequential extractions of stainless steel welding fume air filter samples demonstrated the predominance of soluble Cr(VI) compounds in such samples. The performance data obtained in this work support the Cr(VI) sequential extraction procedures described in the international standards.

  4. A Bayesian sequential design using alpha spending function to control type I error.

    PubMed

    Zhu, Han; Yu, Qingzhao

    2017-10-01

    In this article we propose a Bayesian sequential design that uses alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design under different alpha spending functions through simulations. We also compare the power of the proposed method with that of a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than the traditional Bayesian sequential design, which sets equal critical values for all interim analyses. Among the alpha spending functions compared, the O'Brien-Fleming function has the largest power and is the most conservative, in the sense that at the same sample size the null hypothesis is least likely to be rejected at an early stage of the trial. Finally, we show that adding a futility-stopping step to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
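
    The alpha spending functions discussed above are easy to evaluate numerically. Below is a minimal sketch of the Lan-DeMets O'Brien-Fleming-type and Pocock-type spending functions (function names are illustrative; the bisection inverse CDF is merely adequate for demonstration):

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def norm_ppf(p):
    """Inverse standard normal CDF by bisection (adequate for illustration)."""
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def obrien_fleming_spend(t, alpha=0.05):
    """Cumulative type I error spent by information fraction t in (0, 1]:
    alpha*(t) = 2 - 2 * Phi(z_{alpha/2} / sqrt(t))."""
    z = norm_ppf(1.0 - alpha / 2.0)
    return 2.0 - 2.0 * norm_cdf(z / math.sqrt(t))

def pocock_spend(t, alpha=0.05):
    """Pocock-type spending function: alpha * ln(1 + (e - 1) * t)."""
    return alpha * math.log(1.0 + (math.e - 1.0) * t)
```

    Both functions spend the full alpha at t = 1, but the O'Brien-Fleming shape spends almost nothing early (convex), while the Pocock shape spends aggressively early (concave), which is exactly the distinction drawn for post-market safety surveillance above.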

  5. Sequential sampling of ribes populations in the control of white pine blister rust (Cronartium ribicola Fischer) in California

    Treesearch

    Harold R. Offord

    1966-01-01

    Sequential sampling based on a negative binomial distribution of ribes populations required less than half the time taken by regular systematic line transect sampling in a comparison test. It gave the same control decision as the regular method in 9 of 13 field trials. A computer program that permits sequential plans to be built readily for other white pine regions is...
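
    As an illustration of how a sequential plan of this kind reaches a control decision, here is a sketch of Wald's SPRT applied to negative binomial plot counts (the dispersion parameter k is assumed known; the means, k, and error rates below are illustrative, not values from Offord's plan):

```python
import math

def sprt_negbin(counts, mu0, mu1, k, alpha=0.05, beta=0.05):
    """Wald's SPRT for negative binomial counts with known dispersion k.

    Tests H0: mean = mu0 against H1: mean = mu1 and returns
    'reject H0', 'accept H0', or 'continue' after the given sequence
    of per-plot counts.
    """
    upper = math.log((1.0 - beta) / alpha)   # cross above: reject H0
    lower = math.log(beta / (1.0 - alpha))   # cross below: accept H0
    llr = 0.0
    for x in counts:
        # per-observation log-likelihood ratio for NB(mu1, k) vs NB(mu0, k)
        llr += x * math.log(mu1 * (k + mu0) / (mu0 * (k + mu1)))
        llr += k * math.log((k + mu0) / (k + mu1))
        if llr >= upper:
            return "reject H0"
        if llr <= lower:
            return "accept H0"
    return "continue"
```

    Sampling stops as soon as a boundary is crossed, which is why such plans can need far fewer plots than a fixed-size transect survey.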

  6. The subtyping of primary aldosteronism by adrenal vein sampling: sequential blood sampling causes factitious lateralization.

    PubMed

    Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Miotto, Diego; Seccia, Teresa M; Rossi, Gian Paolo

    2018-02-01

    The pulsatile secretion of adrenocortical hormones and a stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity and also the assessment of lateralization when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of the lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously bilaterally when starting AVS (t-15) and 15 min after (t0) with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L) created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with the sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time points. However, overall, the lateralization index determined simultaneously at t0 provided a more accurate identification of APA than the simulated sequential lateralization index R ⇒ L (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of the lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15 and between the simultaneous t0 and sequential techniques was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-sides gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.
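
    For reference, the lateralization index compared above is the cortisol-corrected aldosterone ratio between the dominant and non-dominant sides. A minimal sketch (function name is illustrative; the cutoff remark reflects commonly cited values, not this study's protocol):

```python
def lateralization_index(aldo_left, cort_left, aldo_right, cort_right):
    """Cortisol-corrected aldosterone ratio, dominant over non-dominant side.

    Values above a cutoff (commonly in the 2-4 range in the AVS literature)
    are taken to suggest unilateral aldosterone excess.
    """
    ratio_left = aldo_left / cort_left    # aldosterone/cortisol, left vein
    ratio_right = aldo_right / cort_right  # aldosterone/cortisol, right vein
    dominant = max(ratio_left, ratio_right)
    nondominant = min(ratio_left, ratio_right)
    return dominant / nondominant
```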

  7. Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates

    PubMed Central

    Bartroff, Jay; Song, Jinlin

    2014-01-01

    This paper addresses the following general scenario: a scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but may also be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and requires only sequential test statistics that control the error rates for each stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows, in a simulation study, simultaneous savings in expected sample size and less conservative error control relative to fixed-sample, sequential Bonferroni, and other recently proposed sequential procedures. PMID:25092948
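
    The fixed-sample Holm (1979) step-down procedure that inspired the sequential version can be sketched in a few lines (a generic textbook illustration, not the authors' sequential algorithm):

```python
def holm(pvalues, alpha=0.05):
    """Holm step-down procedure; returns a reject/retain flag per hypothesis.

    Sort p-values ascending; compare the k-th smallest to alpha/(m - k + 1),
    and stop at the first failure (all remaining hypotheses are retained).
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values are retained
    return reject
```

    Holm controls the FWER at level alpha under arbitrary dependence among the tests, which is why it is a natural starting point for the correlated-streams setting above.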

  8. Exact Tests for the Rasch Model via Sequential Importance Sampling

    ERIC Educational Resources Information Center

    Chen, Yuguo; Small, Dylan

    2005-01-01

    Rasch proposed an exact conditional inference approach to testing his model but never implemented it because it involves the calculation of a complicated probability. This paper furthers Rasch's approach by (1) providing an efficient Monte Carlo methodology for accurately approximating the required probability and (2) illustrating the usefulness…

  9. Factors affecting ANKOM™ fiber analysis of forage and browse varying in condensed tannin concentration.

    PubMed

    Terrill, Thomas H; Wolfe, Richard M; Muir, James P

    2010-12-01

    Browse species containing condensed tannins (CTs) are an important source of nutrition for grazing/browsing livestock and wildlife in many parts of the world, but information on fiber concentration and CT-fiber interactions for these plants is lacking. Ten forage or browse species with a range of CT concentrations were oven dried and freeze dried and then analyzed for ash-corrected neutral detergent fiber (NDFom) and corrected acid detergent fiber (ADFom) using separate samples (ADFSEP) and sequential NDF-ADF analysis (ADFSEQ) with the ANKOM™ fiber analysis system. The ADFSEP and ADFSEQ residues were then analyzed for nitrogen (N) concentration. Oven drying increased (P < 0.05) fiber concentrations with some species, but not with others. For high-CT forage and browse species, ADFSEP concentrations were greater (P < 0.05) than NDFom values and approximately double the ADFSEQ values. Nitrogen concentration was greater (P < 0.05) in ADFSEP than ADFSEQ residues, likely due to precipitation with CTs. Sequential NDF-ADF analysis gave more realistic values and appeared to remove most of the fiber residue contaminants in CT forage samples. Freeze drying samples with sequential NDF-ADF analysis is recommended in the ANKOM™ fiber analysis system with CT-containing forage and browse species. Copyright © 2010 Society of Chemical Industry.

  10. A Simulation Approach to Assessing Sampling Strategies for Insect Pests: An Example with the Balsam Gall Midge

    PubMed Central

    Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.

    2013-01-01

    Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
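
    The pre-sampling simulation idea (resample existing field data to check which candidate sample sizes already estimate the mean acceptably) can be sketched as follows; the function, tolerances, and size grid are illustrative, not the authors' software:

```python
import random
import statistics

def smallest_adequate_n(presample, n_grid, rel_tol=0.1, coverage=0.9,
                        n_sim=2000, seed=1):
    """Return the smallest sample size in n_grid whose resampled means
    fall within rel_tol of the pre-sample mean in at least `coverage`
    of simulated surveys; None if no size on the grid qualifies."""
    rng = random.Random(seed)
    true_mean = statistics.fmean(presample)
    for n in n_grid:
        hits = 0
        for _ in range(n_sim):
            # simulate one survey of n trees by resampling with replacement
            mean_n = statistics.fmean(rng.choices(presample, k=n))
            if abs(mean_n - true_mean) <= rel_tol * true_mean:
                hits += 1
        if hits / n_sim >= coverage:
            return n
    return None
```

    Resampling with replacement ignores spatial autocorrelation, so as the abstract notes, transect-style plans would need the resampling step replaced by draws of contiguous runs of trees.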

  11. Orphan therapies: making best use of postmarket data.

    PubMed

    Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling

    2014-08-01

    Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier than or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at a rate of one event per 100 person-years or higher. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.

  12. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    PubMed Central

    Cao, Youfang; Liang, Jie

    2013-01-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscapes. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm, the adaptively biased sequential importance sampling (ABSIS) method, for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the recently developed finite buffer discrete chemical master equation (dCME) method to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems.
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscapes. PMID:23862966
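
    The importance-sampling identity at the heart of such weighted algorithms (bias the sampling distribution toward the rare event, then correct each sampled path with a likelihood-ratio weight) can be illustrated on a toy binomial tail probability. This is a generic sketch, not the ABSIS algorithm itself:

```python
import math
import random

def rare_event_prob_is(n=10, p=0.3, b=8, q=0.8, n_samples=20000, seed=7):
    """Estimate P(S >= b) for S ~ Binomial(n, p) by importance sampling:
    simulate under a tilted success probability q, and reweight each
    sampled path by its likelihood ratio under the nominal distribution."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        xs = [1 if rng.random() < q else 0 for _ in range(n)]
        s = sum(xs)
        if s >= b:
            # likelihood ratio of nominal (p) vs tilted (q) distribution
            w = (p / q) ** s * ((1 - p) / (1 - q)) ** (n - s)
            total += w
    return total / n_samples
```

    With q chosen so that the event is no longer rare under simulation, the weighted estimator stays unbiased while its variance drops by orders of magnitude relative to naive Monte Carlo.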

  13. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method

    NASA Astrophysics Data System (ADS)

    Cao, Youfang; Liang, Jie

    2013-07-01

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscapes. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm, the adaptively biased sequential importance sampling (ABSIS) method, for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the recently developed finite buffer discrete chemical master equation (dCME) method to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems.
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscapes.

  14. Adaptively biased sequential importance sampling for rare events in reaction networks with comparison to exact solutions from finite buffer dCME method.

    PubMed

    Cao, Youfang; Liang, Jie

    2013-07-14

    Critical events that occur rarely in biological processes are of great importance, but are challenging to study using Monte Carlo simulation. By introducing biases to reaction selection and reaction rates, weighted stochastic simulation algorithms based on importance sampling allow rare events to be sampled more effectively. However, existing methods do not address the important issue of barrier crossing, which often arises from multistable networks and systems with complex probability landscapes. In addition, the proliferation of parameters and the associated computing cost pose significant problems. Here we introduce a general theoretical framework for obtaining optimized biases in sampling individual reactions for estimating probabilities of rare events. We further describe a practical algorithm, the adaptively biased sequential importance sampling (ABSIS) method, for efficient probability estimation. By adopting a look-ahead strategy and by enumerating short paths from the current state, we estimate the reaction-specific and state-specific forward and backward moving probabilities of the system, which are then used to bias reaction selections. The ABSIS algorithm can automatically detect barrier-crossing regions, and can adjust bias adaptively at different steps of the sampling process, with bias determined by the outcome of exhaustively generated short paths. In addition, there are only two bias parameters to be determined, regardless of the number of reactions and the complexity of the network. We have applied the ABSIS method to four biochemical networks: the birth-death process, the reversible isomerization, the bistable Schlögl model, and the enzymatic futile cycle model. For comparison, we have also applied the recently developed finite buffer discrete chemical master equation (dCME) method to obtain exact numerical solutions of the underlying discrete chemical master equations of these problems.
This allows us to assess sampling results objectively by comparing simulation results with true answers. Overall, ABSIS can accurately and efficiently estimate rare event probabilities for all examples, often with smaller variance than other importance sampling algorithms. The ABSIS method is general and can be applied to study rare events of other stochastic networks with complex probability landscapes.

  15. Asymptotic Properties of the Sequential Empirical ROC, PPV and NPV Curves Under Case-Control Sampling.

    PubMed

    Koopmeiners, Joseph S; Feng, Ziding

    2011-01-01

    The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves.
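
    The empirical ROC curve estimated under case-control sampling can be computed directly from the two samples. A minimal sketch, with the trapezoidal AUC included as a common summary (function names are illustrative):

```python
def empirical_roc(cases, controls):
    """Empirical ROC points: sweep thresholds from high to low, calling a
    biomarker value 'positive' when it is >= the threshold, and record
    (false positive rate, true positive rate) at each step."""
    thresholds = sorted(set(cases) | set(controls), reverse=True)
    points = [(0.0, 0.0)]
    for t in thresholds:
        tpr = sum(x >= t for x in cases) / len(cases)
        fpr = sum(x >= t for x in controls) / len(controls)
        points.append((fpr, tpr))
    return points

def auc(points):
    """Trapezoidal area under an empirical ROC curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

    The empirical PPV and NPV curves studied in the paper are built from the same ingredients (the two empirical CDFs) plus an assumed disease prevalence, which case-control sampling does not itself identify.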

  16. Asymptotic Properties of the Sequential Empirical ROC, PPV and NPV Curves Under Case-Control Sampling

    PubMed Central

    Koopmeiners, Joseph S.; Feng, Ziding

    2013-01-01

    The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves. PMID:24039313

  17. Sequential extraction protocol for organic matter from soils and sediments using high resolution mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tfaily, Malak M.; Chu, Rosalie K.; Toyoda, Jason

    A vast number of organic compounds are present in soil organic matter (SOM) and play an important role in the terrestrial carbon cycle, facilitate interactions between organisms, and represent a sink for atmospheric CO2. The diversity of different SOM compounds and their molecular characteristics is a function of the organic source material and biogeochemical history. By understanding how SOM composition changes with sources and the processes by which it is biogeochemically altered in different terrestrial ecosystems, it may be possible to predict nutrient and carbon cycling, responses to system perturbations, and the impact climate change will have on SOM composition. In this study, a sequential chemical extraction procedure was developed to reveal the diversity of organic matter (OM) in different ecosystems and was compared to the previously published protocol using parallel solvent extraction (PSE). We compared six extraction methods using three sample types, peat soil, spruce forest soil and river sediment, so as to select the best method for extracting a representative fraction of organic matter from soils and sediments from a wide range of ecosystems. We estimated the extraction yield of dissolved organic carbon (DOC) by total organic carbon analysis, and measured the composition of extracted OM using high resolution mass spectrometry. This study showed that OM composition depends primarily on soil and sediment characteristics. Two sequential extraction protocols, progressing from polar to non-polar solvents, were found to provide the highest number and diversity of organic compounds extracted from the soil and sediments. Water (H2O) is the first solvent used in both protocols, followed by either co-extraction with a methanol-chloroform (MeOH-CHCl3) mixture, or acetonitrile (ACN) and CHCl3 sequentially. The sequential extraction protocol developed in this study offers improved sensitivity and requires less sample compared to the PSE workflow, where a new sample is used for each solvent type. Furthermore, a comparison of SOM composition from the different sample types revealed that our sequential protocol allows for ecosystem comparisons based on the diversity of compounds present, which in turn could provide new insights about the source and processing of organic compounds in different soil and sediment types.

  18. Sequential extraction protocol for organic matter from soils and sediments using high resolution mass spectrometry.

    PubMed

    Tfaily, Malak M; Chu, Rosalie K; Toyoda, Jason; Tolić, Nikola; Robinson, Errol W; Paša-Tolić, Ljiljana; Hess, Nancy J

    2017-06-15

    A vast number of organic compounds are present in soil organic matter (SOM) and play an important role in the terrestrial carbon cycle, facilitate interactions between organisms, and represent a sink for atmospheric CO2. The diversity of different SOM compounds and their molecular characteristics is a function of the organic source material and biogeochemical history. By understanding how SOM composition changes with sources and the processes by which it is biogeochemically altered in different terrestrial ecosystems, it may be possible to predict nutrient and carbon cycling, responses to system perturbations, and the impact climate change will have on SOM composition. In this study, a sequential chemical extraction procedure was developed to reveal the diversity of organic matter (OM) in different ecosystems and was compared to the previously published protocol using parallel solvent extraction (PSE). We compared six extraction methods using three sample types, peat soil, spruce forest soil and river sediment, so as to select the best method for extracting a representative fraction of organic matter from soils and sediments from a wide range of ecosystems. We estimated the extraction yield of dissolved organic carbon (DOC) by total organic carbon analysis, and measured the composition of extracted OM using high resolution mass spectrometry. This study showed that OM composition depends primarily on soil and sediment characteristics. Two sequential extraction protocols, progressing from polar to non-polar solvents, were found to provide the highest number and diversity of organic compounds extracted from the soil and sediments. Water (H2O) is the first solvent used in both protocols, followed by either co-extraction with a methanol-chloroform (MeOH-CHCl3) mixture, or acetonitrile (ACN) and CHCl3 sequentially. The sequential extraction protocol developed in this study offers improved sensitivity and requires less sample compared to the PSE workflow, where a new sample is used for each solvent type. Furthermore, a comparison of SOM composition from the different sample types revealed that our sequential protocol allows for ecosystem comparisons based on the diversity of compounds present, which in turn could provide new insights about the source and processing of organic compounds in different soil and sediment types. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Characteristics of sequential swallowing of liquids in young and elderly adults: an integrative review.

    PubMed

    Veiga, Helena Perrut; Bianchini, Esther Mandelbaum Gonçalves

    2012-01-01

    To perform an integrative review of studies on sequential swallowing of liquids, characterizing the methodology of the studies and their most important findings in young and elderly adults. The literature in English and Portuguese on the PubMed, LILACS, SciELO and MEDLINE databases from the past twenty years, available in full, was reviewed using the following terms in various combinations: sequential swallowing, swallowing, dysphagia, cup, straw. Research articles with a methodological focus on the characterization of sequential liquid swallowing by young and/or elderly adults, regardless of health condition, were included, excluding studies addressing only the esophageal phase. The following research indicators were applied: objectives; number and gender of participants; age group; amount of liquid offered; intake instruction; utensil used; methods; and main findings. Eighteen studies met the established criteria. The articles were categorized according to sample characterization and methodology regarding volume intake, utensil used and types of exams. Most studies investigated only healthy individuals with no swallowing complaints. Subjects were given different instructions for intake of the full volume: in the usual manner, continually, or as rapidly as possible. The findings on the characterization of sequential swallowing varied and were described in accordance with the objectives of each study. There was great variability in the methodology employed to characterize sequential swallowing. Some findings are not comparable, sequential swallowing is not examined in most swallowing protocols, and there is no consensus on the influence of the utensil.

  20. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    NASA Astrophysics Data System (ADS)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of the high-fidelity model. Second, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach against three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.

  1. Verification of hypergraph states

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki; Takeuchi, Yuki; Hayashi, Masahito

    2017-12-01

    Hypergraph states are generalizations of graph states where controlled-Z gates on edges are replaced with generalized controlled-Z gates on hyperedges. Hypergraph states have several advantages over graph states. For example, certain hypergraph states, such as the Union Jack states, are universal resource states for measurement-based quantum computing with only Pauli measurements, while graph state measurement-based quantum computing needs non-Clifford basis measurements. Furthermore, it is impossible to classically efficiently sample measurement results on hypergraph states unless the polynomial hierarchy collapses to the third level. Although several protocols have been proposed to verify graph states with only sequential single-qubit Pauli measurements, there was no verification method for hypergraph states. In this paper, we propose a method for verifying a certain class of hypergraph states with only sequential single-qubit Pauli measurements. Importantly, no i.i.d. property of samples is assumed in our protocol: any artificial entanglement among samples cannot fool the verifier. As applications of our protocol, we consider verified blind quantum computing with hypergraph states, and quantum computational supremacy demonstrations with hypergraph states.

  2. Explanation of Normative Declines in Parents' Knowledge about Their Adolescent Children

    ERIC Educational Resources Information Center

    Masche, J. Gowert

    2010-01-01

    This study aimed to explain why parental knowledge of adolescents' whereabouts declines with age. Such an investigation is important because previous studies have established an association between behavior problems and low levels of parental knowledge. A time-sequential sample comprising 2415 adolescents aged 13-18 years was investigated on five…

  3. A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.

    PubMed

    Yu, Qingzhao; Zhu, Lin; Zhu, Han

    2017-11-01

    Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently assign newly recruited patients to different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence on the design of changing the prior distributions. Simulation studies are used to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or require a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size. Copyright © 2017 John Wiley & Sons, Ltd.
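
    The "randomization rate chosen to achieve minimum variance" idea can be illustrated with Neyman allocation for a two-arm comparison of means: assigning a fraction sigma1/(sigma1 + sigma2) of patients to arm 1 minimizes the variance of the estimated treatment difference. A minimal sketch (illustrative function names and numbers, not the authors' Bayesian algorithm):

```python
def neyman_allocation(sigma1: float, sigma2: float) -> float:
    """Fraction of patients assigned to arm 1 that minimizes
    Var(xbar1 - xbar2) = sigma1^2/n1 + sigma2^2/n2 for a fixed total n."""
    return sigma1 / (sigma1 + sigma2)

def var_of_difference(sigma1: float, sigma2: float,
                      n_total: float, frac_arm1: float) -> float:
    """Variance of the estimated treatment difference under a given split."""
    n1 = n_total * frac_arm1
    n2 = n_total * (1.0 - frac_arm1)
    return sigma1 ** 2 / n1 + sigma2 ** 2 / n2

# Equal variances -> balanced 1:1 randomization
assert abs(neyman_allocation(1.0, 1.0) - 0.5) < 1e-12

# Unequal variances -> the noisier arm receives more patients, and the
# optimal split yields a smaller variance than the balanced design
frac = neyman_allocation(2.0, 1.0)             # 2/3 of patients to arm 1
v_opt = var_of_difference(2.0, 1.0, 90, frac)
v_bal = var_of_difference(2.0, 1.0, 90, 0.5)
assert v_opt < v_bal
```

    In an adaptive design these variances would themselves be updated from the accumulating data, shifting the randomization rate as the trial proceeds.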

  4. Sequential elution process

    DOEpatents

    Kingsley, I.S.

    1987-01-06

    A process and apparatus are disclosed for the separation of complex mixtures of carbonaceous material by sequential elution with successively stronger solvents. In the process, a column containing glass beads is maintained in a fluidized state by a rapidly flowing stream of a weak solvent, and the sample is injected into this flowing stream such that a portion of the sample is dissolved therein and the remainder of the sample is precipitated therein and collected as a uniform deposit on the glass beads. Successively stronger solvents are then passed through the column to sequentially elute less soluble materials. 1 fig.

  5. Backloading in the sequential lineup prevents within-lineup criterion shifts that undermine eyewitness identification performance.

    PubMed

    Horry, Ruth; Palmer, Matthew A; Brewer, Neil

    2012-12-01

    Although the sequential lineup has been proposed as a means of protecting innocent suspects from mistaken identification, little is known about the importance of various aspects of the procedure. One potentially important detail is that witnesses should not know how many people are in the lineup. This is sometimes achieved by backloading the lineup so that witnesses believe that the lineup includes more photographs than it actually does. This study aimed to investigate the effect of backloading on witness decision making. A large sample (N = 833) of community-dwelling adults viewed a live "culprit" and then saw a target-present or target-absent sequential lineup. All lineups included 6 individuals, but the participants were told that the lineup included 6 photographs (nonbackloaded condition) or that the lineup included 12 or 30 photographs (backloaded conditions). The suspect either appeared early (Position 2) or late (Position 6) in the lineup. Innocent suspects placed in Position 6 were chosen more frequently by participants in the nonbackloaded condition than in either backloaded condition. Additionally, when the lineup was not backloaded, foil identification rates increased from Positions 3 to 5, suggesting a gradually shifting response criterion. The results suggest that backloading encourages participants to adopt a more conservative response criterion, and it reduces or eliminates the tendency for the criterion to become more lenient over the course of the lineup. The results underscore the absolute importance of ensuring that witnesses who view sequential lineups are unaware of the number of individuals to be seen.

  6. Fixed-Precision Sequential Sampling Plans for Estimating Alfalfa Caterpillar, Colias lesbia, Egg Density in Alfalfa, Medicago sativa, Fields in Córdoba, Argentina

    PubMed Central

    Serra, Gerardo V.; Porta, Norma C. La; Avalos, Susana; Mazzuferi, Vilma

    2013-01-01

    The alfalfa caterpillar, Colias lesbia (Fabricius) (Lepidoptera: Pieridae), is a major pest of alfalfa, Medicago sativa L. (Fabales: Fabaceae), crops in Argentina. Its management is based mainly on chemical control of larvae whenever the larvae exceed the action threshold. To develop and validate fixed-precision sequential sampling plans, an intensive sampling programme for C. lesbia eggs was carried out in two alfalfa plots located in the Province of Córdoba, Argentina, from 1999 to 2002. Using Resampling for Validation of Sampling Plans software, 12 additional independent data sets were used to validate the sequential sampling plan with precision levels of 0.10 and 0.25 (SE/mean), respectively. For a range of mean densities of 0.10 to 8.35 eggs/sample, an average sample size of only 27 and 26 sample units was required to achieve a desired precision level of 0.25 for the sampling plans of Green and Kuno, respectively. As the precision level was increased to 0.10, average sample size increased to 161 and 157 sample units for the sampling plans of Green and Kuno, respectively. We recommend using Green's sequential sampling plan because it is less sensitive to changes in egg density. These sampling plans are a valuable tool for researchers to study population dynamics and to evaluate integrated pest management strategies. PMID:23909840
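
    Green's plan referenced above stops sampling once the running total of eggs crosses a boundary derived from Taylor's power law (s^2 = a * m^b) and the desired precision D = SE/mean. A minimal sketch with illustrative parameters (not the values fitted to the Córdoba data sets):

```python
def green_stop_line(n: int, a: float, b: float, precision: float) -> float:
    """Stopping boundary on the cumulative count T_n for Green's
    fixed-precision sequential sampling plan.  From D = SE/mean and
    Taylor's power law s^2 = a * m^b, sampling stops once
    T_n >= n**((b - 1)/(b - 2)) * (D**2 / a)**(1 / (b - 2))."""
    if b == 2.0:
        raise ValueError("Green's boundary is undefined at b = 2")
    return n ** ((b - 1.0) / (b - 2.0)) * (precision ** 2 / a) ** (1.0 / (b - 2.0))

def sample_until_stop(counts, a, b, precision):
    """Walk through per-sample-unit counts and return (n, total) at the
    first sample where the running total crosses the boundary."""
    total = 0.0
    for n, c in enumerate(counts, start=1):
        total += c
        if total >= green_stop_line(n, a, b, precision):
            return n, total
    return len(counts), total  # boundary never crossed within the data

# With a = 1 and b = 1.5 the boundary at D = 0.25 is 256 / n, so a field
# averaging 5 eggs per sample unit can stop after only 8 sample units.
assert abs(green_stop_line(16, 1.0, 1.5, 0.25) - 16.0) < 1e-9
assert sample_until_stop([5] * 10, 1.0, 1.5, 0.25) == (8, 40.0)
```

    The boundary falls as n grows, which is why denser populations reach the desired precision with far fewer sample units.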

  7. Linear Algebra and Sequential Importance Sampling for Network Reliability

    DTIC Science & Technology

    2011-12-01

    The first test case is an Erdős-Rényi graph with 100 vertices and 150 edges. Figure 1 depicts the relative variance of the three algorithms. [Figure 1 caption: Relative variance of various algorithms on an Erdős-Rényi graph, 100 vertices, 250 edges. Key: solid = TOP-DOWN algorithm.]
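
    As a generic illustration of sequential importance sampling for network reliability (a sketch of the general technique, not the algorithms compared in this report), edge states can be drawn one at a time from a failure-biased proposal while the likelihood ratio is accumulated sequentially:

```python
import random

def st_connected(n_vertices, up_edges, s, t):
    """Depth-first search over the surviving edges."""
    adj = {v: [] for v in range(n_vertices)}
    for u, v in up_edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = {s}, [s]
    while stack:
        u = stack.pop()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def sis_unreliability(edges, p_up, q_up, s, t, n_vertices, n_samples, rng):
    """Estimate P(s and t disconnected) when each edge is up independently
    with probability p_up.  Edge states are drawn sequentially from a
    failure-biased proposal q_up < p_up; the importance weight is the
    likelihood ratio accumulated edge by edge."""
    total = 0.0
    for _ in range(n_samples):
        weight, up = 1.0, []
        for e in edges:
            if rng.random() < q_up:          # edge survives under proposal
                up.append(e)
                weight *= p_up / q_up
            else:                            # edge fails under proposal
                weight *= (1.0 - p_up) / (1.0 - q_up)
        if not st_connected(n_vertices, up, s, t):
            total += weight
    return total / n_samples

# Triangle graph: vertices 0 and 2 are disconnected iff edge (0,2) is down
# and at least one of (0,1), (1,2) is down; exact value at p_up = 0.5 is 0.375.
rng = random.Random(7)
edges = [(0, 1), (1, 2), (0, 2)]
est = sis_unreliability(edges, 0.5, 0.3, 0, 2, 3, 20000, rng)
assert abs(est - 0.375) < 0.05
```

    Biasing edges toward failure concentrates samples on the rare disconnection event; the likelihood-ratio weights keep the estimator unbiased.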

  8. Importance and Effectiveness of Student Health Services at a South Texas University

    ERIC Educational Resources Information Center

    McCaig, Marilyn M.

    2013-01-01

    The study examined the health needs of students at a south Texas university and documented the utility of the student health center. The descriptive study employed a mixed methods explanatory sequential design (ESD). The non-probability sample consisted of 140 students who utilized the university's health center during the period of March 23-30,…

  9. Sequential air sampler system : its use by the Virginia Department of Highways & Transportation.

    DOT National Transportation Integrated Search

    1975-01-01

    The Department of Highways & Transportation needs an economical and efficient air quality sampling system for meeting requirements on air monitoring for proposed projects located In critical areas. Two sequential air sampling systems, the ERAI and th...

  10. Manganese speciation of laboratory-generated welding fumes

    PubMed Central

    Andrews, Ronnee N.; Keane, Michael; Hanley, Kevin W.; Feng, H. Amy; Ashley, Kevin

    2015-01-01

    The objective of this laboratory study was to identify and measure manganese (Mn) fractions in chamber-generated welding fumes (WF) and to evaluate and compare the results from a sequential extraction procedure for Mn fractions with that of an acid digestion procedure for measurement of total, elemental Mn. To prepare Mn-containing particulate matter from representative welding processes, a welding system was operated in short circuit gas metal arc welding (GMAW) mode using both stainless steel (SS) and mild carbon steel (MCS) and also with flux cored arc welding (FCAW) and shielded metal arc welding (SMAW) using MCS. Generated WF samples were collected onto polycarbonate filters before homogenization, weighing and storage in scintillation vials. The extraction procedure consisted of four sequential steps to measure various Mn fractions based upon selective solubility: (1) soluble Mn dissolved in 0.01 M ammonium acetate; (2) Mn (0,II) dissolved in 25 % (v/v) acetic acid; (3) Mn (III,IV) dissolved in 0.5% (w/v) hydroxylamine hydrochloride in 25% (v/v) acetic acid; and (4) insoluble Mn extracted with concentrated hydrochloric and nitric acids. After sample treatment, the four fractions were analyzed for Mn by inductively coupled plasma-atomic emission spectroscopy (ICP-AES). WF from GMAW and FCAW showed similar distributions of Mn species, with the largest concentrations of Mn detected in the Mn (0,II) and insoluble Mn fractions. On the other hand, the majority of the Mn content of SMAW fume was detected as Mn (III,IV). Although the concentration of Mn measured from summation of the four sequential steps was statistically significantly different from that measured from the hot block dissolution method for total Mn, the difference is small enough to be of no practical importance for industrial hygiene air samples, and either method may be used for Mn measurement. 
The sequential extraction method provides valuable information about the oxidation state of Mn in samples and allows for comparison to results from previous work and from total Mn dissolution methods. PMID:26345630

  12. Spatial Patterns and Sequential Sampling Plans for Predators of Aphis glycines (Hemiptera: Aphididae) in Minnesota Soybean.

    PubMed

    Tran, Anh K; Koch, Robert L

    2017-06-01

    The soybean aphid, Aphis glycines Matsumura, is an economically important soybean pest. Many studies have demonstrated that predatory insects are important in suppressing A. glycines population growth. However, to improve the utilization of predators in A. glycines management, sampling plans need to be developed and validated for predators. Aphid predators were sampled in soybean fields near Rosemount, Minnesota, from 2006-2007 and 2013-2015 with sample sizes of 20-80 plants. Sampling plans were developed for Orius insidiosus (Say), Harmonia axyridis (Pallas), and all aphidophagous Coccinellidae species combined. Taylor's power law parameters from the regression of log variance versus log mean suggested aggregated spatial patterns for immature and adult stages combined for O. insidiosus, H. axyridis, and Coccinellidae in soybean fields. Using the parameters from Taylor's power law and Green's method, sequential fixed-precision sampling plans were developed to estimate the density for each predator taxon at desired precision levels of 0.10 and 0.25. To achieve a desired precision of 0.10 and 0.25, the average sample number (ASN) ranged from 398-713 and 64-108 soybean plants, respectively, for all species. Resulting ASNs were relatively large and assumed impractical for most purposes; therefore, the desired precision levels were adjusted to determine the level of precision associated with a more practical ASN. Final analysis indicated an ASN of 38 soybean plants provided precision of 0.32-0.40 for the predators. Development of sampling plans should provide guidance for improved estimation of predator densities for A. glycines pest management programs and for research purposes. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  13. Group Sequential Testing of the Predictive Accuracy of a Continuous Biomarker with Unknown Prevalence

    PubMed Central

    Koopmeiners, Joseph S.; Feng, Ziding

    2015-01-01

    Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence can not be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curve. The small sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180

  14. Sequential search leads to faster, more efficient fragment-based de novo protein structure prediction.

    PubMed

    de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M

    2018-04-01

    Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.

  15. Sequential enzymatic derivatization coupled with online microdialysis sampling for simultaneous profiling of mouse tumor extracellular hydrogen peroxide, lactate, and glucose.

    PubMed

    Su, Cheng-Kuan; Tseng, Po-Jen; Chiu, Hsien-Ting; Del Vall, Andrea; Huang, Yu-Fen; Sun, Yuh-Chang

    2017-03-01

    Probing tumor extracellular metabolites is a vitally important issue in current cancer biology. In this study an analytical system was constructed for the in vivo monitoring of mouse tumor extracellular hydrogen peroxide (H2O2), lactate, and glucose by means of microdialysis (MD) sampling and fluorescence determination, in conjunction with a sequential enzymatic derivatization scheme (involving a loading sequence of fluorogenic reagent/horseradish peroxidase, microdialysate, lactate oxidase, pyruvate, and glucose oxidase) for step-by-step determination of sampled H2O2, lactate, and glucose in mouse tumor microdialysate. After optimization of the overall experimental parameters, the system's detection limit reached as low as 0.002 mM for H2O2, 0.058 mM for lactate, and 0.055 mM for glucose, based on 3 μL of microdialysate, suggesting great potential for determining tumor extracellular concentrations of lactate and glucose. Spike analyses of offline-collected mouse tumor microdialysate and monitoring of the basal concentrations of mouse tumor extracellular H2O2, lactate, and glucose, as well as those after imparting metabolic disturbance through intra-tumor administration of a glucose solution through a prior-implanted cannula, were conducted to demonstrate the system's applicability. Our results indicate that hyphenation of an MD sampling device with an optimized sequential enzymatic derivatization scheme and a fluorescence spectrometer can be used successfully for multi-analyte monitoring of tumor extracellular metabolites in living animals. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. An apparatus for sequentially combining microvolumes of reagents by infrasonic mixing.

    PubMed

    Camien, M N; Warner, R C

    1984-05-01

    A method employing high-speed infrasonic mixing for obtaining timed samples for following the progress of a moderately rapid chemical reaction is described. Drops of 10 to 50 microliter each of two reagents are mixed to initiate the reaction, followed, after a measured time interval, by mixing with a drop of a third reagent to quench the reaction. The method was developed for measuring the rate of denaturation of covalently closed, circular DNA in NaOH at several temperatures. For this purpose the timed samples were analyzed by analytical ultracentrifugation. The apparatus was tested by determination of the rate of hydrolysis of 2,4-dinitrophenyl acetate in an alkaline buffer. The important characteristics of the method are (i) it requires very small volumes of sample and reagents; (ii) the components of the reaction mixture are pre-equilibrated and mixed with no transfer outside the prescribed constant temperature environment; (iii) the mixing is very rapid; and (iv) satisfactorily precise measurements of relatively short time intervals (approximately 2 sec minimum) between sequential mixings of the components are readily obtainable.

  17. Proposed hardware architectures of particle filter for object tracking

    NASA Astrophysics Data System (ADS)

    Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED

    2012-12-01

    In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for Sequential Importance Resampling Filter (SIRF) implementation. The first architecture is a two-step sequential PF machine, in which particle sampling, weight, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function. This decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture. This second architecture targets a balance between hardware resources and the speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures are captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource reduction and speed-up advantages of our architectures.
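
    The three SIRF steps that these architectures partition in hardware (sampling, weighting, resampling) can be sketched in software. A minimal 1-D sketch using the classical Gaussian likelihood rather than the paper's piecewise linear approximation (the model and parameters are illustrative):

```python
import math
import random

def sirf_step(particles, weights, y, q_std, r_std, rng):
    """One cycle of the Sequential Importance Resampling Filter for a 1-D
    random walk observed in Gaussian noise: propagate each particle,
    reweight by the observation likelihood, then resample."""
    # 1) Sampling: propagate through the motion model x_t = x_{t-1} + noise
    particles = [x + rng.gauss(0.0, q_std) for x in particles]
    # 2) Weighting: Gaussian likelihood of the observation y
    weights = [w * math.exp(-0.5 * ((y - x) / r_std) ** 2)
               for x, w in zip(particles, weights)]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3) Systematic resampling back to uniform weights
    n = len(particles)
    u = rng.random() / n
    resampled, cum, j = [], weights[0], 0
    for i in range(n):
        pos = u + i / n
        while pos > cum and j < n - 1:
            j += 1
            cum += weights[j]
        resampled.append(particles[j])
    return resampled, [1.0 / n] * n

# Track a slowly drifting state from noisy observations
rng = random.Random(3)
n = 500
true_x = 0.0
particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
weights = [1.0 / n] * n
for _ in range(50):
    true_x += rng.gauss(0.0, 0.1)
    y = true_x + rng.gauss(0.0, 0.5)
    particles, weights = sirf_step(particles, weights, y, 0.1, 0.5, rng)
estimate = sum(particles) / n
assert abs(estimate - true_x) < 0.8
```

    The weighting step is the natural place for the paper's piecewise linear substitution, since the exponential is the costly operation in hardware.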

  18. The Effect of Sequential Dependence on the Sampling Distributions of KR-20, KR-21, and Split-Halves Reliabilities.

    ERIC Educational Resources Information Center

    Sullins, Walter L.

    Five-hundred dichotomously scored response patterns were generated with sequentially independent (SI) items and 500 with dependent (SD) items for each of thirty-six combinations of sampling parameters (i.e., three test lengths, three sample sizes, and four item difficulty distributions). KR-20, KR-21, and Split-Half (S-H) reliabilities were…

  19. Implementing reduced-risk integrated pest management in fresh-market cabbage: influence of sampling parameters, and validation of binomial sequential sampling plans for the cabbage looper (Lepidoptera: Noctuidae).

    PubMed

    Burkness, Eric C; Hutchison, W D

    2009-10-01

    Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
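
    The upper and lower decision boundaries above follow from Wald's sequential probability ratio test with the two hypothesised infestation proportions p0 = 0.05 and p1 = 0.15. A minimal sketch of the boundary computation (it omits the tally threshold and the resampling validation used in the study):

```python
import math

def sprt_boundaries(n, p0, p1, alpha, beta):
    """Wald's SPRT decision lines on the cumulative count of infested
    plants after n samples, testing H0: p = p0 vs H1: p = p1.  Sampling
    continues while the count stays strictly between the two lines."""
    k = math.log(p1 * (1.0 - p0) / (p0 * (1.0 - p1)))
    slope = math.log((1.0 - p0) / (1.0 - p1)) / k
    h_accept = math.log(beta / (1.0 - alpha)) / k   # accept-H0 intercept
    h_reject = math.log((1.0 - beta) / alpha) / k   # reject-H0 intercept
    return h_accept + slope * n, h_reject + slope * n

# Parameters from the abstract: boundaries 0.05 / 0.15, alpha = beta = 0.1
lo, hi = sprt_boundaries(40, 0.05, 0.15, 0.10, 0.10)
assert lo < hi
# The common slope of the two lines lies between the two proportions
slope = sprt_boundaries(41, 0.05, 0.15, 0.10, 0.10)[0] - lo
assert 0.05 < slope < 0.15
```

    Widening the gap between p0 and p1, or relaxing alpha and beta, pulls the two lines together and shortens the expected sampling run.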

  20. On-line sequential injection-capillary electrophoresis for near-real-time monitoring of extracellular lactate in cell culture flasks.

    PubMed

    Alhusban, Ala A; Gaudry, Adam J; Breadmore, Michael C; Gueven, Nuri; Guijt, Rosanne M

    2014-01-03

    Cell culture has replaced many in vivo studies because of ethical and regulatory measures as well as the possibility of increased throughput. Analytical assays to determine (bio)chemical changes are often based on end-point measurements rather than on a series of sequential determinations. The purpose of this work is to develop an analytical system for monitoring cell culture based on sequential injection-capillary electrophoresis (SI-CE) with capacitively coupled contactless conductivity detection (C(4)D). The system was applied for monitoring lactate production, an important metabolic indicator, during mammalian cell culture. Using a background electrolyte consisting of 25mM tris(hydroxymethyl)aminomethane, 35mM cyclohexyl-2-aminoethanesulfonic acid with 0.02% poly(ethyleneimine) (PEI) at pH 8.65 and a multilayer polymer coated capillary, lactate could be resolved from other compounds present in media with relative standard deviations 0.07% for intraday electrophoretic mobility and an analysis time of less than 10min. Using the human embryonic kidney cell line HEK293, lactate concentrations in the cell culture medium were measured every 20min over 3 days, requiring only 8.73μL of sample per run. Combining simplicity, portability, automation, high sample throughput, low limits of detection, low sample consumption and the ability to up- and outscale, this new methodology represents a promising technique for near real-time monitoring of chemical changes in diverse cell culture applications. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Increasing efficiency of preclinical research by group sequential designs

    PubMed Central

    Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich

    2017-01-01

    Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulation of data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to the saving of resources of up to 30% compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness in this research domain. PMID:28282371
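
    The savings mechanism can be sketched with a deliberately simplified two-look design: analyse once at the halfway point, stop for efficacy if the interim z-statistic is large, otherwise finish. The critical values and model below are illustrative (known unit variance, one-sided Pocock-style constants), not the authors' simulation code:

```python
import random
import statistics

def two_stage_trial(d, n_per_group, z_interim, z_final, rng):
    """One group-sequential experiment with a single interim look: analyse
    after half the units per group, stop early for efficacy if the interim
    z-statistic exceeds z_interim, otherwise finish and test at z_final.
    Returns (rejected_h0, total_units_used).  Unit variance is known."""
    half = n_per_group // 2
    a = [rng.gauss(d, 1.0) for _ in range(n_per_group)]    # treatment
    b = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]  # control

    def z(k):
        diff = statistics.fmean(a[:k]) - statistics.fmean(b[:k])
        return diff / (2.0 / k) ** 0.5

    if z(half) > z_interim:
        return True, 2 * half
    return z(n_per_group) > z_final, 2 * n_per_group

# Illustrative one-sided Pocock-style constants for one interim look
rng = random.Random(11)
runs = [two_stage_trial(1.0, 18, 2.18, 2.18, rng) for _ in range(2000)]
avg_n = statistics.fmean(n for _, n in runs)
power = statistics.fmean(1.0 if rej else 0.0 for rej, _ in runs)
assert avg_n < 36     # early stopping saves experimental units on average
assert power > 0.7
```

    With d = 1 many experiments cross the interim boundary, so the long-run average consumption falls well below the planned 36 units per experiment, mirroring the savings the authors report.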

  2. Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.

    PubMed

    Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric

    2017-02-01

    Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min, in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by an adrenal-to-peripheral cortisol ratio ≥2, and lateralization was defined as an adrenal aldosterone-to-cortisol ratio ≥2 times that of the contralateral side. Sequential AVS was simulated using right sampling at -5 min (t = -5) and left sampling at 0 min (t = 0). There was no significant difference in mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides, respectively) or in mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between the 2 simultaneous AVS was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential AVS and simultaneous AVS at -5 min and at 0 min, respectively. Kappa for lateralization between the 2 simultaneous AVS was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential AVS and simultaneous AVS at -5 min and at 0 min, respectively. Concordance between simultaneous and sequential AVS was not different from that between 2 repeated simultaneous AVS in the same patient. Therefore, better diagnostic performance is not a good argument for selecting the AVS method. © 2017 European Society of Endocrinology.

  3. Sequential time interleaved random equivalent sampling for repetitive signal.

    PubMed

    Zhao, Yijiu; Liu, Jingjing

    2016-12-01

    Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction to improve efficiency, as in random equivalent sampling (RES). However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling times. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of the proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
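
    A block measurement matrix built from the Whittaker-Shannon interpolation formula relates Nyquist-grid samples to the non-uniformly acquired RES samples. A minimal sketch of its construction (function names and parameters are illustrative, not the paper's implementation):

```python
import math

def sinc(x: float) -> float:
    """Normalized sinc: 1 at 0, zero at the other integers."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def ws_measurement_matrix(sample_times, n_grid, T):
    """Block measurement matrix relating Nyquist-grid samples x[j] (taken
    at times j*T) to non-uniform RES samples y[i] = x(t_i), via the
    Whittaker-Shannon interpolation formula
        x(t) = sum_j x[j] * sinc((t - j*T) / T)."""
    return [[sinc((t - j * T) / T) for j in range(n_grid)]
            for t in sample_times]

# A sampling sequence that happens to land on the grid reduces each row of
# the block matrix to (numerically) a unit vector.
Phi = ws_measurement_matrix([0.0, 1.0, 2.0], 4, 1.0)
assert Phi[0][0] == 1.0 and abs(Phi[0][1]) < 1e-12
assert Phi[1][1] == 1.0 and abs(Phi[1][0]) < 1e-12
```

    Stacking one such block per acquisition run yields the equivalent measurement matrix the abstract describes, to which a standard CS solver can then be applied.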

  4. Group-sequential three-arm noninferiority clinical trial designs

    PubMed Central

    Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko

    2016-01-01

    We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed-margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481

  5. Particle size and chemical control of heavy metals in bed sediment from the Rouge River, southeast Michigan

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, K.S.; Cauvet, D.; Lybeer, M.

    1999-04-01

    Anthropogenic activities related to 100 years of industrialization in the metropolitan Detroit area have significantly enriched the bed sediment of the lower reaches of the Rouge River in Cr, Cu, Fe, Ni, Pb, and Zn. These enriched elements, which may represent a threat to biota, are predominantly present in sequentially extracted reducible and oxidizable chemical phases with small contributions from residual phases. In size-fractionated samples trace metal concentrations generally increase with decreasing particle size, with the greatest contribution to this increase from the oxidizable phase. Experimental results obtained on replicate samples of river sediment demonstrate that the accuracy of the sequential extraction procedure, evaluated by comparing the sums of the three individual fractions, is generally better than 10%. Oxidizable and reducible phases therefore constitute important sources of potentially available heavy metals that need to be explicitly considered when evaluating sediment and water quality impacts on biota.

  6. The Importance of Practice in the Development of Statistics.

    DTIC Science & Technology

    1983-01-01

    NRC Technical Summary Report #2471, "The Importance of Practice in the Development of Statistics." Topics covered include component analysis, bioassay, limits for a ratio, quality control, sampling inspection, non-parametric tests, transformation theory, ARIMA time series models, sequential tests, cumulative sum charts, data analysis plotting techniques, and a resolution of the Bayes-frequentist controversy.

  7. Assessment of in vitro cyto/genotoxicity of sequentially treated electroplating effluent on the human hepatocarcinoma HuH-7 cell line.

    PubMed

    Naik, Umesh Chandra; Das, Mihir Tanay; Sauran, Swati; Thakur, Indu Shekhar

    2014-03-01

    The present study compares in vitro toxicity of electroplating effluent after the batch treatment process with that obtained after the sequential treatment process. Activated charcoal prepared from sugarcane bagasse through chemical carbonization, and tolerant indigenous bacteria, Bacillus sp. strain IST105, were used individually and sequentially for the treatment of electroplating effluent. The sequential treatment involving activated charcoal followed by bacterial treatment removed 99% of Cr(VI) compared with the batch processes, which removed 40% (charcoal) and 75% (bacteria), respectively. Post-treatment in vitro cyto/genotoxicity was evaluated by the MTT test and the comet assay in human HuH-7 hepatocarcinoma cells. The sequentially treated sample showed an increase in LC50 value with a 6-fold decrease in comet-assay DNA migration compared with that of untreated samples. A significant decrease in DNA migration and an increase in LC50 value of treated effluent proved the higher effectiveness of the sequential treatment process over the individual batch processes. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. An on-line potentiometric sequential injection titration process analyser for the determination of acetic acid.

    PubMed

    van Staden, J F; Mashamba, Mulalo G; Stefan, Raluca I

    2002-09-01

    An on-line potentiometric sequential injection titration process analyser for the determination of acetic acid is proposed. A solution of 0.1 mol L(-1) sodium chloride is used as carrier. Titration is achieved by aspirating acetic acid samples between two strong base-zone volumes into a holding coil and by channelling the stack of well-defined zones with flow reversal through a reaction coil to a potentiometric sensor where the peak widths were measured. A linear relationship between peak width and logarithm of the acid concentration was obtained in the range 1-9 g/100 mL. Vinegar samples were analysed without any sample pre-treatment. The method has a relative standard deviation of 0.4% with a sample frequency of 28 samples per hour. The results revealed good agreement between the proposed sequential injection and an automated batch titration method.

  9. Sequential extraction of metals from mixed and digested sludge from aerobic WWTPs sited in the south of Spain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alonso, E.; Aparicio, I.; Santos, J.L.

    2009-01-15

    The content of heavy metals is the major limitation to the application of sewage sludge in soil. However, assessment of the pollution by total metal determination does not reveal the true environmental impact. It is necessary to apply sequential extraction techniques to obtain suitable information about their bioavailability or toxicity. In this paper, sequential extraction of metals from sludge before and after aerobic digestion was applied to sludge from five WWTPs in southern Spain to obtain information about the influence of the digestion treatment in the concentration of the metals. The percentage of each metal as residual, oxidizable, reducible and exchangeable form was calculated. For this purpose, sludge samples were collected from two different points of the plants, namely, sludge from the mixture (primary and secondary sludge) tank (mixed sludge, MS) and the digested-dewatered sludge (final sludge, FS). Heavy metals, Al, Cd, Co, Cr, Cu, Fe, Hg, Mn, Mo, Ni, Pb, Ti and Zn, were extracted following the sequential extraction scheme proposed by the Standards, Measurements and Testing Programme of the European Commission and determined by inductively-coupled plasma atomic emission spectrometry. The total concentration of heavy metals in the measured sludge samples did not exceed the limits set out by European legislation and were mainly associated with the two less-available fractions (27-28% as oxidizable metal and 44-50% as residual metal). However, metals as Co (64% in MS and 52% in FS samples), Mn (82% in MS and 79% in FS), Ni (32% in MS and 26% in FS) and Zn (79% in MS and 62% in FS) were present at important percentages as available forms. In addition, results showed a clear increase of the concentration of metals after sludge treatment in the proportion of two less-available fractions (oxidizable and residual metal).

  10. Sequential extraction of metals from mixed and digested sludge from aerobic WWTPs sited in the south of Spain.

    PubMed

    Alonso, E; Aparicio, I; Santos, J L; Villar, P; Santos, A

    2009-01-01

    The content of heavy metals is the major limitation to the application of sewage sludge in soil. However, assessment of the pollution by total metal determination does not reveal the true environmental impact. It is necessary to apply sequential extraction techniques to obtain suitable information about their bioavailability or toxicity. In this paper, sequential extraction of metals from sludge before and after aerobic digestion was applied to sludge from five WWTPs in southern Spain to obtain information about the influence of the digestion treatment in the concentration of the metals. The percentage of each metal as residual, oxidizable, reducible and exchangeable form was calculated. For this purpose, sludge samples were collected from two different points of the plants, namely, sludge from the mixture (primary and secondary sludge) tank (mixed sludge, MS) and the digested-dewatered sludge (final sludge, FS). Heavy metals, Al, Cd, Co, Cr, Cu, Fe, Hg, Mn, Mo, Ni, Pb, Ti and Zn, were extracted following the sequential extraction scheme proposed by the Standards, Measurements and Testing Programme of the European Commission and determined by inductively-coupled plasma atomic emission spectrometry. The total concentration of heavy metals in the measured sludge samples did not exceed the limits set out by European legislation and were mainly associated with the two less-available fractions (27-28% as oxidizable metal and 44-50% as residual metal). However, metals as Co (64% in MS and 52% in FS samples), Mn (82% in MS and 79% in FS), Ni (32% in MS and 26% in FS) and Zn (79% in MS and 62% in FS) were present at important percentages as available forms. In addition, results showed a clear increase of the concentration of metals after sludge treatment in the proportion of two less-available fractions (oxidizable and residual metal).
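The fraction percentages reported in records 9 and 10 are simply each extraction step's share of the summed steps; an illustrative computation with hypothetical concentrations (mg/kg), not the paper's data:

```python
# Hypothetical sequential-extraction results for one metal (mg/kg),
# following the four-step scheme named in the abstract:
fractions = {"exchangeable": 120.0, "reducible": 260.0,
             "oxidizable": 180.0, "residual": 240.0}

total = sum(fractions.values())
shares = {k: 100.0 * v / total for k, v in fractions.items()}
# "Available" forms are commonly taken as the non-residual fractions:
available = 100.0 - shares["residual"]
print(round(available, 1))  # 70.0
```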

  11. The sequential pathway between trauma-related symptom severity and cognitive-based smoking processes through perceived stress and negative affect reduction expectancies among trauma exposed smokers.

    PubMed

    Garey, Lorra; Cheema, Mina K; Otal, Tanveer K; Schmidt, Norman B; Neighbors, Clayton; Zvolensky, Michael J

    2016-10-01

    Smoking rates are markedly higher among trauma-exposed individuals relative to non-trauma-exposed individuals. Extant work suggests that both perceived stress and negative affect reduction smoking expectancies are independent mechanisms that link trauma-related symptoms and smoking. Yet, no work has examined perceived stress and negative affect reduction smoking expectancies as potential explanatory variables for the relation between trauma-related symptom severity and smoking in a sequential pathway model. The present study utilized a sample of treatment-seeking, trauma-exposed smokers (n = 363; 49.0% female) to examine perceived stress and negative affect reduction expectancies for smoking as potential sequential explanatory variables linking trauma-related symptom severity and nicotine dependence, perceived barriers to smoking cessation, and severity of withdrawal-related problems and symptoms during past quit attempts. As hypothesized, perceived stress and negative affect reduction expectancies had a significant sequential indirect effect on the relations between trauma-related symptom severity and the criterion variables. Findings further elucidate the complex pathways through which trauma-related symptoms contribute to smoking behavior and cognitions, and highlight the importance of addressing perceived stress and negative affect reduction expectancies in smoking cessation programs among trauma-exposed individuals. (Am J Addict 2016;25:565-572). © 2016 American Academy of Addiction Psychiatry.

  12. RACE/A: An Architectural Account of the Interactions between Learning, Task Control, and Retrieval Dynamics

    ERIC Educational Resources Information Center

    van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels

    2012-01-01

    This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use…
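A minimal race-type model from the general sequential sampling family that RACE/A builds on (hypothetical parameters; not the RACE/A equations themselves): several accumulators gather noisy evidence until one reaches a threshold, and the winner determines the response while the step count stands in for response time.

```python
import random

def race_trial(drifts, threshold=30.0, noise=1.0, dt=1.0, rng=None):
    """Return (winner_index, n_steps) for a simple evidence race."""
    rng = rng or random.Random(0)
    acc = [0.0] * len(drifts)
    steps = 0
    while max(acc) < threshold:
        for i, d in enumerate(drifts):
            acc[i] += d * dt + rng.gauss(0.0, noise)  # noisy evidence step
        steps += 1
    return max(range(len(acc)), key=acc.__getitem__), steps

winner, rt = race_trial([1.0, 0.2])  # accumulator 0 has stronger evidence
```

Over repeated trials the stronger accumulator wins almost always, and weaker evidence produces longer races, which is the qualitative behavior sequential sampling models use to explain choice and latency data.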

  13. Binding intensity and metal partitioning in soils affected by mining and smelting activities in Minas Gerais, Brazil.

    PubMed

    Lopes, G; Costa, E T S; Penido, E S; Sparks, D L; Guilherme, L R G

    2015-09-01

    Mining and smelting activities are potential sources of heavy metal contamination, which pose a threat to human health and ecological systems. This study investigated single and sequential extractions of Zn, Pb, and Cd in Brazilian soils affected by mining and smelting activities. Soils from a Zn mining area (soils A, B, C, D, E, and the control soil) and a tailing from a smelting area were collected in Minas Gerais state, Brazil. The samples were subjected to single (using Mehlich I solution) and sequential extractions. The risk assessment code (RAC), the redistribution index (Uts), and the reduced partition index (IR) have been applied to the sequential extraction data. Zinc and Cd, in soil samples from the mining area, were found mainly associated with carbonate forms. This same pattern did not occur for Pb. Moreover, the Fe-Mn oxides and residual fractions had important contributions for Zn and Pb in those soils. For the tailing, more than 70% of Zn and Cd were released in the exchangeable fraction, showing a much higher mobility and availability of these metals at this site, which was also supported by results of RAC and IR. These differences in terms of mobility might be due to different chemical forms of the metals in the two sites, which are attributable to natural occurrence as well as ore processing.

  14. Testing sequential extraction methods for the analysis of multiple stable isotope systems from a bone sample

    NASA Astrophysics Data System (ADS)

    Sahlstedt, Elina; Arppe, Laura

    2017-04-01

    Stable isotope compositions of bones, analysed either from the mineral phase (hydroxyapatite) or from the organic phase (mainly collagen), carry important climatological and ecological information and are therefore widely used in paleontological and archaeological research. Both phases, hydroxyapatite and collagen, have more or less well-established separation and analytical techniques for stable isotope analysis. Recent developments in IRMS and wet chemical extraction methods have made it possible to analyse very small bone fractions (500 μg or less starting material) for the oxygen isotope composition of phosphate (PO4). However, the uniqueness and (pre-)historical value of each archaeological and paleontological find leaves precious little material available for stable isotope analyses, encouraging further development of microanalytical methods. Here we present the first results in developing extraction methods that combine collagen C- and N-isotope analyses with phosphate O-isotope analyses from a single bone sample fraction. We tested sequential extraction starting with dilute acid demineralization and collection of both the collagen and phosphate fractions, followed by a further H2O2 purification step for the phosphate fraction. First results show that bone sample separates as small as 2 mg may be analysed for their δ15N, δ13C and δ18OPO4 values. The method may be incorporated in detailed investigations of sequentially developing skeletal material such as teeth, potentially allowing investigation of interannual variability in climatological/environmental signals or of the early life history of an individual.

  15. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    PubMed

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
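A common estimation idea for such comparisons can be sketched as follows (hypothetical data and a generic inverse-probability-weighted least squares fit; this is not the paper's estimator): under 1:1 randomizations, clusters randomized once receive weight 2 and clusters re-randomized after non-response receive weight 4, and a weighted regression on a baseline covariate estimates the mean outcome under one embedded regimen.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 120                                      # clusters consistent with one DTR
rerand = rng.random(n) < 0.4                 # non-responders, re-randomized
w = np.where(rerand, 4.0, 2.0)               # inverse-probability weights
x = rng.normal(size=n)                       # baseline cluster covariate
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.5, n)  # cluster-mean outcome (simulated)

# Weighted least squares: solve (X' W X) beta = X' W y
X = np.column_stack([np.ones(n), x])
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

Because the weights here depend only on the randomization path, the weighted fit recovers the simulated intercept and slope; the paper's contribution includes the covariate adjustment and the matching sample size formulas, which this sketch does not reproduce.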

  16. Sequential ensemble-based optimal design for parameter estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Man, Jun; Zhang, Jiangjiang; Li, Weixuan

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
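The EnKF analysis step that SEOD is coupled with can be written compactly; a minimal stochastic EnKF update for a scalar observation in generic notation (a sketch of the standard filter, not the SEOD algorithm itself):

```python
import numpy as np

def enkf_update(ens, y_obs, h, r_std, rng):
    """One stochastic EnKF analysis step.
    ens: (n_param, n_ens) parameter ensemble; h: forward map to a scalar obs."""
    n_ens = ens.shape[1]
    hx = np.array([h(ens[:, j]) for j in range(n_ens)])  # predicted observations
    a = ens - ens.mean(axis=1, keepdims=True)            # parameter anomalies
    d = hx - hx.mean()                                   # observation anomalies
    cov_xy = a @ d / (n_ens - 1)
    var_y = d @ d / (n_ens - 1) + r_std**2
    gain = cov_xy / var_y                                # Kalman gain
    perturbed = y_obs + rng.normal(0.0, r_std, n_ens)    # perturbed observations
    return ens + gain[:, None] * (perturbed - hx)[None, :]

rng = np.random.default_rng(3)
prior = rng.normal(0.0, 2.0, (1, 500))  # wide prior on one parameter
post = enkf_update(prior, y_obs=1.0, h=lambda x: x[0], r_std=0.5, rng=rng)
```

For this linear, scalar case the update reproduces the Kalman posterior: the ensemble spread shrinks and the mean moves toward the observation. SEOD's contribution is choosing *which* measurement to assimilate next, scored by metrics such as SD, DFS or RE computed from these same ensemble statistics.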

  17. Cost-benefit analysis of sequential warning lights in nighttime work zone tapers.

    DOT National Transportation Integrated Search

    2011-06-01

    Improving safety at nighttime work zones is important because of the extra visibility concerns. The deployment of sequential lights is an innovative method for improving driver recognition of lane closures and work zone tapers. Sequential lights are ...

  18. Lexical Diversity and Omission Errors as Predictors of Language Ability in the Narratives of Sequential Spanish-English Bilinguals: A Cross-Language Comparison

    ERIC Educational Resources Information Center

    Jacobson, Peggy F.; Walden, Patrick R.

    2013-01-01

    Purpose: This study explored the utility of language sample analysis for evaluating language ability in school-age Spanish-English sequential bilingual children. Specifically, the relative potential of lexical diversity and word/morpheme omission as predictors of typical or atypical language status was evaluated. Method: Narrative samples were…

  19. Type I and Type II Error Rates and Overall Accuracy of the Revised Parallel Analysis Method for Determining the Number of Factors

    ERIC Educational Resources Information Center

    Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo

    2015-01-01

    Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the kth eigenvalue for sample data to the kth eigenvalue for generated data sets, conditioned on k-…
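The traditional comparison (T-PA) can be sketched in a few lines; a simplified illustration with a hypothetical helper name (the revised variant, R-PA, would instead generate the comparison data conditioned on the preceding k−1 factors, which is not implemented here):

```python
import numpy as np

def parallel_analysis(X, n_sims=200, quantile=95, seed=0):
    """Horn-style T-PA sketch: retain factor k while the kth sample eigenvalue
    exceeds the chosen percentile of kth eigenvalues from random normal data
    of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    ev = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]  # descending
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        R = rng.standard_normal((n, p))
        sims[i] = np.linalg.eigvalsh(np.corrcoef(R, rowvar=False))[::-1]
    thresh = np.percentile(sims, quantile, axis=0)
    k = 0
    while k < p and ev[k] > thresh[k]:
        k += 1
    return k
```

On data with one strong common factor, the first sample eigenvalue far exceeds the random-data threshold while the rest fall below it, so the procedure returns 1.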

  20. 40 CFR 53.34 - Test procedure for methods for PM10 and Class I methods for PM2.5.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... simultaneous PM10 or PM2.5 measurements as necessary (see table C-4 of this subpart), each set consisting of...) in appendix A to this subpart). (f) Sequential samplers. For sequential samplers, the sampler shall be configured for the maximum number of sequential samples and shall be set for automatic collection...

  1. Reduction of display artifacts by random sampling

    NASA Technical Reports Server (NTRS)

    Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.

    1983-01-01

    The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.

  2. Repeated significance tests of linear combinations of sensitivity and specificity of a diagnostic biomarker

    PubMed Central

    Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi

    2016-01-01

    A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
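Youden's index referenced above is the linear combination J = sensitivity + specificity − 1; a toy computation from a hypothetical 2×2 table (illustrative counts, not the paper's data):

```python
# Hypothetical confusion counts for a binary diagnostic marker:
tp, fn, tn, fp = 45, 5, 80, 20

sensitivity = tp / (tp + fn)   # 0.90
specificity = tn / (tn + fp)   # 0.80
youden_j = sensitivity + specificity - 1
print(round(youden_j, 3))  # 0.7
```

In the sequential design described above, a statistic of this form would be compared against the minimal level of acceptance at each stage, stopping early when the evidence is sufficient.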

  3. EBUS-Guided Cautery-Assisted Transbronchial Forceps Biopsies: Safety and Sensitivity Relative to Transbronchial Needle Aspiration

    PubMed Central

    Bramley, Kyle; Pisani, Margaret A.; Murphy, Terrence E.; Araujo, Katy; Homer, Robert; Puchalski, Jonathan

    2016-01-01

    Background EBUS-guided transbronchial needle aspiration (TBNA) is important in the evaluation of thoracic lymphadenopathy. Reliably providing excellent diagnostic yield for malignancy, its diagnosis of sarcoidosis is inconsistent. Furthermore, when larger “core” biopsy samples of malignant tissue are required, TBNA may not suffice. The primary objective of this study was to determine if the sequential use of TBNA and a novel technique called cautery-assisted transbronchial forceps biopsies (ca-TBFB) was safe. Secondary outcomes included sensitivity and successful acquisition of tissue. Methods Fifty unselected patients undergoing convex probe EBUS were prospectively enrolled. Under EBUS guidance, all lymph nodes ≥ 1 cm were sequentially biopsied using TBNA and ca-TBFB. Safety and sensitivity were assessed at the nodal level for 111 nodes. Results of each technique were also reported on a per-patient basis. Results There were no significant adverse events. In nodes determined to be malignant, TBNA provided higher sensitivity (100%) than ca-TBFB (78%). However, among nodes with granulomatous inflammation, ca-TBFB exhibited higher sensitivity (90%) than TBNA (33%). For analysis based on patients rather than nodes, 6 of the 31 patients with malignancy would have been missed or understaged if the diagnosis was based on samples obtained by ca-TBFB. On the other hand, 3 of 8 patients with sarcoidosis would have been missed if analysis was based only on TBNA samples. In some cases only ca-TBFB acquired sufficient tissue for the core samples needed in clinical trials of malignancy. Conclusions The sequential use of TBNA and ca-TBFB appears to be safe. The larger samples obtained from ca-TBFB increased its sensitivity to detect granulomatous disease and provided specimens for clinical trials of malignancy when needle biopsies were insufficient. For thoracic surgeons and advanced bronchoscopists, we advocate ca-TBFB as an alternative to TBNA in select clinical scenarios. 
PMID:26912301

  4. Endobronchial Ultrasound-Guided Cautery-Assisted Transbronchial Forceps Biopsies: Safety and Sensitivity Relative to Transbronchial Needle Aspiration.

    PubMed

    Bramley, Kyle; Pisani, Margaret A; Murphy, Terrence E; Araujo, Katy L; Homer, Robert J; Puchalski, Jonathan T

    2016-05-01

    Endobronchial ultrasound (EBUS)-guided transbronchial needle aspiration (TBNA) is important in the evaluation of thoracic lymphadenopathy. Reliably providing excellent diagnostic yield for malignancy, its diagnosis of sarcoidosis is inconsistent. Furthermore, TBNA may not suffice when larger "core biopsy" samples of malignant tissue are required. The primary objective of this study was to determine if the sequential use of TBNA and a novel technique called cautery-assisted transbronchial forceps biopsy (ca-TBFB) was safe. Secondary outcomes included sensitivity and successful acquisition of tissue. The study prospectively enrolled 50 unselected patients undergoing convex-probe EBUS. All lymph nodes exceeding 1 cm were sequentially biopsied under EBUS guidance using TBNA and ca-TBFB. Safety and sensitivity were assessed at the nodal level for 111 nodes. Results of each technique were also reported for each patient. There were no significant adverse events. In nodes determined to be malignant, TBNA provided higher sensitivity (100%) than ca-TBFB (78%). However, among nodes with granulomatous inflammation, ca-TBFB exhibited higher sensitivity (90%) than TBNA (33%). On the one hand, for analysis based on patients rather than nodes, 6 of the 31 patients with malignancy would have been missed or understaged if the diagnosis were based on samples obtained by ca-TBFB. On the other hand, 3 of 8 patients with sarcoidosis would have been missed if analysis were based only on TBNA samples. In some patients, only ca-TBFB acquired sufficient tissue for the core samples needed in clinical trials of malignancy. The sequential use of TBNA and ca-TBFB appears to be safe. The larger samples obtained from ca-TBFB increased its sensitivity to detect granulomatous disease and provided adequate specimens for clinical trials of malignancy when specimens from needle biopsies were insufficient. 
For thoracic surgeons and advanced bronchoscopists, we advocate ca-TBFB as an alternative to TBNA in select clinical scenarios. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  5. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    PubMed Central

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling method. In this paper, a new sequential optimization sampling method is proposed. Metamodels are constructed repeatedly through the addition of new sampling points, namely the extrema of the current metamodel and the minima of a density function; repeating this procedure yields increasingly accurate metamodels. The validity and effectiveness of the proposed sampling method are examined on typical numerical examples. PMID:25133206
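A one-dimensional sketch of the idea (generic Gaussian RBF interpolation and a hypothetical target function; not the paper's algorithm or its density-function criterion): fit the metamodel, then add the grid point where the current model is most extreme as the next sample.

```python
import numpy as np

def rbf_fit(x, y, eps=1.0):
    """Gaussian RBF interpolation weights: solve Phi w = y."""
    Phi = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    return np.linalg.solve(Phi, y)

def rbf_eval(x_train, w, x, eps=1.0):
    Phi = np.exp(-(eps * (x[:, None] - x_train[None, :])) ** 2)
    return Phi @ w

f = lambda x: np.sin(3.0 * x)        # stand-in for an expensive simulation
x = np.linspace(0.0, 2.0, 5)         # initial design
grid = np.linspace(0.0, 2.0, 201)
for _ in range(3):                   # three sequential refinement steps
    w = rbf_fit(x, f(x))
    model = rbf_eval(x, w, grid)
    x_new = grid[np.argmax(np.abs(model))]  # extremum of current metamodel
    if np.min(np.abs(x - x_new)) > 0.05:    # skip near-duplicate points
        x = np.append(x, x_new)
```

The interpolant passes exactly through the sampled points, and each added extremum point concentrates samples where the metamodel predicts large responses, which is the flavor of the sequential scheme described in the abstract.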

  6. Determination of nitrite and nitrate in water samples by an automated hydrodynamic sequential injection method.

    PubMed

    Somnam, Sarawut; Jakmunee, Jaroon; Grudpan, Kate; Lenghor, Narong; Motomizu, Shoji

    2008-12-01

    An automated hydrodynamic sequential injection (HSI) system with spectrophotometric detection was developed. Thanks to the hydrodynamic injection principle, simple devices can be used for introducing reproducible microliter volumes of both sample and reagent into the flow channel to form stacked zones in a similar fashion to those in a sequential injection system. The zones were then pushed to the detector and a peak profile was recorded. The determination of nitrite and nitrate in water samples by employing the Griess reaction was chosen as a model. Calibration graphs with linearity in the range of 0.7-40 μM were obtained for both nitrite and nitrate. Detection limits were 0.3 μM for NO2(-) and 0.4 μM for NO3(-), with a sample throughput of 20 h(-1) for consecutive determination of both species. The developed system was successfully applied to the analysis of water samples, employing simple and cost-effective instrumentation and offering a higher degree of automation and low chemical consumption.

  7. Mapping of compositional properties of coal using isometric log-ratio transformation and sequential Gaussian simulation - A comparative study for spatial ultimate analyses data.

    PubMed

    Karacan, C Özgen; Olea, Ricardo A

    2018-03-01

    Chemical properties of coal largely determine coal handling, processing, beneficiation methods, and design of coal-fired power plants. Furthermore, these properties impact coal strength, coal blending during mining, as well as coal's gas content, which is important for mining safety. In order for these processes and quantitative predictions to be successful, safer, and economically feasible, it is important to determine and map chemical properties of coals accurately in order to infer these properties prior to mining. Ultimate analysis quantifies principal chemical elements in coal. These elements are C, H, N, S, O, and, depending on the basis, ash, and/or moisture. The basis for the data is determined by the condition of the sample at the time of analysis, with an "as-received" basis being the closest to sampling conditions and thus to the in-situ conditions of the coal. The parts determined or calculated as the result of ultimate analyses are compositions, reported in weight percent, and pose the challenges of statistical analyses of compositional data. The treatment of parts using proper compositional methods may be even more important in mapping them, as most mapping methods carry uncertainty due to partial sampling as well. In this work, we map the ultimate analyses parts of the Springfield coal from an Indiana section of the Illinois basin, USA, using sequential Gaussian simulation of isometric log-ratio transformed compositions. We compare the results with those of direct simulations of compositional parts. We also compare the implications of these approaches in calculating other properties using correlations to identify the differences and consequences. Although the study here is for coal, the methods described in the paper are applicable to any situation involving compositional data and its mapping.
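The isometric log-ratio (ilr) transform used above maps a D-part composition to D−1 unconstrained coordinates on which Gaussian simulation is legitimate. One standard pivot-coordinate form of the transform (a sketch; not necessarily the basis the authors chose):

```python
import math

def ilr(parts):
    """Pivot-coordinate ilr: z_i = sqrt(i/(i+1)) * ln(g(x_1..x_i) / x_{i+1}),
    where g is the geometric mean. Parts must be strictly positive."""
    logs = [math.log(p) for p in parts]
    z = []
    for i in range(1, len(parts)):
        gm_log = sum(logs[:i]) / i   # log geometric mean of first i parts
        z.append(math.sqrt(i / (i + 1)) * (gm_log - logs[i]))
    return z

# A hypothetical ultimate-analysis composition (weight fractions summing to 1);
# the part names are illustrative, not data from the Springfield coal:
comp = [0.70, 0.05, 0.01, 0.02, 0.10, 0.12]  # C, H, N, S, O, ash
coords = ilr(comp)  # 5 unconstrained coordinates for Gaussian simulation
```

Because ilr coordinates depend only on ratios of parts, they are invariant to rescaling the composition, which is why simulating on them (and back-transforming) respects the constant-sum constraint that direct simulation of parts violates.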

  8. Decision making and sequential sampling from memory

    PubMed Central

    Shadlen, Michael N.; Shohamy, Daphna

    2016-01-01

    Decisions take time, and as a rule more difficult decisions take more time. But this only raises the question of what consumes the time. For decisions informed by a sequence of samples of evidence, the answer is straightforward: more samples are available with more time. Indeed the speed and accuracy of such decisions are explained by the accumulation of evidence to a threshold or bound. However, the same framework seems to apply to decisions that are not obviously informed by sequences of evidence samples. Here we proffer the hypothesis that the sequential character of such tasks involves retrieval of evidence from memory. We explore this hypothesis by focusing on value-based decisions and argue that mnemonic processes can account for regularities in choice and decision time. We speculate on the neural mechanisms that link sampling of evidence from memory to circuits that represent the accumulated evidence bearing on a choice. We propose that memory processes may contribute to a wider class of decisions that conform to the regularities of choice-reaction time predicted by the sequential sampling framework. PMID:27253447
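The accumulation-to-bound framework described here can be sketched with a textbook drift-diffusion simulation (hypothetical parameters, unrelated to any specific dataset): weaker evidence, i.e. a lower drift rate, yields longer decision times.

```python
import math
import random

def ddm_trial(drift, bound=1.0, sigma=1.0, dt=0.001, rng=None):
    """Simulate one drift-diffusion trial; return (decision_time, choice)."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    while abs(x) < bound:
        # Euler step: deterministic drift plus scaled Gaussian noise
        x += drift * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return t, x > 0

easy = [ddm_trial(2.0, rng=random.Random(s))[0] for s in range(100)]
hard = [ddm_trial(0.2, rng=random.Random(s))[0] for s in range(100)]
```

Averaged over trials, the low-drift condition takes roughly twice as long to reach the bound, reproducing the speed-accuracy regularities the authors argue extend to memory-based and value-based decisions.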

  9. Mixing modes in a population-based interview survey: comparison of a sequential and a concurrent mixed-mode design for public health research.

    PubMed

    Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia

    2018-01-01

    Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617) two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design.
No differences in health indicators were found between the two designs. Modelling these results for higher response rates and larger net sample sizes indicated that the sequential design was more cost and time-effective. This study contributes to the research available on implementing mixed-mode designs as part of public health surveys. Our findings show that SAQ-Paper and SAQ-Web questionnaires can be combined effectively. Sequential mixed-mode designs with higher rates of online respondents may be of greater benefit to studies with larger net sample sizes than concurrent mixed-mode designs.

  10. Continuity of the sequential product of sequential quantum effect algebras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Qiang, E-mail: leiqiang@hit.edu.cn; Su, Xiaochao, E-mail: hitswh@163.com; Wu, Junde, E-mail: wjd@zju.edu.cn

    In order to study quantum measurement theory, the sequential product, defined by A∘B = A^(1/2)BA^(1/2) for any two quantum effects A, B, has been introduced. Physically motivated conditions require the sequential product to be continuous with respect to the strong operator topology. In this paper, we study the continuity of the sequential product A∘B = A^(1/2)BA^(1/2) with respect to other important topologies, such as the norm topology, weak operator topology, order topology, and interval topology.
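
    The sequential product itself is straightforward to compute numerically. A small numpy sketch (the two 2×2 effects below are illustrative) showing that A∘B = A^(1/2)BA^(1/2) is generally non-commutative:

```python
import numpy as np

def sqrtm_psd(a):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

def seq_product(a, b):
    """Sequential product A∘B = A^(1/2) B A^(1/2) of two effects."""
    ra = sqrtm_psd(a)
    return ra @ b @ ra

A = np.diag([0.5, 1.0])                  # an effect: Hermitian, spectrum in [0, 1]
B = np.array([[0.5, 0.5], [0.5, 0.5]])   # a rank-1 projection
AB, BA = seq_product(A, B), seq_product(B, A)
# A∘B != B∘A in general, and each result is again an effect.
```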

  11. Miniaturizing and automation of free acidity measurements for uranium (VI)-HNO3 solutions: Development of a new sequential injection analysis for a sustainable radio-analytical chemistry.

    PubMed

    Néri-Quiroz, José; Canto, Fabrice; Guillerme, Laurent; Couston, Laurent; Magnaldo, Alastair; Dugas, Vincent

    2016-10-01

    A miniaturized and automated approach for the determination of free acidity in solutions containing uranium (VI) is presented. The measurement technique is based on the concept of sequential injection analysis with on-line spectroscopic detection. The proposed methodology relies on the complexation and alkalimetric titration of nitric acid using a pH 5.6 sodium oxalate solution. The titration process is followed by UV/VIS detection at 650 nm thanks to the addition of Congo red as a universal pH indicator. The mixing sequence as well as method validity was investigated by numerical simulation. This new analytical design allows fast (2.3 min), reliable and accurate free acidity determination of low-volume samples (10 µL) containing a uranium/[H⁺] mole ratio of 1:3, with a relative standard deviation of <7.0% (n = 11). The linearity range of the free nitric acid measurement is excellent up to 2.77 mol L⁻¹, with a correlation coefficient (R²) of 0.995. The method is specific: the presence of actinide ions up to 0.54 mol L⁻¹ does not interfere with the determination of free nitric acid. In addition to automation, the developed sequential injection analysis method greatly improves on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousandfold, the nuclear waste per analysis fortyfold, and the analysis time eightfold. These analytical parameters are especially important in nuclear-related applications to improve laboratory safety, reduce personnel exposure to radioactive samples, and drastically reduce environmental impacts and analytical radioactive waste. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    PubMed Central

    Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.

    2014-01-01

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. 
Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm³ FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations. PMID:25186406
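
    A variable-density subsampling pattern of the kind compared above can be sketched as a random mask along one phase-encode axis: the k-space center is fully sampled and the sampling probability decays toward the edges, scaled to hit a target acceleration factor. All parameter values below are illustrative, not those of the study:

```python
import numpy as np

def vd_mask(n, accel=4, center_frac=0.08, decay=2.0, seed=0):
    """Random variable-density subsampling mask for one phase-encode axis."""
    rng = np.random.default_rng(seed)
    k = np.abs(np.arange(n) - n // 2) / (n / 2)   # normalized distance from center
    prob = (1.0 - k) ** decay                     # higher density near the center
    center = k <= center_frac
    prob[center] = 1.0                            # fully sample the central band
    # rescale outer probabilities so the expected line count is n / accel
    outer = ~center
    prob[outer] *= (n / accel - center.sum()) / prob[outer].sum()
    mask = rng.random(n) < np.clip(prob, 0.0, 1.0)
    mask[center] = True
    return mask

m = vd_mask(144, accel=4)   # ~36 of 144 phase-encode lines acquired
```

    A 2D variable-density pattern is the outer logic applied along both phase-encode axes; the acquired lines would then feed a constrained reconstruction such as the TCR algorithm described above.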

  13. A sequential sampling account of response bias and speed-accuracy tradeoffs in a conflict detection task.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew

    2014-03-01

    Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks. © 2013 American Psychological Association

  14. Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images

    PubMed Central

    Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun

    2013-01-01

    This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for the fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves rendering speed by more than three times compared with the conventional algorithm while image quality is well preserved. PMID:23424608

  15. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
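
    The two-stage version of the design can be sketched as a simulation: every arm is assessed at a prescheduled interim, only the best-performing experimental arm continues alongside control, and the total sample size is known before the trial starts. A toy sketch with illustrative normal outcomes (note that the naive pooled estimate at the end ignores the selection bias introduced by picking the interim winner):

```python
import random

def drop_the_losers(control_mu, exp_mus, n_stage=50, sd=1.0, seed=1):
    """Two-stage drop-the-losers trial with a fixed total sample size.

    Stage 1: control and each experimental arm recruit n_stage patients.
    Interim: only the best experimental arm continues alongside control.
    Stage 2: the two surviving arms recruit n_stage patients each.
    """
    rng = random.Random(seed)
    draw = lambda mu, n: sum(rng.gauss(mu, sd) for _ in range(n)) / n

    stage1 = [draw(mu, n_stage) for mu in exp_mus]
    best = max(range(len(exp_mus)), key=stage1.__getitem__)
    # Naive pooled estimate; it ignores selection bias from interim winner-picking.
    best_mean = (stage1[best] + draw(exp_mus[best], n_stage)) / 2
    ctrl_mean = (draw(control_mu, n_stage) + draw(control_mu, n_stage)) / 2
    total_n = n_stage * (len(exp_mus) + 1) + 2 * n_stage
    return best, best_mean - ctrl_mean, total_n

best, effect, n_total = drop_the_losers(0.0, [0.1, 0.5, 0.1])
# n_total is fixed in advance, unlike a group-sequential design's random sample size
```

    Extending to more stages, as the paper proposes, drops further losers at each interim while keeping the total sample size deterministic.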

  16. Development of a sequential workflow based on LC-PRM for the verification of endometrial cancer protein biomarkers in uterine aspirate samples.

    PubMed

    Martinez-Garcia, Elena; Lesur, Antoine; Devis, Laura; Campos, Alexandre; Cabrera, Silvia; van Oostrum, Jan; Matias-Guiu, Xavier; Gil-Moreno, Antonio; Reventos, Jaume; Colas, Eva; Domon, Bruno

    2016-08-16

    About 30% of endometrial cancer (EC) patients are diagnosed at an advanced stage of the disease, which is associated with a drastic decrease in the 5-year survival rate. The identification of biomarkers in uterine aspirate samples, which are collected by a minimally invasive procedure, would improve early diagnosis of EC. We present a sequential workflow to select from a list of potential EC biomarkers, those which are the most promising to enter a validation study. After the elimination of confounding contributions by residual blood proteins, 52 potential biomarkers were analyzed in uterine aspirates from 20 EC patients and 18 non-EC controls by a high-resolution accurate mass spectrometer operated in parallel reaction monitoring mode. The differential abundance of 26 biomarkers was observed, and among them ten proteins showed a high sensitivity and specificity (AUC > 0.9). The study demonstrates that uterine aspirates are valuable samples for EC protein biomarkers screening. It also illustrates the importance of a biomarker verification phase to fill the gap between discovery and validation studies and highlights the benefits of high resolution mass spectrometry for this purpose. The proteins verified in this study have an increased likelihood to become a clinical assay after a subsequent validation phase.

  17. Self-supervised online metric learning with low rank constraint for scene categorization.

    PubMed

    Cong, Yang; Liu, Ji; Yuan, Junsong; Luo, Jiebo

    2013-08-01

    Conventional visual recognition systems usually train an image classifier in a batch mode with all training data provided in advance. However, in many practical applications, only a small number of training samples is available at the beginning, and many more arrive sequentially during online recognition. Because the image data characteristics could change over time, it is important for the classifier to adapt to the new data incrementally. In this paper, we present an online metric learning method to address the online scene recognition problem via adaptive similarity measurement. Given a number of labeled data followed by a sequential input of unseen testing samples, the similarity metric is learned to maximize the margin of the distance among different classes of samples. By considering the low rank constraint, our online metric learning model not only provides competitive performance compared with the state-of-the-art methods, but also guarantees convergence. A bi-linear graph is also defined to model the pair-wise similarity, and an unseen sample is labeled depending on the graph-based label propagation, while the model can also self-update using the more confident new samples. With its online learning ability, our methodology can handle large-scale streaming video data through incremental self-updating. We apply our model to online scene categorization; experiments on various benchmark datasets and comparisons with state-of-the-art methods demonstrate the effectiveness and efficiency of our algorithm.

  18. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
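
    The core idea of placing a normal prior on a large coefficient vector and using posterior samples to form credible intervals can be illustrated in the conjugate special case, where the posterior is Gaussian and can be sampled directly (the paper's non-conjugate OBSM setting would use MCMC instead; the basis, prior, and noise values below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Overcomplete Gaussian basis: more basis functions (30) than data points (15).
x = np.linspace(0.0, 1.0, 15)
centers = np.linspace(0.0, 1.0, 30)
X = np.exp(-(x[:, None] - centers[None, :]) ** 2 / 0.02)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.1, x.size)

# Normal prior N(0, tau2*I) on coefficients with known noise variance sigma2
# gives a Gaussian posterior that can be sampled without MCMC.
tau2, sigma2 = 1.0, 0.01
cov = np.linalg.inv(X.T @ X / sigma2 + np.eye(X.shape[1]) / tau2)
mean = cov @ X.T @ y / sigma2

draws = rng.multivariate_normal(mean, cov, size=2000)   # posterior samples
curves = draws @ X.T                                    # sampled mean curves
lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)     # 95% credible band
```

    The width of the credible band is exactly the kind of uncertainty measure a sequential design can exploit, e.g. placing the next design point where the band is widest.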

  19. Sequential Designs Based on Bayesian Uncertainty Quantification in Sparse Representation Surrogate Modeling

    DOE PAGES

    Chen, Ray -Bing; Wang, Weichung; Jeff Wu, C. F.

    2017-04-12

    A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction ofmore » sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. As a result, numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.« less

  20. Sequential sampling of visual objects during sustained attention.

    PubMed

    Jia, Jianrong; Liu, Ling; Fang, Fang; Luo, Huan

    2017-06-01

    In a crowded visual scene, attention must be distributed efficiently and flexibly over time and space to accommodate different contexts. It is well established that selective attention enhances the corresponding neural responses, presumably implying that attention would persistently dwell on the task-relevant item. Meanwhile, recent studies, mostly in divided attentional contexts, suggest that attention does not remain stationary but samples objects alternately over time, suggesting a rhythmic view of attention. However, it remains unknown whether the dynamic mechanism essentially mediates attentional processes at a general level. Importantly, there is also a complete lack of direct neural evidence reflecting whether and how the brain rhythmically samples multiple visual objects during stimulus processing. To address these issues, in this study, we employed electroencephalography (EEG) and a temporal response function (TRF) approach, which can dissociate responses that exclusively represent a single object from the overall neuronal activity, to examine the spatiotemporal characteristics of attention in various attentional contexts. First, attention, which is characterized by inhibitory alpha-band (approximately 10 Hz) activity in TRFs, switches between attended and unattended objects every approximately 200 ms, suggesting a sequential sampling even when attention is required to mostly stay on the attended object. Second, the attentional spatiotemporal pattern is modulated by the task context, such that alpha-mediated switching becomes increasingly prominent as the task requires a more uniform distribution of attention. Finally, the switching pattern correlates with attentional behavioral performance. Our work provides direct neural evidence supporting a generally central role of temporal organization mechanism in attention, such that multiple objects are sequentially sorted according to their priority in attentional contexts. 
The results suggest that selective attention, in addition to the classically posited attentional "focus," involves a dynamic mechanism for monitoring all objects outside of the focus. Our findings also suggest that attention implements a space (object)-to-time transformation by acting as a series of concatenating attentional chunks that operate on 1 object at a time.

  1. Sequential sampling of visual objects during sustained attention

    PubMed Central

    Jia, Jianrong; Liu, Ling; Fang, Fang

    2017-01-01

    In a crowded visual scene, attention must be distributed efficiently and flexibly over time and space to accommodate different contexts. It is well established that selective attention enhances the corresponding neural responses, presumably implying that attention would persistently dwell on the task-relevant item. Meanwhile, recent studies, mostly in divided attentional contexts, suggest that attention does not remain stationary but samples objects alternately over time, suggesting a rhythmic view of attention. However, it remains unknown whether the dynamic mechanism essentially mediates attentional processes at a general level. Importantly, there is also a complete lack of direct neural evidence reflecting whether and how the brain rhythmically samples multiple visual objects during stimulus processing. To address these issues, in this study, we employed electroencephalography (EEG) and a temporal response function (TRF) approach, which can dissociate responses that exclusively represent a single object from the overall neuronal activity, to examine the spatiotemporal characteristics of attention in various attentional contexts. First, attention, which is characterized by inhibitory alpha-band (approximately 10 Hz) activity in TRFs, switches between attended and unattended objects every approximately 200 ms, suggesting a sequential sampling even when attention is required to mostly stay on the attended object. Second, the attentional spatiotemporal pattern is modulated by the task context, such that alpha-mediated switching becomes increasingly prominent as the task requires a more uniform distribution of attention. Finally, the switching pattern correlates with attentional behavioral performance. Our work provides direct neural evidence supporting a generally central role of temporal organization mechanism in attention, such that multiple objects are sequentially sorted according to their priority in attentional contexts. 
The results suggest that selective attention, in addition to the classically posited attentional “focus,” involves a dynamic mechanism for monitoring all objects outside of the focus. Our findings also suggest that attention implements a space (object)-to-time transformation by acting as a series of concatenating attentional chunks that operate on 1 object at a time. PMID:28658261

  2. Optimal flexible sample size design with robust power.

    PubMed

    Zhang, Lanju; Cui, Lu; Yang, Bo

    2016-08-30

    It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
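
    The tension described here comes from how strongly the required sample size depends on the assumed effect size: for a two-arm z-test, the per-arm n scales with 1/δ², so a conservative assumption inflates the trial sharply. A quick sketch using only the standard library (illustrative design values):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sd=1.0, alpha=0.025, power=0.9):
    """Per-arm sample size for a two-arm one-sided z-test of mean difference delta."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha) + z(power)) * sd / delta) ** 2)

n_optimistic = n_per_arm(0.5)     # larger assumed effect -> smaller trial
n_conservative = n_per_arm(0.3)   # smaller assumed effect -> much larger trial
```

    Group sequential and re-estimation designs are two different hedges against guessing delta wrong; the paper's contribution is a criterion for choosing among such designs.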

  3. Effects of model complexity and priors on estimation using sequential importance sampling/resampling for species conservation

    USGS Publications Warehouse

    Dunham, Kylee; Grand, James B.

    2016-01-01

    We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
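
    A minimal SISR filter of the kind used here can be sketched in three steps per time point: propagate particles through the process model, weight them by the observation likelihood, and resample to fight particle depletion. The growth model, noise levels, and counts below are illustrative assumptions, not the study's:

```python
import math
import random

def sisr_filter(counts, n_particles=2000, growth=1.05, proc_sd=0.05,
                obs_sd=20.0, seed=0):
    """Minimal sequential importance sampling/resampling (SISR) filter.

    Process model: N_t = growth * N_{t-1} * (1 + noise).
    Observation model: count_t ~ Normal(N_t, obs_sd).
    Returns the filtered posterior mean of N_t at each observation.
    """
    rng = random.Random(seed)
    particles = [counts[0] * (1 + rng.gauss(0.0, 0.1)) for _ in range(n_particles)]
    means = []
    for y in counts:
        # 1. propagate particles through the process model
        particles = [p * growth * (1 + rng.gauss(0.0, proc_sd)) for p in particles]
        # 2. importance weights from the observation likelihood
        w = [math.exp(-0.5 * ((y - p) / obs_sd) ** 2) for p in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        means.append(sum(wi * p for wi, p in zip(w, particles)))
        # 3. resample to combat weight degeneracy (particle depletion)
        particles = rng.choices(particles, weights=w, k=n_particles)
    return means

obs = [500, 530, 560, 580, 615, 640]     # illustrative yearly counts
filtered = sisr_filter(obs)
```

    Estimating static parameters (such as growth) alongside the states is where depletion bites hardest, which is why the study applies kernel smoothing to the parameter particles.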

  4. Aging and sequential modulations of poorer strategy effects: An EEG study in arithmetic problem solving.

    PubMed

    Hinault, Thomas; Lemaire, Patrick; Phillips, Natalie

    2016-01-01

    This study investigated age-related differences in electrophysiological signatures of sequential modulations of poorer strategy effects. Sequential modulations of poorer strategy effects refer to decreased poorer strategy effects (i.e., poorer performance when the cued strategy is not the best) on current problem following poorer strategy problems compared to after better strategy problems. Analyses on electrophysiological (EEG) data revealed important age-related changes in time, frequency, and coherence of brain activities underlying sequential modulations of poorer strategy effects. More specifically, sequential modulations of poorer strategy effects were associated with earlier and later time windows (i.e., between 200 and 550 ms and between 850 and 1250 ms). Event-related potentials (ERPs) also revealed an earlier onset in older adults, together with more anterior and less lateralized activations. Furthermore, sequential modulations of poorer strategy effects were associated with theta and alpha frequencies in young adults while these modulations were found in delta frequency and theta inter-hemispheric coherence in older adults, consistent with qualitatively distinct patterns of brain activity. These findings have important implications to further our understanding of age-related differences and similarities in sequential modulations of cognitive control processes during arithmetic strategy execution. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Sampling strategies for subsampled segmented EPI PRF thermometry in MR guided high intensity focused ultrasound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odéen, Henrik, E-mail: h.odeen@gmail.com; Diakite, Mahamadou; Todd, Nick

    2014-09-15

    Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. 
Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm³ FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations.

  6. Sequential Leaching of Chromium Contaminated Sediments - A Study Characterizing Natural Attenuation

    NASA Astrophysics Data System (ADS)

    Musa, D.; Ding, M.; Beroff, S.; Rearick, M.; Perkins, G.; WoldeGabriel, G. W.; Ware, D.; Harris, R.; Kluk, E.; Katzman, D.; Reimus, P. W.; Heikoop, J. M.

    2015-12-01

    Natural attenuation is an important process in slowing the transport of hexavalent chromium, Cr(VI), an anthropogenic environmental contaminant, either by adsorption of Cr(VI) to sediments or by reduction to nontoxic trivalent chromium, Cr(III). The capacity and mechanism of attenuation are explored in this sequential leaching study of different particle size fractions of chromium-contaminated sediments and similar uncontaminated sediments from the regional aquifer near Los Alamos, New Mexico. Under this leaching protocol, each sediment sample is split in two: one half is leached three times with a 0.1 M sodium bicarbonate/carbonate solution, while the other half is leached three times with 0.01 M nitric acid, followed by two leaches at successively higher nitric acid concentrations. Based on the amphoteric nature of chromium, alkaline leaching is used to establish the amount of Cr(VI) sorbed on the sediment, whereas acid leaching is used to establish the amount of Cr(III). The weak acid is expected to release the attenuated anthropogenic Cr(III) without affecting Cr-bearing minerals, while the subsequent stronger acid is expected to leach Cr(III) incorporated in the minerals. The efficiency and validity of the sequential leaching method are assessed by comparing the leaching behavior of bentonite and biotite samples with and without loaded Cr(VI). A 97% chromium mass balance for the leached Cr(VI)-loaded bentonite and biotite demonstrates the viability of this method for further use on contaminated sediments. By comparing leachate results for chromium and other major and trace elements between contaminated and uncontaminated sediments, the signature of anthropogenic chromium is determined. Further mineralogical characterization of the sediments provides a quantitative measure of the natural attenuation capacity for chromium. Understanding these results is pertinent to delineating the optimal procedure for remediation of Cr(VI) in the regional aquifer near Los Alamos.
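
The mass-balance check behind a protocol like this is simple bookkeeping: sum the analyte recovered across all sequential leach steps and compare it to the loaded mass. The Cr masses below are made-up illustrative values, not the study's measurements, and the sketch simplifies by pooling both leach trains.

```python
# Illustrative sequential-leaching bookkeeping; all masses are hypothetical (µg).
alkaline_leaches = [41.0, 5.5, 1.2]          # three bicarbonate/carbonate steps: sorbed Cr(VI)
acid_leaches = [30.0, 12.0, 4.0, 2.1, 1.0]   # 0.01 M HNO3 x3, then two stronger acid steps: Cr(III)

def mass_balance_pct(recovered_steps, loaded_mass):
    """Percent of the loaded Cr recovered over all sequential leach steps."""
    return 100.0 * sum(recovered_steps) / loaded_mass

recovery = mass_balance_pct(alkaline_leaches + acid_leaches, loaded_mass=100.0)
```

A recovery near 100% (the study reports 97%) indicates the leach sequence accounts for essentially all of the loaded chromium.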

  7. Phosphorus Concentrations in Sequentially Fractionated Soil Samples as Affected by Digestion Methods

    PubMed Central

    do Nascimento, Carlos A. C.; Pagliari, Paulo H.; Schmitt, Djalma; He, Zhongqi; Waldrip, Heidi

    2015-01-01

    Sequential fractionation has helped improve our understanding of the lability and bioavailability of P in soil. Nevertheless, there have been no reports on how manipulation of the different fractions prior to analysis affects the total P (TP) concentrations measured. This study investigated the effects of sample digestion, filtration, and acidification on the TP concentrations determined by ICP-OES in 20 soil samples. TP in extracts was determined by ICP-OES either without digestion, following block digestion, or following autoclave digestion. The effects of sample filtration and acidification of undigested alkaline extracts prior to ICP-OES were also evaluated. Results showed that TP concentrations were greatest in the block-digested extracts, though the variability introduced by block digestion was also the highest. Acidification of NaHCO3 extracts resulted in lower TP concentrations, while acidification of NaOH extracts increased or decreased TP concentrations unpredictably. The precision observed with ICP-OES of undigested extracts suggests this should be the preferred method for TP determination in sequentially extracted samples. Thus, the observations reported in this work should be helpful for appropriate sample handling, thereby improving the precision of P determination. The results are also useful for comparing and discussing literature data when sample treatments differ. PMID:26647644

  8. Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Abotteen, K. M. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.

  9. Phosphorus concentrations in sequentially fractionated soil samples as affected by digestion methods

    USDA-ARS?s Scientific Manuscript database

    Sequential fractionation has been used for several decades for improving our understanding on the effects of agricultural practices and management on the lability and bioavailability of phosphorus in soil, manure, and other soil amendments. Nevertheless, there have been no reports on how manipulatio...

  10. Context-dependent decision-making: a simple Bayesian model

    PubMed Central

    Lloyd, Kevin; Leslie, David S.

    2013-01-01

    Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or ‘contexts’ allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects. PMID:23427101
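
The action-selection component of this model, Thompson sampling, can be sketched in isolation: each arm's success probability is drawn from its Beta posterior and the arm with the largest draw is chosen, so parameter uncertainty itself drives exploration. The reward probabilities and trial count below are hypothetical, and the context-inference machinery (the Chinese restaurant process with inertia) is omitted.

```python
import random

def thompson_choice(counts, rng):
    """Sample each arm's success rate from its Beta posterior and
    pick the arm with the largest draw (Thompson sampling)."""
    draws = [rng.betavariate(s + 1, f + 1) for s, f in counts]
    return max(range(len(draws)), key=draws.__getitem__)

rng = random.Random(0)
counts = [[0, 0], [0, 0]]   # [successes, failures] per arm
true_p = [0.2, 0.8]         # hypothetical reward probabilities
for _ in range(500):
    arm = thompson_choice(counts, rng)
    reward = rng.random() < true_p[arm]
    counts[arm][0 if reward else 1] += 1
```

After a few hundred trials the posterior for the better arm concentrates and it receives the large majority of choices, while occasional draws from the other arm's wider posterior keep a trickle of exploration.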

  11. Context-dependent decision-making: a simple Bayesian model.

    PubMed

    Lloyd, Kevin; Leslie, David S

    2013-05-06

    Many phenomena in animal learning can be explained by a context-learning process whereby an animal learns about different patterns of relationship between environmental variables. Differentiating between such environmental regimes or 'contexts' allows an animal to rapidly adapt its behaviour when context changes occur. The current work views animals as making sequential inferences about current context identity in a world assumed to be relatively stable but also capable of rapid switches to previously observed or entirely new contexts. We describe a novel decision-making model in which contexts are assumed to follow a Chinese restaurant process with inertia and full Bayesian inference is approximated by a sequential-sampling scheme in which only a single hypothesis about current context is maintained. Actions are selected via Thompson sampling, allowing uncertainty in parameters to drive exploration in a straightforward manner. The model is tested on simple two-alternative choice problems with switching reinforcement schedules and the results compared with rat behavioural data from a number of T-maze studies. The model successfully replicates a number of important behavioural effects: spontaneous recovery, the effect of partial reinforcement on extinction and reversal, the overtraining reversal effect, and serial reversal-learning effects.

  12. Bayesian Treed Multivariate Gaussian Process with Adaptive Design: Application to a Carbon Capture Unit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik

    2014-05-16

    Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. High-resolution simulations are often computationally expensive, making parametric studies at different input values impractical. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and prior distributions facilitates the different Markov chain Monte Carlo (MCMC) moves. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and the BTMGP to model the multiphase flow in a full-scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.
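
The core of sequential design with a Gaussian process can be sketched far more simply than the treed multivariate model above: fit a GP to the current design, then add the candidate input with the largest predictive variance. The 1D toy problem, RBF kernel, and length-scale below are assumptions for illustration only.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def posterior_var(x_train, x_cand, noise=1e-6):
    """Predictive variance of a zero-mean, unit-scale GP at candidate points."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_cand, x_train)
    return 1.0 - np.einsum('ij,ij->i', Ks @ np.linalg.inv(K), Ks)

x_train = np.array([0.1, 0.9])        # initial design
cands = np.linspace(0.0, 1.0, 101)    # candidate inputs
for _ in range(3):                    # three sequential acquisitions
    v = posterior_var(x_train, cands)
    x_train = np.append(x_train, cands[np.argmax(v)])
```

The first acquisition lands midway between the two initial points, where the model is most uncertain; later ones fill the remaining gaps, which is the "most informative values" behavior the abstract describes.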

  13. Backloading in the Sequential Lineup Prevents Within-Lineup Criterion Shifts that Undermine Eyewitness Identification Performance

    ERIC Educational Resources Information Center

    Horry, Ruth; Palmer, Matthew A.; Brewer, Neil

    2012-01-01

    Although the sequential lineup has been proposed as a means of protecting innocent suspects from mistaken identification, little is known about the importance of various aspects of the procedure. One potentially important detail is that witnesses should not know how many people are in the lineup. This is sometimes achieved by…

  14. Sample size determination in group-sequential clinical trials with two co-primary endpoints

    PubMed Central

    Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi

    2014-01-01

    We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is when the superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behaviors of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate the impact on the power and Type I error rate. PMID:24676799
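
Power for the "superiority on both endpoints" requirement can be approximated by simulating two correlated standardized test statistics, which is one simple way to see how correlation between co-primary endpoints affects sample size. The single-stage simplification, effect sizes, correlation, and n below are illustrative assumptions, not the paper's designs.

```python
import math
import random
from statistics import NormalDist

def joint_power(n, delta1, delta2, rho, alpha=0.025, sims=20000, seed=1):
    """Simulated probability of showing superiority on BOTH endpoints at a
    single analysis, for standardized effects delta1, delta2 and correlation rho."""
    z_crit = NormalDist().inv_cdf(1.0 - alpha)
    rng = random.Random(seed)
    hits = 0
    for _ in range(sims):
        u1 = rng.gauss(0.0, 1.0)
        u2 = rho * u1 + math.sqrt(1.0 - rho**2) * rng.gauss(0.0, 1.0)
        if (delta1 * math.sqrt(n / 2) + u1 > z_crit
                and delta2 * math.sqrt(n / 2) + u2 > z_crit):
            hits += 1
    return hits / sims

power = joint_power(n=150, delta1=0.4, delta2=0.4, rho=0.5)
```

Requiring both endpoints to succeed always gives joint power below either marginal power, so the per-group n must grow relative to a single-endpoint trial; higher correlation between endpoints reduces that penalty.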

  15. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.
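
Under simple random sampling, LQAS classification error is just a binomial tail probability: the chance that the number of positives falls on the wrong side of the decision value. The sample size and decision value below are hypothetical, chosen only to show the calculation.

```python
from math import comb

def lqas_accept_prob(n, d, p):
    """P(at most d positives in a sample of n), i.e. the area is classified
    as 'below threshold' under the LQAS rule, when true prevalence is p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d + 1))

# Hypothetical rule: sample n = 201 children, decision value d = 29.
alpha = 1 - lqas_accept_prob(201, 29, 0.10)  # misclassify a 10% (acceptable) area
beta = lqas_accept_prob(201, 29, 0.20)       # misclassify a 20% (critical) area
```

Cluster or sequential variants change these error rates, which is exactly what the simulation study above quantifies.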

  16. ASSESSMENT OF A SEQUENTIAL EXTRACTION PROCEDURE FOR PERTURBED LEAD-CONTAMINATED SAMPLES WITH AND WITHOUT PHOSPHOROUS AMENDMENTS

    EPA Science Inventory

    Sequential extraction procedures are used to determine the solid-phase association in which elements of interest exist in soil and sediment matrices. Foundational work by Tessier et al. (1) has found widespread acceptance and has worked tolerably as an operational definition for...

  17. Sequential Requests and the Problem of Message Sampling.

    ERIC Educational Resources Information Center

    Cantrill, James Gerard

    S. Jackson and S. Jacobs's criticism of "single message" designs in communication research served as a framework for a study that examined the differences between various sequential request paradigms. The study sought to answer the following questions: (1) What were the most naturalistic request sequences assured to replicate…

  18. Phosphorus concentrations in sequentially fractionated soil samples as affected by digestion methods

    USDA-ARS?s Scientific Manuscript database

    Sequential fractionation has been used for several decades for improving our understanding on the effects of agricultural practices and management on the lability and bioavailability of P in soil, manure, and other soil amendments. Nevertheless, there have been no reports on how manipulation of diff...

  19. Sequential Multiplex Analyte Capturing for Phosphoprotein Profiling*

    PubMed Central

    Poetz, Oliver; Henzler, Tanja; Hartmann, Michael; Kazmaier, Cornelia; Templin, Markus F.; Herget, Thomas; Joos, Thomas O.

    2010-01-01

    Microarray-based sandwich immunoassays can simultaneously detect dozens of proteins. However, their use in quantifying large numbers of proteins is hampered by cross-reactivity and incompatibilities caused by the immunoassays themselves. Sequential multiplex analyte capturing addresses these problems by repeatedly probing the same sample with different sets of antibody-coated, magnetic suspension bead arrays. As a miniaturized immunoassay format, suspension bead array-based assays fulfill the criteria of the ambient analyte theory, and our experiments reveal that the analyte concentrations are not significantly changed. The value of sequential multiplex analyte capturing was demonstrated by probing tumor cell line lysates for the abundance of seven different receptor tyrosine kinases and their degree of phosphorylation and by measuring the complex phosphorylation pattern of the epidermal growth factor receptor in the same sample from the same cavity. PMID:20682761

  20. Development of a sequential injection-square wave voltammetry method for determination of paraquat in water samples employing the hanging mercury drop electrode.

    PubMed

    dos Santos, Luciana B O; Infante, Carlos M C; Masini, Jorge C

    2010-03-01

    This work describes the development and optimization of a sequential injection method to automate the determination of paraquat by square-wave voltammetry employing a hanging mercury drop electrode. Automation by sequential injection enhanced the sampling throughput, improving the sensitivity and precision of the measurements as a consequence of the highly reproducible and efficient conditions of mass transport of the analyte toward the electrode surface. For instance, 212 analyses can be made per hour if the sample/standard solution is prepared off-line and the sequential injection system is used just to inject the solution towards the flow cell. In-line sample conditioning reduces the sampling frequency to 44 h(-1). Experiments were performed in 0.10 M NaCl, which was the carrier solution, using a frequency of 200 Hz, a pulse height of 25 mV, a potential step of 2 mV, and a flow rate of 100 µL s(-1). For a concentration range between 0.010 and 0.25 mg L(-1), the current (i(p), µA) read at the potential corresponding to the peak maximum fitted the following linear equation with the paraquat concentration (mg L(-1)): i(p) = (-20.5 ± 0.3)C (paraquat) - (0.02 ± 0.03). The limits of detection and quantification were 2.0 and 7.0 µg L(-1), respectively. The accuracy of the method was evaluated by recovery studies using spiked water samples that were also analyzed by molecular absorption spectrophotometry after reduction of paraquat with sodium dithionite in an alkaline medium. No evidence of statistically significant differences between the two methods was observed at the 95% confidence level.
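
Converting a measured peak current back to a concentration is a direct inversion of the reported calibration line. The slope and intercept below are the central values from the abstract's equation; the peak-current reading is hypothetical.

```python
def paraquat_conc(i_peak, slope=-20.5, intercept=-0.02):
    """Invert the reported calibration line i_p = slope*C + intercept
    (i_p in µA, C in mg/L) to estimate a paraquat concentration."""
    return (i_peak - intercept) / slope

conc = paraquat_conc(-2.07)   # hypothetical peak current of -2.07 µA
loq = 0.007                   # reported LOQ: 7.0 µg/L = 0.007 mg/L
quantifiable = conc >= loq
```

Results below the 0.007 mg/L limit of quantification would be reported as detected-but-not-quantifiable (or not detected, below 0.002 mg/L).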

  1. Determination of the mode of occurrence of As, Cr, and Hg in three Chinese coal samples by sequential acid leaching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, B.; Li, W.; Wang, G.

    2007-07-01

    Sequential acid leaching was used to leach minerals and the trace elements they contain. One-step leaching uses concentrated nitric acid as solvent, while three-step leaching uses 5M hydrochloric acid, concentrated hydrofluoric acid, and concentrated hydrochloric acid as solvents. The sequential acid leaching by three-and one-step leach was also examined. The results showed that one-step leaching could leach over 80% of arsenic from coal samples, and also could leach mercury to a certain degree. During one-step leaching, little chromium is removed, but it is available to leach by three-step leaching; and during the sequential acid leaching by three and one-step leaching,more » almost 98% ash is leached. The result of acid leaching could also give detailed information on mode of occurrence of As, Cr, and Hg, which could be classified into: silicate association, pyrite association, organic association, and carbonates and sulfates association. Over half of chromium in the three coals is associated with organic matters and the rest is associated with silicates. The mode of occurrence of arsenic and mercury is mainly associated with different mineral matters depending on the coal samples studied.« less

  2. Compact, Automated Centrifugal Slide-Staining System

    NASA Technical Reports Server (NTRS)

    Feeback, Daniel L.; Clarke, Mark S. F.

    2004-01-01

    The Directional Acceleration Vector-Driven Displacement of Fluids (DAVD-DOF) system, under development at the time of reporting the information for this article, would be a relatively compact, automated, centrifugally actuated system for staining blood smears and other microbiological samples on glass microscope slides in either a microgravitational or a normal Earth gravitational environment. The DAVD-DOF concept is a successor to the centrifuge-operated slide stainer (COSS) concept, which was reported in Slide-Staining System for Microgravity or Gravity (MSC-22949), NASA Tech Briefs, Vol. 25, No. 1 (January, 2001), page 64. The COSS includes reservoirs and a staining chamber that contains a microscope slide to which a biological sample is affixed. The staining chamber is sequentially filled with and drained of staining and related liquids from the reservoirs by use of a weighted plunger to force liquid from one reservoir to another at a constant level of hypergravity maintained in a standard swing-bucket centrifuge. In the DAVD-DOF system, a staining chamber containing a sample would also be sequentially filled and emptied, but with important differences. Instead of a simple microscope slide, one would use a special microscope slide on which would be fabricated a network of very small reservoirs and narrow channels connected to a staining chamber (see figure). Unlike in the COSS, displacement of liquid would be effected by use of the weight of the liquid itself, rather than the weight of a plunger.

  3. Influence of Multidimensionality on Convergence of Sampling in Protein Simulation

    NASA Astrophysics Data System (ADS)

    Metsugi, Shoichi

    2005-06-01

    We study the problem of convergence of sampling in protein simulation originating in the multidimensionality of protein’s conformational space. Since several important physical quantities are given by second moments of dynamical variables, we attempt to obtain the time of simulation necessary for their sufficient convergence. We perform a molecular dynamics simulation of a protein and the subsequent principal component (PC) analysis as a function of simulation time T. As T increases, PC vectors with smaller amplitude of variations are identified and their amplitudes are equilibrated before identifying and equilibrating vectors with larger amplitude of variations. This sequential identification and equilibration mechanism makes protein simulation a useful method although it has an intrinsic multidimensional nature.

  4. Sampling methods, dispersion patterns, and fixed precision sequential sampling plans for western flower thrips (Thysanoptera: Thripidae) and cotton fleahoppers (Hemiptera: Miridae) in cotton.

    PubMed

    Parajulee, M N; Shrestha, R B; Leser, J F

    2006-04-01

    A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave similar results to the visual method in detecting adult thrips, but the washing technique detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's Power Law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with an increase in fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
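
The fixed-precision sample sizes quoted above follow from Taylor's power law, s² = a·m^b, combined with the requirement that the standard error be a fixed fraction D of the mean, which gives n = a·m^(b−2)/D². The coefficients a and b below are illustrative, not the paper's fitted values.

```python
def min_sample_size(mean, a, b, precision=0.25):
    """Minimum sample size for a fixed-precision (SE/mean = precision)
    estimate when variance follows Taylor's power law, s^2 = a * mean^b."""
    return (a * mean ** (b - 2)) / precision ** 2

n_low = min_sample_size(1.0, a=2.0, b=1.4)    # at ~1 insect per plant
n_high = min_sample_size(10.0, a=2.0, b=1.4)  # at ~10 insects per plant
```

Because b < 2 for aggregated but not extremely clumped populations, the required sample size shrinks as density rises, matching the pattern of 15 plants at one thrips per plant versus nine at ten per plant.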

  5. Kullback-Leibler information function and the sequential selection of experiments to discriminate among several linear models

    NASA Technical Reports Server (NTRS)

    Sidik, S. M.

    1972-01-01

    The error variance of the process, prior multivariate normal distributions of the parameters of the models, and prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large and small sample behavior of the sequential adaptive procedure.
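
The scoring step can be illustrated for the simplest case: when two rival models predict normally distributed responses at a design point, the Kullback-Leibler divergence between those predictive normals measures how well that point discriminates them. The design points and predicted means below are hypothetical.

```python
import math

def kl_normal(mu0, s0, mu1, s1):
    """KL divergence KL(N(mu0, s0^2) || N(mu1, s1^2))."""
    return math.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

# Hypothetical (model A mean, model B mean) predictions at two candidate
# design points, both with unit predictive SD; choose the better discriminator.
predictions = {0.2: (1.0, 1.2), 0.8: (1.0, 2.5)}
scores = {x: kl_normal(mu_a, 1.0, mu_b, 1.0)
          for x, (mu_a, mu_b) in predictions.items()}
best_x = max(scores, key=scores.get)
```

The point where the models' predictions differ most is selected, which is the intuition behind maximizing expected KL information in the sequential procedure.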

  6. Spatial Distribution and Sampling Plans for Grapevine Plant Canopy-Inhabiting Scaphoideus titanus (Hemiptera: Cicadellidae) Nymphs.

    PubMed

    Rigamonti, Ivo E; Brambilla, Carla; Colleoni, Emanuele; Jermini, Mauro; Trivellone, Valeria; Baumgärtner, Johann

    2016-04-01

    The paper deals with the study of the spatial distribution and the design of sampling plans for estimating nymph densities of the grape leafhopper Scaphoideus titanus Ball in vine plant canopies. In a reference vineyard sampled for model parameterization, leaf samples were repeatedly taken according to a multistage, stratified, random sampling procedure, and data were subjected to an ANOVA. There were no significant differences in density among the strata within the vineyard or between the two strata with basal and apical leaves. The significant differences between densities on trunk and productive shoots led to the adoption of two-stage (leaves and plants) and three-stage (leaves, shoots, and plants) sampling plans for trunk shoot- and productive shoot-inhabiting individuals, respectively. The mean crowding to mean relationship used to analyze the nymphs' spatial distribution revealed aggregated distributions. In both the enumerative and the sequential enumerative sampling plans, the number of leaves of trunk shoots, and of leaves and shoots of productive shoots, was kept constant while the number of plants varied. In additional vineyards data were collected and used to test the applicability of the distribution model and the sampling plans. The tests confirmed the applicability 1) of the mean crowding to mean regression model on the plant and leaf stages for representing trunk shoot-inhabiting distributions, and on the plant, shoot, and leaf stages for productive shoot-inhabiting nymphs, 2) of the enumerative sampling plan, and 3) of the sequential enumerative sampling plan. In general, sequential enumerative sampling was more cost efficient than enumerative sampling.
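
The mean crowding statistic underlying the regression above is Lloyd's m* = m + s²/m − 1; a ratio m*/m above one indicates aggregation. The leaf counts below are made-up values used only to show the computation.

```python
def mean_crowding(counts):
    """Lloyd's mean crowding, m* = m + s^2/m - 1; m*/m > 1 indicates
    an aggregated (clumped) spatial distribution."""
    n = len(counts)
    m = sum(counts) / n
    var = sum((c - m) ** 2 for c in counts) / (n - 1)
    return m + var / m - 1.0

# Hypothetical nymph counts on eight plants (illustrative only).
counts = [0, 0, 1, 0, 7, 2, 0, 6]
m = sum(counts) / len(counts)
mstar = mean_crowding(counts)
```

Regressing m* on m across many samples gives the intercept and slope that parameterize both the enumerative and sequential enumerative sampling plans.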

  7. Sequential and Simultaneous Processing in Children with Learning Disabilities: An Attempted Replication.

    ERIC Educational Resources Information Center

    Bain, Sherry K.

    1993-01-01

    Analysis of Kaufman Assessment Battery for Children (K-ABC) Sequential and Simultaneous Processing scores of 94 children (ages 6-12) with learning disabilities produced factor patterns generally supportive of the traditional K-ABC Mental Processing structure with the exception of Spatial Memory. The sample exhibited relative processing strengths…

  8. A Process Improvement Evaluation of Sequential Compression Device Compliance and Effects of Provider Intervention.

    PubMed

    Beachler, Jason A; Krueger, Chad A; Johnson, Anthony E

    This process improvement study sought to evaluate orthopaedic patients' compliance with sequential compression devices and to monitor any improvement in compliance following an educational intervention. All non-intensive care unit primary orthopaedic patients were evaluated at random times, and their compliance with sequential compression devices was monitored and recorded. Following a 2-week period of data collection, an educational flyer was displayed in every patient's room and nursing staff held an in-service training event focusing on the importance of sequential compression device use in the surgical patient. Patients were then monitored, again at random, and compliance was recorded. With the addition of a simple flyer and a single in-service on the importance of mechanical compression in the surgical patient, a significant improvement in compliance was documented at the authors' institution, from 28% to 59% (p < .0001).
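
A before/after comparison of compliance rates like this is typically tested with a two-proportion z-test. The observation counts below are hypothetical values consistent with the reported 28% and 59% rates, since the abstract does not give the denominators.

```python
import math
from statistics import NormalDist

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: 56/200 compliant before, 118/200 after the intervention.
z, p_value = two_prop_z(56, 200, 118, 200)
```

With a few hundred observations per period, a jump from 28% to 59% yields a z statistic far beyond conventional thresholds, consistent with the reported p < .0001.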

  9. Emotional neglect in childhood shapes social dysfunctioning in adults by influencing the oxytocin and the attachment system: Results from a population-based study.

    PubMed

    Müller, Laura E; Bertsch, Katja; Bülau, Konstatin; Herpertz, Sabine C; Buchheim, Anna

    2018-06-01

    Early life maltreatment (ELM) is the major single risk factor for impairments in social functioning and mental health in adulthood. One of the most prevalent and most rapidly increasing forms of ELM is emotional neglect. According to bio-behavioral synchrony assumptions, the oxytocin and attachment systems play an important mediating role in the interplay between emotional neglect and social dysfunctioning. Therefore, the aim of the present study was to investigate whether fear and avoidance of social situations, two important and highly prevalent facets of social dysfunctioning in adulthood, are shaped by emotional neglect, plasma oxytocin levels and attachment representations. We assessed emotional neglect as well as other forms of ELM with the Childhood Trauma Questionnaire, current attachment representations with the Adult Attachment Projective Picture System, and fear and avoidance of social situations with the Liebowitz Social Anxiety Scale in a population-based sample of N = 121 men and women. Furthermore, 4.9 ml blood samples were drawn from each participant to assess peripheral plasma oxytocin levels. Applying a sequential mediation model, results revealed that emotional neglect was associated with lower plasma oxytocin levels, which in turn were associated with insecure attachment representations, which were related to elevated fear and avoidance of social situations (a1·d21·b2: F(3,117) = 20.84, P < .001). Plasma oxytocin and current attachment representations hence fully and sequentially mediate the effects of emotional neglect on social fear and avoidance, two important facets of adult social dysfunctioning, confirming bio-behavioral synchrony assumptions. Copyright © 2018. Published by Elsevier B.V.

  10. Diagnostic test accuracy and prevalence inferences based on joint and sequential testing with finite population sampling.

    PubMed

    Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O

    2004-07-30

    The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population (e.g. a child-care centre, a village in Africa, or a cattle herd) are sampled, or if the sample size is large relative to the population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real data sets and one simulated data set, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
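
The reason finite-population inference gives tighter prevalence estimates can be seen in the variance formulas: hypergeometric sampling carries a finite population correction that the binomial model lacks. The herd size and prevalence below are hypothetical.

```python
def binomial_var(n, p):
    """Variance of the number of positives under infinite-population
    (binomial) sampling of n individuals at prevalence p."""
    return n * p * (1 - p)

def hypergeometric_var(n, N, p):
    """Variance under finite-population sampling; the finite population
    correction (N - n)/(N - 1) shrinks it below the binomial variance."""
    return n * p * (1 - p) * (N - n) / (N - 1)

v_binom = binomial_var(50, 0.3)            # infinite-population assumption
v_hyper = hypergeometric_var(50, 60, 0.3)  # 50 of 60 animals in a herd sampled
```

When n approaches N the correction factor approaches zero, which is why the binomial model is only warranted when the sample is small relative to the population.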

  11. Development of binomial sequential sampling plans for forecasting Listronotus maculicollis (Coleoptera: Curculionidae) larvae based on the relationship to adult counts and turfgrass damage.

    PubMed

    McGraw, Benjamin A; Koppenhöfer, Albrecht M

    2009-06-01

    Binomial sequential sampling plans were developed to forecast larval damage to golf course turfgrass by the weevil Listronotus maculicollis Kirby (Coleoptera: Curculionidae) and to aid in the development of integrated pest management programs for the weevil. Populations of emerging overwintered adults were sampled over a 2-yr period to determine the relationship between adult counts, larval density, and turfgrass damage. Larval density and the composition of preferred host plants (Poa annua L.) significantly affected the expression of turfgrass damage. Multiple regression indicates that damage may occur in moderately mixed P. annua stands with as few as 10 larvae per 0.09 m2. However, >150 larvae were required before damage became apparent in pure Agrostis stolonifera L. plots. Adult counts during peaks in emergence, as well as cumulative counts across the emergence period, were significantly correlated with future larval densities. Eight binomial sequential sampling plans, based on two tally thresholds for classifying infestation (T = 1 and T = 2 adults) and four adult density thresholds (0.5, 0.85, 1.15, and 1.35 per 3.34 m2), were developed to forecast the likelihood of turfgrass damage using adult counts during peak emergence. The Resampling for Validation of Sample Plans software was used to validate the sampling plans with field-collected data sets. All sampling plans delivered accurate classifications (correct decisions were made in 84.4-96.8% of cases) in a practical timeframe (average sampling cost <22.7 min).

  12. Remote sensing data with the conditional Latin hypercube sampling and geostatistical approach to delineate landscape changes induced by large chronological physical disturbances.

    PubMed

    Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh

    2009-01-01

    This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the Chi-Chi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of the NDVI images. The variogram analysis results demonstrate that the spatial patterns of disturbed landscapes were successfully delineated in the study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of the typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from the 62,500 grid cells of the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced the spatial patterns of the NDVI images. Overall, the proposed approach, which integrates conditional Latin hypercube sampling, variogram analysis, kriging and sequential Gaussian simulation on remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on the spatial characteristics of landscape changes, including spatial variability and heterogeneity.
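    The variogram analyses described above rest on the empirical semivariogram. A minimal sketch of Matheron's classical estimator on a one-dimensional transect follows; the NDVI values are hypothetical, and a real analysis would use 2-D pixel coordinates with lag and direction tolerances.

```python
# Matheron's classical semivariogram estimator:
#   gamma(h) = (1 / (2 |N(h)|)) * sum over pairs at lag h of (z_i - z_j)^2

def semivariogram(z, max_lag):
    gammas = {}
    for h in range(1, max_lag + 1):
        sq_diffs = [(z[i] - z[i + h]) ** 2 for i in range(len(z) - h)]
        gammas[h] = sum(sq_diffs) / (2 * len(sq_diffs))
    return gammas

# Hypothetical NDVI values along a transect of pixels
ndvi = [0.61, 0.58, 0.55, 0.40, 0.38, 0.35, 0.52, 0.57, 0.60, 0.44]
for h, g in semivariogram(ndvi, 3).items():
    print(h, round(g, 4))
```

    Plotting gamma(h) against h and fitting a model (spherical, exponential, etc.) to it is what supplies the spatial-structure parameters that kriging and sequential Gaussian simulation then consume.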

  13. The effect of a sequential structure of practice for the training of perceptual-cognitive skills in tennis

    PubMed Central

    2017-01-01

    Objective Anticipation of opponent actions, through the use of advanced (i.e., pre-event) kinematic information, can be trained using video-based temporal occlusion. Typically, this involves isolated opponent skills/shots presented as trials in a random order. However, two different areas of research, concerning representative task design and contextual (non-kinematic) information, suggest that this structure of practice restricts expert performance. The aim of this study was to examine the effect of a sequential structure of practice during video-based training of anticipatory behavior in tennis, as well as the transfer of these skills to the performance environment. Methods In a pre-practice-retention-transfer design, participants viewed life-sized video of tennis rallies across practice in either a sequential order (sequential group), in which participants were exposed to opponent skills/shots in the order they occur in the sport, or a non-sequential, random order (non-sequential group). Results In the video-based retention test, the sequential group was significantly more accurate in their anticipatory judgments when the retention condition replicated the sequential structure compared to the non-sequential group. In the non-sequential retention condition, the non-sequential group was more accurate than the sequential group. In the field-based transfer test, overall decision time was significantly faster in the sequential group compared to the non-sequential group. Conclusion Findings highlight the benefits of a sequential structure of practice for the transfer of anticipatory behavior in tennis. We discuss the role of contextual information, and the importance of representative task design, for the testing and training of perceptual-cognitive skills in sport. PMID:28355263

  14. Development of a syringe pump assisted dynamic headspace sampling technique for needle trap device.

    PubMed

    Eom, In-Yong; Niri, Vadoud H; Pawliszyn, Janusz

    2008-07-04

    This paper describes a new approach that combines needle trap devices (NTDs) with a dynamic headspace sampling technique (purge and trap) using a bidirectional syringe pump. The needle trap device is a 22-G stainless steel needle, 3.5 in. long, packed with divinylbenzene sorbent particles. The same-sized needle, without packing, was used for purging purposes. We chose an aqueous mixture of benzene, toluene, ethylbenzene, and p-xylene (BTEX) and developed a sequential purge and trap (SPNT) method, in which sampling (trapping) and purging cycles were performed sequentially by the use of a syringe pump with different distribution channels. In this technique, a certain volume (1 mL) of headspace was sequentially sampled using the needle trap; afterwards, the same volume of air was purged into the solution at a high flow rate. The proposed technique showed more effective extraction than the continuous purge and trap technique, with a minimal dilution effect. Method evaluation was also performed by obtaining calibration graphs for aqueous BTEX solutions in the concentration range of 1-250 ng/mL. The developed technique was compared to the headspace solid-phase microextraction method for the analysis of aqueous BTEX samples. Detection limits as low as 1 ng/mL were obtained for BTEX by NTD-SPNT.

  15. Sequential-Injection Analysis: Principles, Instrument Construction, and Demonstration by a Simple Experiment

    ERIC Educational Resources Information Center

    Economou, A.; Tzanavaras, P. D.; Themelis, D. G.

    2005-01-01

    Sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise and efficient manner. Experiments using SIA fit well in the course of Instrumental Chemical Analysis, especially in the section on Automatic Methods of Analysis provided by chemistry…

  16. Factor Analysis of the Kaufman Assessment Battery for Children (K-ABC) for Ages 2 1/2 through 12 1/2 Years.

    ERIC Educational Resources Information Center

    Kaufman, Alan S.; Kamphaus, Randy W.

    1984-01-01

    The construct validity of the Sequential Processing, Simultaneous Processing and Achievement scales of the Kaufman Assessment Battery for Children was supported by factor-analytic investigations of a representative national stratified sample of 2,000 children. Correlations provided insight into the relationship of sequential/simultaneous…

  17. GOST: A generic ordinal sequential trial design for a treatment trial in an emerging pandemic.

    PubMed

    Whitehead, John; Horby, Peter

    2017-03-01

    Conducting clinical trials to assess experimental treatments for potentially pandemic infectious diseases is challenging. Since many outbreaks of infectious diseases last only six to eight weeks, there is a need for trial designs that can be implemented rapidly in the face of uncertainty. Outbreaks are sudden and unpredictable and so it is essential that as much planning as possible takes place in advance. Statistical aspects of such trial designs should be evaluated and discussed in readiness for implementation. This paper proposes a generic ordinal sequential trial design (GOST) for a randomised clinical trial comparing an experimental treatment for an emerging infectious disease with standard care. The design is intended as an off-the-shelf, ready-to-use robust and flexible option. The primary endpoint is a categorisation of patient outcome according to an ordinal scale. A sequential approach is adopted, stopping as soon as it is clear that the experimental treatment has an advantage or that sufficient advantage is unlikely to be detected. The properties of the design are evaluated using large-sample theory and verified for moderate sized samples using simulation. The trial is powered to detect a generic clinically relevant difference: namely an odds ratio of 2 for better rather than worse outcomes. Total sample sizes (across both treatments) of between 150 and 300 patients prove to be adequate in many cases, but the precise value depends on both the magnitude of the treatment advantage and the nature of the ordinal scale. An advantage of the approach is that any erroneous assumptions made at the design stage about the proportion of patients falling into each outcome category have little effect on the error probabilities of the study, although they can lead to inaccurate forecasts of sample size. It is important and feasible to pre-determine many of the statistical aspects of an efficient trial design in advance of a disease outbreak. The design can then be tailored to the specific disease under study once its nature is better understood.
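    The generic effect size used in GOST, an odds ratio of 2 for better rather than worse outcomes, can be made concrete under a proportional-odds model. The sketch below shifts a hypothetical control-arm outcome distribution by a common odds ratio applied at every cumulative split; the four-category distribution is illustrative, not taken from the paper.

```python
def shift_by_odds_ratio(probs, odds_ratio):
    """Shift ordinal category probabilities (ordered best -> worst) by a
    common odds ratio applied to every cumulative 'at or better' split,
    as in a proportional-odds model."""
    cum, shifted = 0.0, []
    for p in probs[:-1]:
        cum += p
        odds = odds_ratio * cum / (1 - cum)
        shifted.append(odds / (1 + odds))
    # convert shifted cumulative probabilities back to category probabilities
    out, prev = [], 0.0
    for c in shifted:
        out.append(c - prev)
        prev = c
    out.append(1 - prev)
    return out

control = [0.20, 0.30, 0.30, 0.20]   # hypothetical control-arm distribution
treatment = shift_by_odds_ratio(control, 2.0)
print([round(p, 3) for p in treatment])
```

    Because the same odds ratio applies at every split, misjudging the control-arm category proportions changes the implied treatment-arm distribution but not the target odds ratio itself, which is the robustness property the abstract highlights.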

  18. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme, for assessing the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error that is associated with LQAS analysis. PMID:20011037
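    The kind of simulation study described can be sketched in a few lines. The decision rule, the Beta model for between-cluster variation, and every number below are illustrative assumptions, not the authors' actual design parameters.

```python
import random

random.seed(7)

def simulate_lqas(true_prev, clusters, per_cluster, spread, rule, runs=2000):
    """Fraction of simulated surveys classified 'high prevalence' under a
    simple threshold rule. Cluster-level prevalences are drawn from a Beta
    distribution centred on true_prev; smaller 'spread' means clusters vary
    more (stronger intracluster correlation)."""
    high = 0
    a, b = true_prev * spread, (1 - true_prev) * spread
    for _ in range(runs):
        positives = 0
        for _ in range(clusters):
            p = random.betavariate(a, b)  # this cluster's GAM prevalence
            positives += sum(random.random() < p for _ in range(per_cluster))
        if positives > rule:
            high += 1
    return high / runs

# 67 clusters of 3 children (n = 201); illustrative rule: >20 cases -> 'high'
print(simulate_lqas(0.05, 67, 3, spread=40, rule=20))  # well below threshold
print(simulate_lqas(0.15, 67, 3, spread=40, rule=20))  # well above threshold
```

    Re-running the simulation across a grid of true prevalences traces out the design's operating characteristic curve, from which the misclassification errors reported in the paper are read off.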

  19. [Using the sequential indicator simulation method to define risk areas of soil heavy metals in farmland].

    PubMed

    Yang, Hao; Song, Ying Qiang; Hu, Yue Ming; Chen, Fei Xiang; Zhang, Rui

    2018-05-01

    Heavy metals in soil have serious impacts on ecological safety and human health due to their toxicity and accumulation. It is therefore necessary to efficiently identify risk areas of heavy metals in farmland soil, which is of great significance for environmental protection, pollution warning and farmland risk control. We collected 204 samples and analyzed the contents of seven heavy metals (Cu, Zn, Pb, Cd, Cr, As, Hg) in Zengcheng District of Guangzhou, China. In order to overcome problems with the data, including outliers, skewed distributions and the smoothing effect of traditional kriging methods, we used the sequential indicator simulation method (SISIM) to map the spatial distribution of heavy metals, combined with the Hakanson index method to identify potential ecological risk areas of heavy metals in farmland. The results showed that: (1) With similar accuracy of spatial prediction of soil heavy metals, the SISIM reproduced local detail better than ordinary kriging in the small-scale area. Compared to indicator kriging, the SISIM had a lower error rate (4.9%-17.1%) in the uncertainty evaluation of heavy-metal risk identification. The SISIM had less smoothing effect and was more suitable for simulating the spatial uncertainty of soil heavy metals and identifying risk. (2) There was no pollution in Zengcheng's farmland. Moderate potential ecological risk was found in the southern part of the study area due to enterprise production, human activities, and river sediments. This study combined sequential indicator simulation with the Hakanson risk index method, and effectively overcame the information loss from outliers and the smoothing effect of traditional kriging methods. It provides a new way to identify soil heavy metal risk areas of farmland under uneven sampling.

  20. The Sequential Probability Ratio Test: An efficient alternative to exact binomial testing for Clean Water Act 303(d) evaluation.

    PubMed

    Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry

    2017-05-01

    The United States' Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies, for which total maximum daily loads (TMDLs) of pollution inputs are then developed. Decision-making procedures about how to list, or delist, water bodies as impaired per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to achieve Type I and Type II error rates comparable to those of the current fixed-sample binomial test. Policymakers might consider efficient alternatives such as the SPRT to the current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
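    A minimal sketch of Wald's SPRT for binomial data, the test contrasted here with the exact binomial approach. The hypothesized exceedance proportions p0 and p1 and the error rates below are illustrative, not California's actual listing thresholds.

```python
from math import log

def sprt_binomial(outcomes, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for a binomial proportion, testing H0: p = p0 vs H1: p = p1.
    Returns ('reject H0' | 'accept H0' | 'continue', samples used)."""
    lower = log(beta / (1 - alpha))
    upper = log((1 - beta) / alpha)
    llr = 0.0  # running log-likelihood ratio
    for i, x in enumerate(outcomes, start=1):  # x = 1 if the sample exceeds the standard
        llr += log(p1 / p0) if x else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "reject H0", i
        if llr <= lower:
            return "accept H0", i
    return "continue", len(outcomes)

# e.g. hypothesized exceedance proportions: 0.10 (compliant) vs 0.25 (impaired)
print(sprt_binomial([1, 1, 0, 1, 1, 1, 0, 1, 1, 1], 0.10, 0.25))
```

    The boundaries depend only on the target error rates, which is why the SPRT can stop early whenever the evidence is one-sided, the source of the average-sample-size savings the paper reports.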

  1. Sequential injection spectrophotometric determination of oxybenzone in lipsticks.

    PubMed

    Salvador, A; Chisvert, A; Camarasa, A; Pascual-Martí, M C; March, J G

    2001-08-01

    A sequential injection (SI) procedure for the spectrophotometric determination of oxybenzone in lipsticks is reported. The colorimetric reaction between nickel and oxybenzone was used. SI parameters such as sample solution volume, reagent solution volume, propulsion flow rate and reaction coil length were studied. The limit of detection was 3 microg ml(-1). The sensitivity was 0.0108+/-0.0002 ml microg(-1). The relative standard deviations of the results were between 6 and 12%. Concentrations found in real samples were comparable to the values obtained by HPLC. Microwave sample pre-treatment allowed the extraction of oxybenzone with ethanol, thus avoiding the use of toxic organic solvents. Ethanol was also used as the carrier in the SI system. Seventy-two injections per hour can be performed, which corresponds to a sample frequency of 24 h(-1) if three replicates are measured for each sample.

  2. Multi-volatile method for aroma analysis using sequential dynamic headspace sampling with an application to brewed coffee.

    PubMed

    Ochiai, Nobuo; Tsunokawa, Jun; Sasamoto, Kikuo; Hoffmann, Andreas

    2014-12-05

    A novel multi-volatile method (MVM) using sequential dynamic headspace (DHS) sampling for analysis of aroma compounds in aqueous sample was developed. The MVM consists of three different DHS method parameters sets including choice of the replaceable adsorbent trap. The first DHS sampling at 25 °C using a carbon-based adsorbent trap targets very volatile solutes with high vapor pressure (>20 kPa). The second DHS sampling at 25 °C using the same type of carbon-based adsorbent trap targets volatile solutes with moderate vapor pressure (1-20 kPa). The third DHS sampling using a Tenax TA trap at 80 °C targets solutes with low vapor pressure (<1 kPa) and/or hydrophilic characteristics. After the 3 sequential DHS samplings using the same HS vial, the three traps are sequentially desorbed with thermal desorption in reverse order of the DHS sampling and the desorbed compounds are trapped and concentrated in a programmed temperature vaporizing (PTV) inlet and subsequently analyzed in a single GC-MS run. Recoveries of the 21 test aroma compounds for each DHS sampling and the combined MVM procedure were evaluated as a function of vapor pressure in the range of 0.000088-120 kPa. The MVM provided very good recoveries in the range of 91-111%. The method showed good linearity (r2>0.9910) and high sensitivity (limit of detection: 1.0-7.5 ng mL(-1)) even with MS scan mode. The feasibility and benefit of the method was demonstrated with analysis of a wide variety of aroma compounds in brewed coffee. Ten potent aroma compounds from top-note to base-note (acetaldehyde, 2,3-butanedione, 4-ethyl guaiacol, furaneol, guaiacol, 3-methyl butanal, 2,3-pentanedione, 2,3,5-trimethyl pyrazine, vanillin, and 4-vinyl guaiacol) could be identified together with an additional 72 aroma compounds. Thirty compounds including 9 potent aroma compounds were quantified in the range of 74-4300 ng mL(-1) (RSD<10%, n=5). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  3. Sequential growth factor application in bone marrow stromal cell ligament engineering.

    PubMed

    Moreau, Jodie E; Chen, Jingsong; Horan, Rebecca L; Kaplan, David L; Altman, Gregory H

    2005-01-01

    In vitro bone marrow stromal cell (BMSC) growth may be enhanced through culture medium supplementation, mimicking the biochemical environment in which cells optimally proliferate and differentiate. We hypothesize that the sequential administration of growth factors to first proliferate and then differentiate BMSCs cultured on silk fiber matrices will support the enhanced development of ligament tissue in vitro. Confluent second passage (P2) BMSCs obtained from purified bone marrow aspirates were seeded on RGD-modified silk matrices. Seeded matrices were divided into three groups for 5 days of static culture, with medium supplemented with basic fibroblast growth factor (B; 1 ng/mL), epidermal growth factor (E; 1 ng/mL), or growth factor-free control (C). After day 5, medium supplementation was changed to transforming growth factor-beta1 (T; 5 ng/mL) or C for an additional 9 days of culture. Real-time RT-PCR, SEM, MTT, histology, and ELISA for collagen type I were performed on all sample groups. Results indicated that BT supported the greatest cell ingrowth after 14 days of culture in addition to the greatest cumulative collagen type I expression measured by ELISA. Sequential growth factor application promoted significant increases in collagen type I transcript expression from day 5 of culture to day 14, for five of six groups tested. All T-supplemented samples surpassed their respective control samples in both cell ingrowth and collagen deposition. All samples supported spindle-shaped, fibroblast cell morphology, aligning with the direction of silk fibers. These findings indicate significant in vitro ligament development after only 14 days of culture when using a sequential growth factor approach.

  4. Sensitivity comparison of sequential monadic and side-by-side presentation protocols in affective consumer testing.

    PubMed

    Colyar, Jessica M; Eggett, Dennis L; Steele, Frost M; Dunn, Michael L; Ogden, Lynn V

    2009-09-01

    The relative sensitivity of side-by-side and sequential monadic consumer liking protocols was compared. In the side-by-side evaluation, all samples were presented at once and evaluated together 1 characteristic at a time. In the sequential monadic evaluation, 1 sample was presented and evaluated on all characteristics, then returned before panelists received and evaluated another sample. Evaluations were conducted on orange juice, frankfurters, canned chili, potato chips, and applesauce. Five commercial brands, having a broad quality range, were selected as samples for each product category to assure a wide array of consumer liking scores. Without their knowledge, panelists rated the same 5 retail brands by 1 protocol and then 3 wk later by the other protocol. For 3 of the products, both protocols yielded the same order of overall liking. Slight differences in order of overall liking for the other 2 products were not significant. Of the 50 pairwise overall liking comparisons, 44 were in agreement. The different results obtained by the 2 protocols in order of liking and significance of paired comparisons were due to the experimental variation and differences in sensitivity. Hedonic liking scores were subjected to statistical power analyses and used to calculate minimum number of panelists required to achieve varying degrees of sensitivity when using side-by-side and sequential monadic protocols. In most cases, the side-by-side protocol was more sensitive, thus providing the same information with fewer panelists. Side-by-side protocol was less sensitive in cases where sensory fatigue was a factor.

  5. Designing group sequential randomized clinical trials with time to event end points using a R function.

    PubMed

    Filleron, Thomas; Gal, Jocelyn; Kramar, Andrew

    2012-10-01

    A major and difficult task is the design of clinical trials with a time-to-event endpoint. It is necessary to compute the number of events and, in a second step, the required number of patients. Several commercial software packages are available for computing sample size in clinical trials with sequential designs and time-to-event endpoints, but few R functions have been implemented. The purpose of this paper is to describe the features and use of the R function plansurvct.func, an add-on function to the package gsDesign which permits, in one run of the program, calculation of the number of events and required sample size, as well as the boundaries and corresponding p-values for a group sequential design. The use of the function plansurvct.func is illustrated by several examples and validated using East software. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
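    The two-step calculation the abstract describes (events first, then patients) is commonly done with Schoenfeld's approximation for the logrank test. A sketch in Python under that assumption; plansurvct.func itself may use a different formula, and the overall event probability below is hypothetical.

```python
from math import ceil, log
from statistics import NormalDist

def schoenfeld_events(hazard_ratio, alpha=0.05, power=0.80, alloc=0.5):
    """Approximate number of events for a two-arm logrank test
    (Schoenfeld's formula); alloc is the proportion randomised to one arm."""
    z = NormalDist().inv_cdf
    numerator = (z(1 - alpha / 2) + z(power)) ** 2
    return ceil(numerator / (alloc * (1 - alloc) * log(hazard_ratio) ** 2))

def required_patients(events, event_prob):
    """Step two: translate events into patients, given the overall
    probability that a recruited patient experiences an event."""
    return ceil(events / event_prob)

d = schoenfeld_events(0.70)   # HR = 0.70, two-sided 5% alpha, 80% power
print(d, required_patients(d, 0.6))
```

    Separating the two steps mirrors the abstract's point: the number of events drives the test's power, while the patient count additionally depends on accrual and follow-up through the event probability.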

  6. Sequential solvent extraction for forms of antimony in five selected coals

    USGS Publications Warehouse

    Qi, C.; Liu, Gaisheng; Kong, Y.; Chou, C.-L.; Wang, R.

    2008-01-01

    Abundance of antimony in bulk samples has been determined in five selected coals: three from the Huaibei Coalfield, Anhui, China, and two from the Illinois Basin in the United States. The Sb abundance in these samples is in the range of 0.11-0.43 μg/g. The forms of Sb in the coals were studied by sequential solvent extraction. The six forms of Sb are water-soluble, ion-exchangeable, organic matter-bound, carbonate-bound, silicate-bound, and sulfide-bound. Results of sequential extraction show that silicate-bound Sb is the most abundant form in these coals. Silicate- plus sulfide-bound Sb accounts for more than half of the total Sb in all coals. Bituminous coals are higher in organic matter-bound Sb than anthracite and natural coke, indicating that Sb in the organic matter may be incorporated into silicate and sulfide minerals during metamorphism. © 2008 by The University of Chicago. All rights reserved.

  7. Modified sequential extraction for biochar and petroleum coke: Metal release potential and its environmental implications.

    PubMed

    von Gunten, Konstantin; Alam, Md Samrat; Hubmann, Magdalena; Ok, Yong Sik; Konhauser, Kurt O; Alessi, Daniel S

    2017-07-01

    A modified Community Bureau of Reference (CBR) sequential extraction method was tested to assess the composition of untreated pyrogenic carbon (biochar) and oil sands petroleum coke. Wood biochar samples were found to contain lower concentrations of metals, but had higher fractions of easily mobilized alkaline earth and transition metals. Sewage sludge biochar was determined to be less recalcitrant and had higher total metal concentrations, with most of the metals found in the more resilient extraction fractions (oxidizable, residual). Petroleum coke was the most stable material, with a similar metal distribution pattern as the sewage sludge biochar. The applied sequential extraction method represents a suitable technique to recover metals from these materials, and is a valuable tool in understanding the metal retaining and leaching capability of various biochar types and carbonaceous petroleum coke samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Lexical diversity and omission errors as predictors of language ability in the narratives of sequential Spanish-English bilinguals: a cross-language comparison.

    PubMed

    Jacobson, Peggy F; Walden, Patrick R

    2013-08-01

    This study explored the utility of language sample analysis for evaluating language ability in school-age Spanish-English sequential bilingual children. Specifically, the relative potential of lexical diversity and word/morpheme omission as predictors of typical or atypical language status was evaluated. Narrative samples were obtained from 48 bilingual children in both of their languages using the suggested narrative retell protocol and coding conventions as per Systematic Analysis of Language Transcripts (SALT; Miller & Iglesias, 2008) software. An additional lexical diversity measure, VocD, was also calculated. A series of logistic hierarchical regressions explored the utility of the number of different words, the VocD statistic, and word and morpheme omissions in each language for predicting language status. Omission errors turned out to be the best predictors of bilingual language impairment at all ages, and this held true across languages. Although lexical diversity measures did not predict typical or atypical language status, the measures were significantly related to oral language proficiency in English and Spanish. The results underscore the significance of omission errors in bilingual language impairment while simultaneously revealing the limitations of lexical diversity measures as indicators of impairment. The relationship between lexical diversity and oral language proficiency highlights the importance of considering relative language proficiency in bilingual assessment.

  9. Sequential injection titration method using second-order signals: determination of acidity in plant oils and biodiesel samples.

    PubMed

    del Río, Vanessa; Larrechi, M Soledad; Callao, M Pilar

    2010-06-15

    A new concept of flow titration is proposed and demonstrated for the determination of total acidity in plant oils and biodiesel. We use sequential injection analysis (SIA) with a diode array spectrophotometric detector linked to chemometric tools such as multivariate curve resolution-alternating least squares (MCR-ALS). This system is based on the evolution of the basic species of an acid-base indicator, alizarine, when it comes into contact with a sample that contains free fatty acids. The gradual pH change in the reactor coil due to diffusion and reaction phenomena allows the sequential appearance of both species of the indicator in the detector coil, recording a data matrix for each sample. The SIA-MCR-ALS method helps to reduce the amounts of sample and reagents and the time consumed. Each determination consumes 0.413 ml of sample, 0.250 ml of indicator and 3 ml of carrier (ethanol) and generates 3.333 ml of waste. The frequency of the analysis is high (12 samples h(-1) including all steps, i.e., cleaning, preparing and analysing). The reagents used are common laboratory reagents, and it is not necessary to use reagents of precisely known concentration. The method was applied to determine acidity in plant oil and biodiesel samples. Results obtained by the proposed method compare well with those obtained by the official European Community method, which is time-consuming and uses large amounts of organic solvents.

  10. Spatial versus sequential correlations for random access coding

    NASA Astrophysics Data System (ADS)

    Tavakoli, Armin; Marques, Breno; Pawłowski, Marcin; Bourennane, Mohamed

    2016-03-01

    Random access codes are important for a wide range of applications in quantum information. However, their implementation with quantum theory can be made in two very different ways: (i) by distributing data with strong spatial correlations violating a Bell inequality or (ii) using quantum communication channels to create stronger-than-classical sequential correlations between state preparation and measurement outcome. Here we study this duality of the quantum realization. We present a family of Bell inequalities tailored to the task at hand and study their quantum violations. Remarkably, we show that the use of spatial and sequential quantum correlations imposes different limitations on the performance of quantum random access codes: Sequential correlations can outperform spatial correlations. We discuss the physics behind the observed discrepancy between spatial and sequential quantum correlations.

  11. Correlated sequential tunneling through a double barrier for interacting one-dimensional electrons

    NASA Astrophysics Data System (ADS)

    Thorwart, M.; Egger, R.; Grifoni, M.

    2005-07-01

    The problem of resonant tunneling through a quantum dot weakly coupled to spinless Tomonaga-Luttinger liquids has been studied. We compute the linear conductance due to sequential tunneling processes upon employing a master equation approach. Besides the previously used lowest-order golden rule rates describing uncorrelated sequential tunneling processes, we systematically include higher-order correlated sequential tunneling (CST) diagrams within the standard Weisskopf-Wigner approximation. We provide estimates for the parameter regions where CST effects can be important. Focusing mainly on the temperature dependence of the peak conductance, we discuss the relation of these findings to previous theoretical and experimental results.

  12. Parasitological analyses of the male chimpanzees (Pan troglodytes schweinfurthii) at Ngogo, Kibale National Park, Uganda.

    PubMed

    Muehlenbein, Michael P

    2005-02-01

    Numerous intestinal parasites identified in populations of wild nonhuman primates can be pathogenic to humans. Furthermore, nonhuman primates are susceptible to a variety of human pathogens. Because of increasing human encroachment into previously nonimpacted forests, and the potential for disease transmission between human and nonhuman primate populations, further detailed investigations of primate ecological parasitology are warranted. For meaningful comparisons to be made, it is important for methods to be standardized across study sites. One aspect of methodological standardization is providing reliable estimates of parasite prevalence and knowing how many samples are needed to adequately estimate an individual's parasite prevalence. In this study the parasitic fauna of 37 adult, adolescent, and juvenile male chimpanzees from the Ngogo group, Kibale National Park, Uganda, was assessed from 121 fecal samples collected over a 3-month period. Twelve taxa of intestinal species (five helminth and seven protozoan) were recovered from the samples. The four most prevalent species were Troglodytella abrassarti (97.3%), Oesophagostomum sp. (81.1%), Strongyloides sp. (83.8%), and Entamoeba chattoni (70.3%). No single species was found in all samples from any one animal, and Troglodytella abrassarti, the most common intestinal organism, was found in all of the serial samples of only 69.4% of the chimpanzees. The cumulative species richness for individuals significantly increased for every sequential sample (up to three to four samples) taken per animal during this study. The results indicate that to accurately diagnose total intestinal infection and evaluate group prevalence, three to four sequential samples from each individual must be collected on nonconsecutive days. This conclusion applies only to short study periods in which possible seasonal effects are not taken into consideration.
Validation of these results at different study sites in different regions with different climatic patterns is needed. Copyright 2005 Wiley-Liss, Inc.
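
    The per-individual accumulation described above amounts to counting newly observed taxa across sequential samples. A minimal sketch (the function name and input format are illustrative, not from the study):

```python
def cumulative_richness(samples):
    """Cumulative species richness after each sequential sample.

    `samples` is a list of sets of taxa detected per fecal sample,
    in collection order; returns the accumulation curve."""
    seen, curve = set(), []
    for taxa in samples:
        seen |= taxa  # union in any newly detected taxa
        curve.append(len(seen))
    return curve
```

    A curve that keeps rising over the first three to four entries, as reported above, indicates that single samples underestimate an individual's total infection.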

  13. Dynamic fractionation of trace metals in soil and sediment samples using rotating coiled column extraction and sequential injection microcolumn extraction: a comparative study.

    PubMed

    Rosende, Maria; Savonina, Elena Yu; Fedotov, Petr S; Miró, Manuel; Cerdà, Víctor; Wennrich, Rainer

    2009-09-15

    Dynamic fractionation has been recognized as an appealing alternative to conventional equilibrium-based sequential extraction procedures (SEPs) for partitioning of trace elements (TE) in environmental solid samples. This paper reports the first attempt at harmonizing flow-through dynamic fractionation using two novel methods, the so-called sequential injection microcolumn (SIMC) extraction and rotating coiled column (RCC) extraction. In SIMC extraction, a column packed with the solid sample is clustered in a sequential injection system, while in RCC, the particulate matter is retained under the action of centrifugal forces. In both methods, the leachants are continuously pumped through the solid substrates by the use of either peristaltic or syringe pumps. A five-step SEP was selected for partitioning of Cu, Pb and Zn in water soluble/exchangeable, acid-soluble, easily reducible, easily oxidizable and moderately reducible fractions from 0.2 to 0.5 g samples at an extractant flow rate of 1.0 mL min(-1) prior to leachate analysis by inductively coupled plasma-atomic emission spectrometry. Similarities and discrepancies between both dynamic approaches were ascertained by fractionation of TE in certified reference materials, namely, SRM 2711 Montana Soil and GBW 07311 sediment, and two real soil samples as well. Notwithstanding the different extraction conditions set by both methods, similar trends of metal distribution were generally found. The most critical parameters for reliable assessment of mobilizable pools of TE in worst-case scenarios are the size-distribution of sample particles, the density of particles, the content of organic matter and the concentration of major elements. For reference materials and a soil rich in organic matter, the extraction in RCC results in slightly higher recoveries of environmentally relevant fractions of TE, whereas SIMC leaching is more effective for calcareous soils.

  14. A Prospective Sequential Analysis of the Relation between Physical Aggression and Peer Rejection Acts in a High-Risk Preschool Sample

    ERIC Educational Resources Information Center

    Chen, Chin-Chih; McComas, Jennifer J.; Hartman, Ellie; Symons, Frank J.

    2011-01-01

    Research Findings: In early childhood education, the social ecology of the child is considered critical for healthy behavioral development. There is, however, relatively little information based on directly observing what children do that describes the moment-by-moment (i.e., sequential) relation between physical aggression and peer rejection acts…

  15. Sequential roles of primary somatosensory cortex and posterior parietal cortex in tactile-visual cross-modal working memory: a single-pulse transcranial magnetic stimulation (spTMS) study.

    PubMed

    Ku, Yixuan; Zhao, Di; Hao, Ning; Hu, Yi; Bodner, Mark; Zhou, Yong-Di

    2015-01-01

    Both monkey neurophysiological and human EEG studies have shown that association cortices, as well as primary sensory cortical areas, play an essential role in sequential neural processes underlying cross-modal working memory. The present study aims to further examine causal and sequential roles of the primary sensory cortex and association cortex in cross-modal working memory. Individual MRI-based single-pulse transcranial magnetic stimulation (spTMS) was applied to bilateral primary somatosensory cortices (SI) and the contralateral posterior parietal cortex (PPC), while participants were performing a tactile-visual cross-modal delayed matching-to-sample task. Time points of spTMS were 300 ms, 600 ms, 900 ms after the onset of the tactile sample stimulus in the task. The accuracy of task performance and reaction time were significantly impaired when spTMS was applied to the contralateral SI at 300 ms. Significant impairment on performance accuracy was also observed when the contralateral PPC was stimulated at 600 ms. SI and PPC play sequential and distinct roles in neural processes of cross-modal associations and working memory. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Electronic and Vibrational Spectra of InP Quantum Dots Formed by Sequential Ion Implantation

    NASA Technical Reports Server (NTRS)

    Hall, C.; Mu, R.; Tung, Y. S.; Ueda, A.; Henderson, D. O.; White, C. W.

    1997-01-01

    We have performed sequential ion implantation of indium and phosphorus into silica combined with controlled thermal annealing to fabricate InP quantum dots in a dielectric host. Electronic and vibrational spectra were measured for the as-implanted and annealed samples. The annealed samples show a peak in the infrared spectra near 320 cm(-1) which is attributed to a surface phonon mode and is in good agreement with the value calculated from Fröhlich's theory of surface phonon polaritons. The electronic spectra show the development of a band near 390 nm that is attributed to quantum confined InP.

  17. Age-Dependent Human Hepatic Carboxylesterase 1 (Ces1) ...

    EPA Pesticide Factsheets

    Human hepatic carboxylesterase 1 and 2 (CES1 and CES2) are important for the disposition of ester- and amide-bond-containing pharmaceuticals and environmental chemicals. Despite concern regarding juvenile sensitivity to such compounds, CES1 and CES2 ontogeny has not been well characterized. To define human hepatic microsomal and cytosolic CES1 and CES2 expression during early postnatal life, microsomal and cytosolic fractions were prepared using liver samples from subjects without liver disease [N=165, 1d-18 yrs]. Proteins were fractionated, detected and quantitated by western blotting. Median microsomal CES1 was lower among samples from subjects < 3 weeks of age (N=36) compared to the rest of the population (N=126; 6.27 vs 17.5 pmoles/mg microsomal protein, respectively; p<0.001; Kruskal-Wallis test). Cytosolic CES1 increased sequentially with expression being lowest among samples from individuals between birth and 3 weeks of age (N=36), markedly greater among those from ages 3 weeks to 6 years (N=90), and then modestly greater still among those over 6 years of age (N=36; median values = 4.7, 15.8, and 16.6 pmoles/mg cytosolic protein, respectively; p values <0.001 and 0.05, respectively, Kruskal-Wallis test). Microsomal CES2 also increased sequentially across the same three age groups with median values of 1.8, 2.9, and 4.2 pmoles/mg microsomal protein, respectively (p<0.001, both), whereas for cytosolic CES2, only the youngest age group differed from the two older groups.

  18. Multilevel Sequential² Monte Carlo for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Latz, Jonas; Papaioannou, Iason; Ullmann, Elisabeth

    2018-09-01

    The identification of parameters in mathematical models using noisy observations is a common task in uncertainty quantification. We employ the framework of Bayesian inversion: we combine monitoring and observational data with prior information to estimate the posterior distribution of a parameter. Specifically, we are interested in the distribution of a diffusion coefficient of an elliptic PDE. In this setting, the sample space is high-dimensional, and each sample of the PDE solution is expensive. To address these issues we propose and analyse a novel Sequential Monte Carlo (SMC) sampler for the approximation of the posterior distribution. Classical, single-level SMC constructs a sequence of measures, starting with the prior distribution, and finishing with the posterior distribution. The intermediate measures arise from a tempering of the likelihood, or, equivalently, a rescaling of the noise. The resolution of the PDE discretisation is fixed. In contrast, our estimator employs a hierarchy of PDE discretisations to decrease the computational cost. We construct a sequence of intermediate measures by decreasing the temperature or by increasing the discretisation level at the same time. This idea builds on and generalises the multi-resolution sampler proposed in P.S. Koutsourelakis (2009) [33] where a bridging scheme is used to transfer samples from coarse to fine discretisation levels. Importantly, our choice between tempering and bridging is fully adaptive. We present numerical experiments in 2D space, comparing our estimator to single-level SMC and the multi-resolution sampler.
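
    The single-level tempering that the estimator above generalises can be illustrated with a minimal sketch: a particle population is moved from the prior to the posterior through likelihood powers, with importance reweighting and multinomial resampling at each temperature step. This is a bare-bones illustration only (no MCMC rejuvenation moves, so particle diversity degrades; `tempered_smc` and its arguments are hypothetical names, not from the paper):

```python
import math
import random

def tempered_smc(log_likelihood, prior_sample, n_particles=500,
                 temperatures=(0.0, 0.25, 0.5, 0.75, 1.0), seed=0):
    """Single-level tempered SMC sketch: move particles from the prior
    (temperature 0) to the posterior (temperature 1) through intermediate
    measures obtained by raising the likelihood to increasing powers."""
    rng = random.Random(seed)
    particles = [prior_sample(rng) for _ in range(n_particles)]
    for t_prev, t_next in zip(temperatures, temperatures[1:]):
        # Incremental importance weight: likelihood^(t_next - t_prev)
        log_w = [(t_next - t_prev) * log_likelihood(p) for p in particles]
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]  # stabilised weights
        total = sum(w)
        # Multinomial resampling according to the normalised weights
        particles = rng.choices(particles, weights=[x / total for x in w],
                                k=n_particles)
    return particles
```

    For a standard Gaussian prior and a single observation y = 2 with unit observation noise, the analytic posterior is N(1, 1/2), so the final particle mean should approach 1.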

  19. Spatial-dependence recurrence sample entropy

    NASA Astrophysics Data System (ADS)

    Pham, Tuan D.; Yan, Hong

    2018-03-01

    Measuring complexity in terms of the predictability of time series is a major area of research in science and engineering, and its applications are spreading throughout many scientific disciplines, where the analysis of physiological signals is perhaps the most widely reported in the literature. Sample entropy is a popular measure for quantifying signal irregularity. However, the sample entropy does not take sequential information, which is inherently useful, into its calculation of sample similarity. Here, we develop a method that is based on the mathematical principle of the sample entropy and enables the capture of sequential information of a time series in the context of spatial dependence provided by the binary-level co-occurrence matrix of a recurrence plot. Experimental results on time-series data of the Lorenz system, physiological signals of gait maturation in healthy children, and gait dynamics in Huntington's disease show the potential of the proposed method.

  20. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    PubMed

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction: the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    PubMed

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes in performance, sales, markets, risks, social relations, or public opinion constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are, however, confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.

  2. Two-step sequential pretreatment for the enhanced enzymatic hydrolysis of coffee spent waste.

    PubMed

    Ravindran, Rajeev; Jaiswal, Swarna; Abu-Ghannam, Nissreen; Jaiswal, Amit K

    2017-09-01

    In the present study, eight different pretreatments of varying nature (physical, chemical and physico-chemical), followed by a sequential, combinatorial pretreatment strategy, were applied to spent coffee waste to attain maximum sugar yield. Pretreated samples were analysed for total reducing sugar, individual sugars and generation of inhibitory compounds such as furfural and hydroxymethyl furfural (HMF) which can hinder microbial growth and enzyme activity. Native spent coffee waste was high in hemicellulose content. Galactose was found to be the predominant sugar in spent coffee waste. Results showed that sequential pretreatment yielded 350.12 mg of reducing sugar/g of substrate, which was 1.7-fold higher than that of native spent coffee waste (203.4 mg/g of substrate). Furthermore, extensive delignification was achieved using the sequential pretreatment strategy. XRD, FTIR, and DSC profiles of the pretreated substrates were studied to analyse the various changes incurred in sequentially pretreated spent coffee waste as opposed to native spent coffee waste. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. EVALUATION OF VAPOR EQUILIBRATION AND IMPACT OF PURGE VOLUME ON SOIL-GAS SAMPLING RESULTS

    EPA Science Inventory

    Sequential sampling was utilized at the Raymark Superfund site to evaluate attainment of vapor equilibration and the impact of purge volume on soil-gas sample results. A simple mass-balance equation indicates that removal of three to five internal volumes of a sample system shou...

  4. Sequential injection-bead injection-lab-on-valve coupled to high-performance liquid chromatography for online renewable micro-solid-phase extraction of carbamate residues in food and environmental samples.

    PubMed

    Vichapong, Jitlada; Burakham, Rodjana; Srijaranai, Supalax; Grudpan, Kate

    2011-07-01

    A sequential injection-bead injection-lab-on-valve system was hyphenated to HPLC for online renewable micro-solid-phase extraction of carbamate insecticides. The carbamates studied were isoprocarb, methomyl, carbaryl, carbofuran, methiocarb, promecarb, and propoxur. LiChroprep(®) RP-18 beads (25-40 μm) were employed as renewable sorbent packing in a microcolumn situated inside the LOV platform mounted above the multiposition valve of the sequential injection system. The analytes sorbed by the microcolumn were eluted using 80% acetonitrile in 0.1% acetic acid before online introduction to the HPLC system. Separation was performed on an Atlantis C-18 column (4.6 × 150 mm, 5 μm) utilizing gradient elution with a flow rate of 1.0 mL/min and a detection wavelength at 270 nm. The sequential injection system offers the means of performing automated handling of sample preconcentration and matrix removal. The enrichment factors ranged between 20 and 125, leading to limits of detection (LODs) in the range of 1-20 μg/L. Good reproducibility was obtained with relative standard deviations of <0.7 and 5.4% for retention time and peak area, respectively. The developed method has been successfully applied to the determination of carbamate residues in fruit, vegetable, and water samples. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Polymeric microchip for the simultaneous determination of anions and cations by hydrodynamic injection using a dual-channel sequential injection microchip electrophoresis system.

    PubMed

    Gaudry, Adam J; Nai, Yi Heng; Guijt, Rosanne M; Breadmore, Michael C

    2014-04-01

    A dual-channel sequential injection microchip capillary electrophoresis system with pressure-driven injection is demonstrated for simultaneous separations of anions and cations from a single sample. The poly(methyl methacrylate) (PMMA) microchips feature integral in-plane contactless conductivity detection electrodes. A novel, hydrodynamic "split-injection" method utilizes background electrolyte (BGE) sheathing to gate the sample flows, while control over the injection volume is achieved by balancing hydrodynamic resistances using external hydrodynamic resistors. Injection is realized by a unique flow-through interface, allowing for automated, continuous sampling for sequential injection analysis by microchip electrophoresis. The developed system was very robust, with individual microchips used for up to 2000 analyses with lifetimes limited by irreversible blockages of the microchannels. The unique dual-channel geometry was demonstrated by the simultaneous separation of three cations and three anions in individual microchannels in under 40 s with limits of detection (LODs) ranging from 1.5 to 24 μM. From a series of 100 sequential injections the %RSDs were determined for every fifth run, resulting in %RSDs for migration times that ranged from 0.3 to 0.7 (n = 20) and 2.3 to 4.5 for peak area (n = 20). This system offers low LODs and a high degree of reproducibility and robustness while the hydrodynamic injection eliminates electrokinetic bias during injection, making it attractive for a wide range of rapid, sensitive, and quantitative online analytical applications.

  6. Kupffer cell ablation attenuates cyclooxygenase-2 expression after trauma and sepsis.

    PubMed

    Keller, Steve A; Paxian, Marcus; Lee, Sun M; Clemens, Mark G; Huynh, Toan

    2005-03-01

    Prostaglandins, synthesized by cyclooxygenase (COX), play an important role in the pathophysiology of inflammation. Severe injuries result in immunosuppression, mediated, in part, by maladaptive changes in macrophages. Herein, we assessed the effect of Kupffer cell-mediated cyclooxygenase-2 (COX-2) expression on liver function and damage after trauma and sepsis. To ablate Kupffer cells, Sprague Dawley rats were treated with gadolinium chloride (GdCl3) 48 and 24 h before experimentation. Animals then underwent femur fracture (FFx) followed 48 h later by cecal ligation and puncture (CLP). Controls received sham operations. After 24 h, liver samples were obtained, and mRNA and protein expression were determined by PCR, Western blot, and immunohistochemistry. Indocyanine green (ICG) clearance and plasma alanine aminotransferase (ALT) levels were determined to assess liver function and damage, respectively. One-way analysis of variance (ANOVA) with Student-Newman-Keuls test was used to assess statistical significance. After CLP alone, FFx+CLP, and GdCl3+FFx+CLP, clearance of ICG decreased. Plasma ALT levels increased in parallel with severity of injury. Kupffer cell depletion attenuated the increased ALT levels after FFx+CLP. Femur fracture alone did not alter COX-2 protein compared with sham. By contrast, COX-2 protein increased after CLP and was potentiated by sequential stress. Again, Kupffer cell depletion abrogated the increase in COX-2 after sequential stress. Immunohistochemical data confirmed COX-2 positive cells to be Kupffer cells. In this study, sequential stress increased hepatic COX-2 protein. Depletion of Kupffer cells reduced COX-2 and attenuated hepatocellular injuries. Our data suggest that Kupffer cell-dependent pathways may contribute to the inflammatory response leading to increased mortality after sequential stress.

  7. A novel approach for small sample size family-based association studies: sequential tests.

    PubMed

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
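
    Wald's SPRT, on which the proposed test is based, reduces to comparing a log-likelihood ratio against two thresholds set by the target error rates, with a third "keep sampling" region in between. A minimal sketch for a binomial proportion (function and argument names are illustrative, not from the paper):

```python
import math

def sprt_decision(successes, n, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for a binomial proportion: H0: p = p0 vs H1: p = p1.

    Returns 'H0', 'H1', or 'continue' (keep sampling), mirroring the
    three-way SNP classification described above."""
    # Log-likelihood ratio of H1 vs H0 after n Bernoulli trials
    llr = (successes * math.log(p1 / p0)
           + (n - successes) * math.log((1 - p1) / (1 - p0)))
    upper = math.log((1 - beta) / alpha)  # cross upward: accept H1
    lower = math.log(beta / (1 - alpha))  # cross downward: accept H0
    if llr >= upper:
        return 'H1'
    if llr <= lower:
        return 'H0'
    return 'continue'
```

    The 'continue' outcome is precisely the third group the abstract describes: SNPs for which the evidence is not yet strong enough in either direction.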

  8. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    PubMed Central

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
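
    For a scalar parameter trace, the autocorrelation-based ESS the abstract refers to can be sketched as follows (a minimal estimator that truncates the autocorrelation sum at the first non-positive lag; the function names are illustrative, and this is the conventional scalar ESS, not the tree-topology ESS the paper develops):

```python
def autocorr(x, k):
    """Lag-k sample autocorrelation of a scalar sequence x."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + k] - mean) for i in range(n - k))
    return cov / var

def effective_sample_size(x):
    """ESS of correlated MCMC draws: N / (1 + 2 * sum of autocorrelations),
    truncating the sum at the first non-positive lag."""
    n = len(x)
    s = 0.0
    for k in range(1, n):
        rho = autocorr(x, k)
        if rho <= 0:
            break
        s += rho
    return n / (1 + 2 * s)
```

    A chain that dwells in one region for long stretches has large positive autocorrelations and hence an ESS far below the raw number of draws, which is why the "ESS > 200" rule of thumb is checked against the ESS rather than the chain length.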

  9. Automatic sequential fluid handling with multilayer microfluidic sample isolated pumping

    PubMed Central

    Liu, Jixiao; Fu, Hai; Yang, Tianhang; Li, Songjing

    2015-01-01

    To sequentially handle fluids is of great significance in quantitative biology, analytical chemistry, and bioassays. However, the technological options are limited when building such microfluidic sequential processing systems, and one of the encountered challenges is the need for reliable, efficient, and mass-production available microfluidic pumping methods. Herein, we present a bubble-free and pumping-control unified liquid handling method that is compatible with large-scale manufacture, termed multilayer microfluidic sample isolated pumping (mμSIP). The core part of the mμSIP is the selective permeable membrane that isolates the fluidic layer from the pneumatic layer. The air diffusion from the fluidic channel network into the degassing pneumatic channel network leads to fluidic channel pressure variation, which further results in consistent bubble-free liquid pumping into the channels and the dead-end chambers. We characterize the mμSIP by comparing the fluidic actuation processes with different parameters and a flow rate range of 0.013 μl/s to 0.097 μl/s is observed in the experiments. As the proof of concept, we demonstrate an automatic sequential fluid handling system aiming at digital assays and immunoassays, which further proves the unified pumping-control and suggests that the mμSIP is suitable for functional microfluidic assays with minimal operations. We believe that the mμSIP technology and demonstrated automatic sequential fluid handling system would enrich the microfluidic toolbox and benefit further inventions. PMID:26487904

  10. Fast carotid artery MR angiography with compressed sensing based three-dimensional time-of-flight sequence.

    PubMed

    Li, Bo; Li, Hao; Dong, Li; Huang, Guofu

    2017-11-01

    In this study, we sought to investigate the feasibility of fast carotid artery MR angiography (MRA) by combining three-dimensional time-of-flight (3D TOF) with a compressed sensing method (CS-3D TOF). A pseudo-sequential phase encoding order was developed for CS-3D TOF to generate hyper-intense vessel and suppress background tissues in under-sampled 3D k-space. Seven healthy volunteers and one patient with carotid artery stenosis were recruited for this study. Five sequential CS-3D TOF scans were implemented at 1, 2, 3, 4 and 5-fold acceleration factors for carotid artery MRA. Blood signal-to-tissue ratio (BTR) values for fully-sampled and under-sampled acquisitions were calculated and compared in seven subjects. Blood area (BA) was measured and compared between the fully sampled acquisition and each under-sampled one. There were no significant differences between the fully-sampled dataset and each under-sampled one in BTR comparisons (P>0.05 for all comparisons). The carotid vessel BAs measured from the images of CS-3D TOF sequences with 2, 3, 4 and 5-fold acceleration scans were all highly correlated with that of the fully-sampled acquisition. The contrast between blood vessels and background tissues of the images at 2 to 5-fold acceleration is comparable to that of fully sampled images. The images at 2× to 5× exhibit lumen definition comparable to that of the corresponding images at 1×. By combining the pseudo-sequential phase encoding order, CS reconstruction, and 3D TOF sequence, this technique provides excellent visualizations for carotid vessel and calcifications in a short scan time. It has the potential to be integrated into current multiple blood contrast imaging protocol. Copyright © 2017. Published by Elsevier Inc.

  11. Human Cognition and Information Processing: Potential Problems for a Field Dependent Human Sequential Information Processor.

    ERIC Educational Resources Information Center

    Shaughnessy, M.; And Others

    Numerous cognitive psychologists have validated the hypothesis, originally advanced by the Russian physician, A. Luria, that different individuals process information in two distinctly different manners: simultaneously and sequentially. The importance of recognizing the existence of these two distinct styles of processing information and selecting…

  12. Structural drift: the population dynamics of sequential learning.

    PubMed

    Crutchfield, James P; Whalen, Sean

    2012-01-01

    We introduce a theory of sequential causal inference in which learners in a chain estimate a structural model from their upstream "teacher" and then pass samples from the model to their downstream "student". It extends the population dynamics of genetic drift, recasting Kimura's selectively neutral theory as a special case of a generalized drift process using structured populations with memory. We examine the diffusion and fixation properties of several drift processes and propose applications to learning, inference, and evolution. We also demonstrate how the organization of drift process space controls fidelity, facilitates innovations, and leads to information loss in sequential learning with and without memory.

  13. Sequential extraction procedure for determination of uranium, thorium, radium, lead and polonium radionuclides by alpha spectrometry in environmental samples

    NASA Astrophysics Data System (ADS)

    Oliveira, J. M.; Carvalho, F. P.

    2006-01-01

    A sequential extraction technique was developed and tested for common naturally-occurring radionuclides. This technique allows the extraction and purification of uranium, thorium, radium, lead, and polonium radionuclides from the same sample. Environmental materials such as water, soil, and biological samples can be analyzed for those radionuclides without matrix interferences in the quality of radioelement purification and in the radiochemical yield. The use of isotopic tracers (232U, 229Th, 224Ra, 209Po, and stable lead carrier) added to the sample in the beginning of the chemical procedure, enables an accurate control of the radiochemical yield for each radioelement. The ion extraction procedure, applied after either complete dissolution of the solid sample with mineral acids or co-precipitation of dissolved radionuclide with MnO2 for aqueous samples, includes the use of commercially available pre-packed columns from Eichrom® and ion exchange columns packed with Bio-Rad resins, three chromatography columns in all. All radioactive elements but one are purified and electroplated on stainless steel discs. Polonium is spontaneously plated on a silver disc. The discs are measured using high resolution silicon surface barrier detectors. 210Pb, a beta emitter, can be measured either through the beta emission of 210Bi, or stored for a few months and determined by alpha spectrometry through the in-growth of 210Po. This sequential extraction chromatography technique was tested and validated with the analysis of certified reference materials from the IAEA. Reproducibility was tested through repeated analysis of the same homogeneous material (water sample).

  14. Demonstration of a longitudinal concentration gradient along scala tympani by sequential sampling of perilymph from the cochlear apex.

    PubMed

    Mynatt, Robert; Hale, Shane A; Gill, Ruth M; Plontke, Stefan K; Salt, Alec N

    2006-06-01

Local applications of drugs to the inner ear are increasingly being used to treat patients' inner ear disorders. Knowledge of the pharmacokinetics of drugs in the inner ear fluids is essential for a scientific basis for such treatments. When auditory function is of primary interest, the drug's kinetics in scala tympani (ST) must be established. Measurement of drug levels in ST is technically difficult because of the known contamination of perilymph samples taken from the basal cochlear turn with cerebrospinal fluid (CSF). Recently, we reported a technique in which perilymph was sampled from the cochlear apex to minimize the influence of CSF contamination (J. Neurosci. Methods, doi: 10.1016/j.jneumeth.2005.10.008). This technique has now been extended by taking smaller fluid samples sequentially from the cochlear apex, which can be used to quantify drug gradients along ST. The sampling and analysis methods were evaluated using an ionic marker, trimethylphenylammonium (TMPA), that was applied to the round window membrane. After loading perilymph with TMPA, 10 1-μL samples were taken from the cochlear apex. The TMPA content of the samples was consistent with the first sample containing perilymph from apical regions and the fourth or fifth sample containing perilymph from the basal turn. TMPA concentration decreased in subsequent samples, as they increasingly contained CSF that had passed through ST. Sample concentration curves were interpreted quantitatively by simulation of the experiment with a finite element model and by an automated curve-fitting method by which the apical-basal gradient was estimated. The study demonstrates that sequential apical sampling provides drug gradient data for ST perilymph while avoiding the major distortions of sample composition associated with basal turn sampling.
The method can be used for any substance for which a sensitive assay is available and is therefore of high relevance for the development of preclinical and clinical strategies for local drug delivery to the inner ear.

  15. Demonstration of a Longitudinal Concentration Gradient Along Scala Tympani by Sequential Sampling of Perilymph from the Cochlear Apex

    PubMed Central

    Mynatt, Robert; Hale, Shane A.; Gill, Ruth M.; Plontke, Stefan K.

    2006-01-01

ABSTRACT Local applications of drugs to the inner ear are increasingly being used to treat patients' inner ear disorders. Knowledge of the pharmacokinetics of drugs in the inner ear fluids is essential for a scientific basis for such treatments. When auditory function is of primary interest, the drug's kinetics in scala tympani (ST) must be established. Measurement of drug levels in ST is technically difficult because of the known contamination of perilymph samples taken from the basal cochlear turn with cerebrospinal fluid (CSF). Recently, we reported a technique in which perilymph was sampled from the cochlear apex to minimize the influence of CSF contamination (J. Neurosci. Methods, doi: 10.1016/j.jneumeth.2005.10.008). This technique has now been extended by taking smaller fluid samples sequentially from the cochlear apex, which can be used to quantify drug gradients along ST. The sampling and analysis methods were evaluated using an ionic marker, trimethylphenylammonium (TMPA), that was applied to the round window membrane. After loading perilymph with TMPA, 10 1-μL samples were taken from the cochlear apex. The TMPA content of the samples was consistent with the first sample containing perilymph from apical regions and the fourth or fifth sample containing perilymph from the basal turn. TMPA concentration decreased in subsequent samples, as they increasingly contained CSF that had passed through ST. Sample concentration curves were interpreted quantitatively by simulation of the experiment with a finite element model and by an automated curve-fitting method by which the apical–basal gradient was estimated. The study demonstrates that sequential apical sampling provides drug gradient data for ST perilymph while avoiding the major distortions of sample composition associated with basal turn sampling.
The method can be used for any substance for which a sensitive assay is available and is therefore of high relevance for the development of preclinical and clinical strategies for local drug delivery to the inner ear. PMID:16718612

  16. On-line crack prognosis in attachment lug using Lamb wave-deterministic resampling particle filter-based method

    NASA Astrophysics Data System (ADS)

    Yuan, Shenfang; Chen, Jian; Yang, Weibo; Qiu, Lei

    2017-08-01

Fatigue crack growth prognosis is important for prolonging service time, improving safety, and reducing maintenance cost in many safety-critical systems, such as aircraft, wind turbines, bridges, and nuclear plants. Combining fatigue crack growth models with the particle filter (PF) method has proved promising for dealing with the uncertainties during fatigue crack growth and reaching a more accurate prognosis. However, research on prognosis methods integrating on-line crack monitoring with the PF method is still lacking, as are experimental verifications. Besides, the PF methods adopted so far are almost all sequential importance resampling-based PFs, which usually encounter sample impoverishment problems and hence perform poorly. To solve these problems, in this paper the piezoelectric transducer (PZT)-based active Lamb wave method is adopted for on-line crack monitoring, and the deterministic resampling PF (DRPF), which can overcome the sample impoverishment problem, is proposed for fatigue crack growth prognosis. The proposed method is verified through fatigue tests of attachment lugs, an important type of joint component in aerospace systems.
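The predict-weight-resample loop underlying such prognosis methods can be sketched as a basic sequential importance resampling (SIR) filter applied to a Paris-law crack growth model. This is an illustrative sketch only: the material constants, noise levels, and the use of plain multinomial resampling (rather than the paper's deterministic resampling) are all assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) Paris-law constants and filter settings
C, m, dsigma = 1e-10, 3.0, 100.0      # material constants, stress range (MPa)
n_particles, meas_noise = 500, 0.2    # particle count, measurement std (mm)

def paris_step(a, dN=1000):
    """Propagate crack length a (mm) over dN cycles with multiplicative noise."""
    dK = dsigma * np.sqrt(np.pi * a * 1e-3)   # stress intensity range (MPa*sqrt(m))
    da = C * dK ** m * dN * 1e3               # crack growth converted to mm
    return a + da * np.exp(rng.normal(0.0, 0.05, a.shape))

# Simulate a "true" crack and noisy (e.g. Lamb-wave-derived) measurements
true_a, measurements = 1.0, []
for _ in range(20):
    true_a = paris_step(np.array([true_a]))[0]
    measurements.append(true_a + rng.normal(0.0, meas_noise))

# Sequential importance resampling: predict, weight, resample
particles = rng.uniform(0.5, 1.5, n_particles)   # prior on initial crack length
for z in measurements:
    particles = paris_step(particles)                          # predict
    w = np.exp(-0.5 * ((z - particles) / meas_noise) ** 2)     # Gaussian likelihood
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample

print(f"true crack: {true_a:.2f} mm, PF estimate: {particles.mean():.2f} mm")
```

Multinomial resampling as used here is exactly where sample impoverishment arises: repeated copying of a few high-weight particles collapses diversity, which is the motivation for the deterministic resampling variant studied in the paper.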

  17. ISSUES RELATED TO SOLUTION CHEMISTRY IN MERCURY SAMPLING IMPINGERS

    EPA Science Inventory

Analysis of mercury (Hg) speciation in combustion flue gases is often accomplished in standardized sampling trains in which the sample is passed sequentially through a series of aqueous solutions to capture and separate oxidized Hg (Hg2+) and elemental Hg (Hg0). Such methods incl...

  18. Real-Time Continuous Identification of Greenhouse Plant Pathogens Based on Recyclable Microfluidic Bioassay System.

    PubMed

    Qu, Xiangmeng; Li, Min; Zhang, Hongbo; Lin, Chenglie; Wang, Fei; Xiao, Mingshu; Zhou, Yi; Shi, Jiye; Aldalbahi, Ali; Pei, Hao; Chen, Hong; Li, Li

    2017-09-20

The development of a real-time continuous analytical platform for pathogen detection is of great scientific importance for achieving better disease control and prevention. In this work, we report a rapid and recyclable microfluidic bioassay system constructed from oligonucleotide arrays for selective and sensitive continuous identification of DNA targets of fungal pathogens. We employ the thermal denaturation method to effectively regenerate the oligonucleotide arrays for multiple sample detection, which considerably reduces screening effort and costs. The combination of thermal denaturation and laser-induced fluorescence detection enables real-time continuous identification of multiple samples (<10 min per sample). As a proof of concept, we have demonstrated that two DNA targets of fungal pathogens (Botrytis cinerea and Didymella bryoniae) can be sequentially analyzed using our rapid microfluidic bioassay system, which provides a new paradigm for the design of microfluidic bioassay systems and will be valuable for chemical and biomedical analysis.

  19. Multi-arm group sequential designs with a simultaneous stopping rule.

    PubMed

    Urach, S; Posch, M

    2016-12-30

Multi-arm group sequential clinical trials are efficient designs to compare multiple treatments to a control. They allow one to test for treatment effects already in interim analyses and can have a lower average sample number than fixed sample designs. Their operating characteristics depend on the stopping rule: We consider simultaneous stopping, where the whole trial is stopped as soon as the null hypothesis of no treatment effect can be rejected for any of the arms, and separate stopping, where recruitment is stopped only for arms in which a significant treatment effect could be demonstrated, while the other arms are continued. For both stopping rules, the family-wise error rate can be controlled by the closed testing procedure applied to group sequential tests of intersection and elementary hypotheses. The group sequential boundaries for the separate stopping rule also control the family-wise error rate if the simultaneous stopping rule is applied. However, we show that for the simultaneous stopping rule, one can apply improved, less conservative stopping boundaries for local tests of elementary hypotheses. We derive corresponding improved Pocock and O'Brien type boundaries as well as optimized boundaries that maximize power or minimize the average sample number, and investigate the operating characteristics and small sample properties of the resulting designs. To control the power to reject at least one null hypothesis, the simultaneous stopping rule requires a lower average sample number than the separate stopping rule. This comes at the cost of a lower power to reject all null hypotheses. Some of this loss in power can be regained by applying the improved stopping boundaries for the simultaneous stopping rule. The procedures are illustrated with clinical trials in systemic sclerosis and narcolepsy. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
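The role of a constant (Pocock-type) group sequential boundary can be illustrated by checking its Type I error by simulation. The sketch below is a minimal single-arm illustration, not the improved multi-arm boundaries derived in the paper; it assumes equally spaced looks and uses the classical two-sided Pocock constant for five looks (approximately 2.413).

```python
import random

def mc_type1(crit, n_looks=5, sims=100_000, rng=None):
    """Monte Carlo Type I error of a group sequential test with n_looks
    equally spaced interim analyses and a constant (Pocock-type) two-sided
    critical value `crit` applied to the cumulative z-statistic."""
    rng = rng or random.Random(0)
    rejections = 0
    for _ in range(sims):
        s = 0.0
        for k in range(1, n_looks + 1):
            s += rng.gauss(0.0, 1.0)          # under H0: standard-normal increments
            if abs(s / k ** 0.5) >= crit:     # z_k = S_k / sqrt(k)
                rejections += 1
                break                          # trial stops at first rejection
    return rejections / sims

# Pocock's constant boundary for 5 looks at two-sided alpha = 0.05 is ~2.413,
# so the simulated overall Type I error should come out near 0.05.
print(mc_type1(2.413))
```

Note that applying the unadjusted fixed-sample critical value 1.96 at every look would inflate the error well above 0.05, which is precisely why group sequential boundaries are needed.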

  20. Preliminary results of sequential extraction experiments for selenium on mine waste and stream sediments from Vermont, Maine, and New Zealand

    USGS Publications Warehouse

    Piatak, N.M.; Seal, R.R.; Sanzolone, R.F.; Lamothe, P.J.; Brown, Z.A.

    2006-01-01

    We report the preliminary results of sequential partial dissolutions used to characterize the geochemical distribution of selenium in stream sediments, mine wastes, and flotation-mill tailings. In general, extraction schemes are designed to extract metals associated with operationally defined solid phases. Total Se concentrations and the mineralogy of the samples are also presented. Samples were obtained from the Elizabeth, Ely, and Pike Hill mines in Vermont, the Callahan mine in Maine, and the Martha mine in New Zealand. These data are presented here with minimal interpretation or discussion. Further analysis of the data will be presented elsewhere.

  1. Simultaneous detection of creatine and creatinine using a sequential injection analysis/biosensor system.

    PubMed

    Stefan-van Staden, Raluca-Ioana; Bokretsion, Rahel Girmai; van Staden, Jacobus F; Aboul-Enein, Hassan Y

    2006-01-01

    Carbon paste based biosensors for the determination of creatine and creatinine have been integrated into a sequential injection system. Applying the multi-enzyme sequence of creatininase (CA), and/or creatinase (CI) and sarcosine oxidase (SO), hydrogen peroxide has been detected amperometrically. The linear concentration ranges are of pmol/L to nmol/L magnitude, with very low limits of detection. The proposed SIA system can be utilized reliably for the on-line simultaneous detection of creatine and creatinine in pharmaceutical products, as well as in serum samples, with a rate of 34 samples per hour and RSD values better than 0.16% (n=10).

  2. Sequential Sampling Models in Cognitive Neuroscience: Advantages, Applications, and Extensions.

    PubMed

    Forstmann, B U; Ratcliff, R; Wagenmakers, E-J

    2016-01-01

    Sequential sampling models assume that people make speeded decisions by gradually accumulating noisy information until a threshold of evidence is reached. In cognitive science, one such model--the diffusion decision model--is now regularly used to decompose task performance into underlying processes such as the quality of information processing, response caution, and a priori bias. In the cognitive neurosciences, the diffusion decision model has recently been adopted as a quantitative tool to study the neural basis of decision making under time pressure. We present a selective overview of several recent applications and extensions of the diffusion decision model in the cognitive neurosciences.
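The core accumulation mechanism of such models can be sketched as a discrete-time random walk with drift toward two absorbing thresholds. The parameter values below are arbitrary illustrations, not fitted values from the literature.

```python
import random

def diffusion_trial(drift=0.3, threshold=1.0, noise=1.0, dt=0.001, max_t=5.0, rng=None):
    """One diffusion-decision trial: evidence x starts at 0 and accumulates
    noisy increments until it crosses +threshold (choice 'A') or -threshold
    (choice 'B'). Returns (choice, decision time in seconds)."""
    rng = rng or random.Random()
    x, t, sd = 0.0, 0.0, noise * dt ** 0.5   # noise scales with sqrt(dt)
    while abs(x) < threshold and t < max_t:
        x += drift * dt + rng.gauss(0.0, sd)
        t += dt
    return ("A" if x >= threshold else "B"), t

# With a positive drift rate, the majority of simulated choices hit boundary 'A',
# and errors ('B' responses) still occur because of the accumulated noise.
r = random.Random(7)
choices = [diffusion_trial(rng=r)[0] for _ in range(200)]
print(choices.count("A"), choices.count("B"))
```

Raising `threshold` in this sketch trades speed for accuracy (response caution), while shifting the starting point away from 0 implements an a priori bias, mirroring the decomposition described in the abstract.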

  3. Radiation detection method and system using the sequential probability ratio test

    DOEpatents

    Nelson, Karl E [Livermore, CA; Valentine, John D [Redwood City, CA; Beauchamp, Brock R [San Ramon, CA

    2007-07-17

    A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations are consistent with a specified model within a given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection, by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
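At its core, the SPRT accumulates a log-likelihood ratio between two decision thresholds. A minimal sketch for Poisson count data follows; the thresholds use Wald's classical approximations, and the rates in the example are illustrative, not taken from the patent.

```python
import math

def sprt(counts, bg_rate, elev_rate, alpha=0.01, beta=0.01):
    """Wald's SPRT on Poisson counts: H0 rate = bg_rate vs H1 rate = elev_rate.

    Returns ('H0' | 'H1' | 'continue', log-likelihood-ratio trace)."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1 (elevated)
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0 (background)
    llr, trace = 0.0, []
    for k in counts:
        # Poisson log-likelihood-ratio contribution of one counting interval
        llr += k * math.log(elev_rate / bg_rate) - (elev_rate - bg_rate)
        trace.append(llr)
        if llr >= upper:
            return "H1", trace
        if llr <= lower:
            return "H0", trace
    return "continue", trace

# An elevated count sequence crosses the upper threshold; background-like
# counts drift down and accept H0.
print(sprt([12, 15, 11, 14], bg_rate=5.0, elev_rate=12.0)[0])  # H1
print(sprt([4, 5, 3, 4], bg_rate=5.0, elev_rate=12.0)[0])      # H0
```

Because the test stops as soon as either threshold is crossed, it typically needs far fewer observations than a fixed-sample test of the same error rates, which is what "maximizing the range of detection" exploits.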

  4. Sample size re-estimation and other midcourse adjustments with sequential parallel comparison design.

    PubMed

    Silverman, Rachel K; Ivanova, Anastasia

    2017-01-01

    Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and then, placebo non-responders are re-randomized in stage 2. Efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility to re-estimate the sample size and adjust the design parameters, allocation proportion to placebo in stage 1 of SPCD, and weight of stage 1 data in the overall efficacy test statistic during an interim analysis.

  5. Sequential Measurement of Intermodal Variability in Public Transportation PM2.5 and CO Exposure Concentrations.

    PubMed

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2016-08-16

    A sequential measurement method is demonstrated for quantifying the variability in exposure concentration during public transportation. This method was applied in Hong Kong by measuring PM2.5 and CO concentrations along a route connecting 13 transportation-related microenvironments within 3-4 h. The study design takes into account ventilation, proximity to local sources, area-wide air quality, and meteorological conditions. Portable instruments were compacted into a backpack to facilitate measurement under crowded transportation conditions and to quantify personal exposure by sampling at nose level. The route included stops next to three roadside monitors to enable comparison of fixed site and exposure concentrations. PM2.5 exposure concentrations were correlated with the roadside monitors, despite differences in averaging time, detection method, and sampling location. Although highly correlated in temporal trend, PM2.5 concentrations varied significantly among microenvironments, with mean concentration ratios versus roadside monitor ranging from 0.5 for MTR train to 1.3 for bus terminal. Measured inter-run variability provides insight regarding the sample size needed to discriminate between microenvironments with increased statistical significance. The study results illustrate the utility of sequential measurement of microenvironments and policy-relevant insights for exposure mitigation and management.

  6. Numerically stable algorithm for combining census and sample estimates with the multivariate composite estimator

    Treesearch

    R. L. Czaplewski

    2009-01-01

    The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...

  7. Sequential single shot X-ray photon correlation spectroscopy at the SACLA free electron laser

    DOE PAGES

    Lehmkühler, Felix; Kwaśniewski, Paweł; Roseker, Wojciech; ...

    2015-11-27

In this study, hard X-ray free electron lasers allow for the first time access to the dynamics of condensed matter samples ranging from femtoseconds to several hundred seconds. In particular, the exceptionally large transverse coherence of the X-ray pulses and the high time-averaged flux promise to reach time and length scales that have not been accessible up to now with storage ring based sources. However, due to the fluctuations originating from the stochastic nature of the self-amplified spontaneous emission (SASE) process, the application of well established techniques such as X-ray photon correlation spectroscopy (XPCS) is challenging. Here we demonstrate a single-shot based sequential XPCS study on a colloidal suspension with a relaxation time comparable to the SACLA free-electron laser pulse repetition rate. High quality correlation functions could be extracted without any indications of sample damage. This opens the way for systematic sequential XPCS experiments at FEL sources.
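The correlation functions referred to here are intensity autocorrelations g2(tau). A minimal estimator over a series of detector frames might look as follows; this is a simplified sketch that averages over all pixels, whereas real XPCS analyses normalise per pixel and bin by scattering vector q.

```python
import numpy as np

def g2(frames):
    """Intensity autocorrelation g2(tau) = <I(t) I(t+tau)> / <I>^2 for a
    frame series of shape (n_frames, n_pixels) -- the core quantity
    extracted in an XPCS experiment. Minimal global-average estimator."""
    I = np.asarray(frames, dtype=float)
    n = I.shape[0]
    denom = I.mean() ** 2                     # <I>^2 over all frames and pixels
    # lag tau runs from 1 to n//2 - 1 frames; each term averages I(t)*I(t+tau)
    return [float((I[:-t] * I[t:]).mean() / denom) for t in range(1, n // 2)]

# A static (non-fluctuating) sample gives g2 = 1 at every lag; dynamics show
# up as a decay of g2 from its tau -> 0 value toward the baseline.
print(g2(np.ones((40, 8)))[:3])
```

In sequential single-shot XPCS, each frame is one SASE pulse, so the accessible lag times are set by the pulse repetition rate, matching the relaxation-time regime described in the abstract.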

  8. Sequential single shot X-ray photon correlation spectroscopy at the SACLA free electron laser

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehmkühler, Felix; Kwaśniewski, Paweł; Roseker, Wojciech

In this study, hard X-ray free electron lasers allow for the first time access to the dynamics of condensed matter samples ranging from femtoseconds to several hundred seconds. In particular, the exceptionally large transverse coherence of the X-ray pulses and the high time-averaged flux promise to reach time and length scales that have not been accessible up to now with storage ring based sources. However, due to the fluctuations originating from the stochastic nature of the self-amplified spontaneous emission (SASE) process, the application of well established techniques such as X-ray photon correlation spectroscopy (XPCS) is challenging. Here we demonstrate a single-shot based sequential XPCS study on a colloidal suspension with a relaxation time comparable to the SACLA free-electron laser pulse repetition rate. High quality correlation functions could be extracted without any indications of sample damage. This opens the way for systematic sequential XPCS experiments at FEL sources.

  9. Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.

    PubMed

    Bonawitz, Elizabeth; Denison, Stephanie; Gopnik, Alison; Griffiths, Thomas L

    2014-11-01

    People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
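The Win-Stay, Lose-Sample idea can be sketched for a discrete hypothesis space: keep the current hypothesis with probability equal to the likelihood of the new datum under it, otherwise resample a hypothesis from the current posterior. The Bernoulli example and the function shape below are illustrative assumptions, not the authors' code.

```python
import random

def win_stay_lose_sample(hypotheses, prior, likelihood, data, rng=None):
    """Win-Stay, Lose-Sample over a discrete hypothesis space. Keeps the
    current hypothesis with probability likelihood(d, current); on a "lose",
    resamples from the current posterior. Assumes likelihood values in [0, 1]."""
    rng = rng or random.Random()
    post = dict(prior)                            # unnormalised posterior
    current = rng.choices(hypotheses, weights=[post[h] for h in hypotheses])[0]
    for d in data:
        for h in hypotheses:                      # posterior update, used for resampling
            post[h] *= likelihood(d, h)
        if rng.random() > likelihood(d, current):             # "lose" -> sample
            current = rng.choices(hypotheses,
                                  weights=[post[h] for h in hypotheses])[0]
    return current

# Toy example: which coin bias (0.2 or 0.8) generated an all-heads sequence?
biases = [0.2, 0.8]
lik = lambda d, h: h if d == 1 else 1 - h         # Bernoulli likelihood
r = random.Random(3)
picks = [win_stay_lose_sample(biases, {0.2: 1.0, 0.8: 1.0}, lik, [1] * 10, rng=r)
         for _ in range(100)]
print(picks.count(0.8))
```

Across many runs the returned hypothesis is distributed approximately according to the posterior, which is what makes this simple rule a cheap approximation to full Bayesian inference.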

  10. The Magnitude, Generality, and Determinants of Flynn Effects on Forms of Declarative Memory and Visuospatial Ability: Time-Sequential Analyses of Data from a Swedish Cohort Study

    ERIC Educational Resources Information Center

    Ronnlund, Michael; Nilsson, Lars-Goran

    2008-01-01

    To estimate Flynn effects (FEs) on forms of declarative memory (episodic, semantic) and visuospatial ability (Block Design) time-sequential analyses of data for Swedish adult samples (35-80 years) assessed on either of four occasions (1989, 1994, 1999, 2004; n = 2995) were conducted. The results demonstrated cognitive gains across occasions,…

  11. Algorithms for Large-Scale Astronomical Problems

    DTIC Science & Technology

    2013-08-01

The pipeline is implemented as a succession of Hadoop MapReduce jobs and sequential programs written in Java. The sampling and splitting stages are implemented as one MapReduce job, while the partitioning and clustering phases make up another job. The merging stage is implemented as a stand-alone sequential Java program that reads the files with the shell information generated by the preceding jobs.

  12. Random sequential adsorption of cubes

    NASA Astrophysics Data System (ADS)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
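The random sequential adsorption algorithm itself is simple: propose uniformly random placements and reject any that overlap an already-placed particle, never moving or removing accepted ones. A simplified, orientation-free sketch with axis-aligned squares follows; the paper's cubes additionally require orientation sampling and a cube-cube intersection test, and the box size and attempt count here are arbitrary.

```python
import random

def rsa_aligned_squares(L=20.0, a=1.0, attempts=20_000, rng=None):
    """Random sequential adsorption of axis-aligned a x a squares in an
    L x L box. Proposes uniform random positions, rejects on overlap, and
    returns the covered area fraction after the given number of attempts."""
    rng = rng or random.Random(42)
    placed = []
    for _ in range(attempts):
        x, y = rng.uniform(0, L - a), rng.uniform(0, L - a)
        # two axis-aligned squares overlap iff their extents overlap on both axes
        if all(abs(x - px) >= a or abs(y - py) >= a for px, py in placed):
            placed.append((x, y))
    return len(placed) * a * a / (L * L)

print(f"packing fraction after 20k attempts: {rsa_aligned_squares():.3f}")
```

As attempts increase, successful insertions become ever rarer and the packing fraction approaches a saturation value below full coverage; studying that saturation value and the kinetics of the approach is exactly what the abstract describes for cubes.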

  13. What Do Lead and Copper Sampling Protocols Mean, and Which Is Right for You?

    EPA Science Inventory

    this presentation will provide a short review of the explicit and implicit concepts behind most of the currently-used regulatory and diagnostic sampling schemes for lead, such as: random daytime sampling; automated proportional sampler; 30 minute first draw stagnation; Sequential...

  14. Leaching Properties of Naturally Occurring Heavy Metals from Soils

    NASA Astrophysics Data System (ADS)

    Zhang, M.; Hoshino, M.; Yoshikawa, M.; Hara, J.; Sugita, H.

    2014-12-01

The major threats to human health from heavy metals are associated with exposure to arsenic, lead, cadmium, chromium, mercury, as well as some other elements. The effects of such heavy metals on human health have been extensively studied and reviewed by international organizations such as WHO. Due to their toxicity, heavy metal contaminations have been regulated by national environmental standards in many countries, and/or laws such as the Soil Contamination Countermeasures Act in Japan. Leaching of naturally occurring heavy metals from soils, especially those around abandoned metal mines, into surrounding groundwater or surface water systems is one of the major pathways of exposure. Therefore, understanding the leaching properties of toxic heavy metals from naturally polluted soils is of fundamental importance for effectively managing abandoned metal mines, managing excavated rocks discharged from infrastructure constructions such as tunneling, and/or selecting a pertinent countermeasure against pollution when necessary. In this study, soil samples taken from the surroundings of abandoned metal mines in different regions in Japan were collected and analyzed. The samples contained multiple heavy metals such as lead, arsenic, and chromium. Standard leaching tests and sequential leaching tests considering different forms of contaminants, such as trivalent and pentavalent arsenic and trivalent and hexavalent chromium, were performed, together with standard tests for evaluating total concentration, X-ray fluorescence (XRF) analysis, X-ray diffraction (XRD) analysis, and cation exchange capacity (CEC) tests. In addition, sequential leaching tests were performed to evaluate long-term leaching properties of lead from representative samples.
This presentation introduces the details of the above experimental study, discusses the relationships among leaching properties and chemical and mineral compositions, indicates the difficulties associated with remediation of naturally polluted sites, and emphasizes the importance of risk-based countermeasures against naturally occurring heavy metals. Keywords: Leaching properties, Control Factor, Naturally Occurring Heavy Metals, Lead, Arsenic, Chromium

  15. Rapid sequential determination of Pu, 90Sr and 241Am nuclides in environmental samples using an anion exchange and Sr-Spec resins.

    PubMed

    Lee, M H; Ahn, H J; Park, J H; Park, Y J; Song, K

    2011-02-01

This paper presents a quantitative and rapid method for the sequential separation of Pu, (90)Sr and (241)Am nuclides in environmental soil samples using an anion exchange resin and Sr Spec resin. After the sample solution was passed through an anion exchange column connected to a Sr Spec column, Pu isotopes were purified on the anion exchange column. Strontium-90 was separated from other interfering elements by the Sr Spec column. Americium-241 was purified from lanthanides on the anion exchange resin after oxalate co-precipitation. Measurement of Pu and Am isotopes was carried out using an α-spectrometer. Strontium-90 was measured by a low-level liquid scintillation counter. The radiochemical procedure for Pu, (90)Sr and (241)Am nuclides investigated in this study was validated by application to IAEA reference materials and environmental soil samples. Copyright © 2010 Elsevier Ltd. All rights reserved.

  16. A Sequential Analysis of Parent-Child Interactions in Anxious and Nonanxious Families

    ERIC Educational Resources Information Center

    Williams, Sarah R.; Kertz, Sarah J.; Schrock, Matthew D.; Woodruff-Borden, Janet

    2012-01-01

    Although theoretical work has suggested that reciprocal behavior patterns between parent and child may be important in the development of childhood anxiety, most empirical work has failed to consider the bidirectional nature of interactions. The current study sought to address this limitation by utilizing a sequential approach to exploring…

  17. Investigating Stage-Sequential Growth Mixture Models with Multiphase Longitudinal Data

    ERIC Educational Resources Information Center

    Kim, Su-Young; Kim, Jee-Seon

    2012-01-01

    This article investigates three types of stage-sequential growth mixture models in the structural equation modeling framework for the analysis of multiple-phase longitudinal data. These models can be important tools for situations in which a single-phase growth mixture model produces distorted results and can allow researchers to better understand…

  18. ANALYSIS OF SEQUENTIAL FAILURES FOR ASSESSMENT OF RELIABILITY AND SAFETY OF MANUFACTURING SYSTEMS. (R828541)

    EPA Science Inventory

    Assessment of reliability and safety of a manufacturing system with sequential failures is an important issue in industry, since the reliability and safety of the system depend not only on all failed states of system components, but also on the sequence of occurrences of those...

  19. Alternatives to the sequential lineup: the importance of controlling the pictures.

    PubMed

    Lindsay, R C; Bellinger, K

    1999-06-01

Because sequential lineups reduce false-positive choices, their use has been recommended (R. C. L. Lindsay, 1999; R. C. L. Lindsay & G. L. Wells, 1985). Blind testing is included in the recommended procedures. Police, concerned about blind testing, devised alternative procedures, including self-administered sequential lineups, to reduce the use of relative judgments (G. L. Wells, 1984) while permitting the investigating officer to conduct the procedure. Identification data from undergraduates exposed to a staged crime (N = 165) demonstrated that the 4 alternative identification procedures tested were less effective than the original sequential lineup. Allowing witnesses to control the photographs resulted in higher rates of false-positive identification. Self-reports of using relative judgments were shown to be postdictive of decision accuracy.

  20. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors.

    PubMed

    Scott, Anthony; Jeon, Sung-Hee; Joyce, Catherine M; Humphreys, John S; Kalb, Guyonne; Witt, Julia; Leahy, Anne

    2011-09-05

Surveys of doctors are an important data collection method in health services research. Ways to improve response rates, minimise survey response bias and item non-response, within a given budget, have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors. A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types: general practitioners, specialists, specialists-in-training, and hospital non-specialists, and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or, sequential mixed mode (online followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode, doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost. The online mode had a response rate of 12.95%, followed by the simultaneous mixed mode with 19.7%, and the sequential mixed mode with 20.7%. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate compared to the simultaneous mixed mode, and a 7.7 percentage point lower response rate compared to sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response compared to both mixed modes.
The total cost of the online survey was 38% lower than simultaneous mixed mode and 22% lower than sequential mixed mode. The cost of the sequential mixed mode was 14% lower than simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although exhibiting some evidence of response bias. Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population.

  1. A randomised trial and economic evaluation of the effect of response mode on response rate, response bias, and item non-response in a survey of doctors

    PubMed Central

    2011-01-01

Background Surveys of doctors are an important data collection method in health services research. Ways to improve response rates, minimise survey response bias and item non-response, within a given budget, have not previously been addressed in the same study. The aim of this paper is to compare the effects and costs of three different modes of survey administration in a national survey of doctors. Methods A stratified random sample of 4.9% (2,702/54,160) of doctors undertaking clinical practice was drawn from a national directory of all doctors in Australia. Stratification was by four doctor types: general practitioners, specialists, specialists-in-training, and hospital non-specialists, and by six rural/remote categories. A three-arm parallel trial design with equal randomisation across arms was used. Doctors were randomly allocated to: online questionnaire (902); simultaneous mixed mode (a paper questionnaire and login details sent together) (900); or, sequential mixed mode (online followed by a paper questionnaire with the reminder) (900). Analysis was by intention to treat, as within each primary mode, doctors could choose either paper or online. Primary outcome measures were response rate, survey response bias, item non-response, and cost. Results The online mode had a response rate of 12.95%, followed by the simultaneous mixed mode with 19.7%, and the sequential mixed mode with 20.7%. After adjusting for observed differences between the groups, the online mode had a 7 percentage point lower response rate compared to the simultaneous mixed mode, and a 7.7 percentage point lower response rate compared to sequential mixed mode. The difference in response rate between the sequential and simultaneous modes was not statistically significant. Both mixed modes showed evidence of response bias, whilst the characteristics of online respondents were similar to the population. However, the online mode had a higher rate of item non-response compared to both mixed modes.
The total cost of the online survey was 38% lower than simultaneous mixed mode and 22% lower than sequential mixed mode. The cost of the sequential mixed mode was 14% lower than simultaneous mixed mode. Compared to the online mode, the sequential mixed mode was the most cost-effective, although exhibiting some evidence of response bias. Conclusions Decisions on which survey mode to use depend on response rates, response bias, item non-response and costs. The sequential mixed mode appears to be the most cost-effective mode of survey administration for surveys of the population of doctors, if one is prepared to accept a degree of response bias. Online surveys are not yet suitable to be used exclusively for surveys of the doctor population. PMID:21888678

  2. Heavy metal extractable forms in sludge from wastewater treatment plants.

    PubMed

    Alvarez, E Alonso; Mochón, M Callejón; Jiménez Sánchez, J C; Ternero Rodríguez, M

    2002-05-01

    The analysis of heavy metals is a very important task in assessing the potential environmental and health risks associated with the sludge produced by wastewater treatment plants (WWTPs). However, it is widely accepted that determination of total element content does not give an accurate estimate of the potential environmental impact, so it is necessary to apply sequential extraction techniques to obtain suitable information about bioavailability and toxicity. In this paper, a sequential extraction scheme following the BCR guidelines was applied to sludge samples collected from each sludge treatment step of five municipal activated sludge plants. Al, Cd, Co, Cu, Cr, Fe, Mn, Hg, Mo, Ni, Pb, Ti and Zn were determined in the sludge extracts by inductively coupled plasma atomic emission spectrometry. With regard to current international legislation on the agricultural use of sludge, none of the metal concentrations exceeded the maximum permitted levels. For most of the metals under consideration, results showed a clear rise along the sludge treatment train in the proportion of the two less-available fractions (oxidizable metal and residual metal).

  3. Investigation of arsenic removal in batch wise water treatments by means of sequential hydride generation flow injection analysis.

    PubMed

    Toda, Kei; Takaki, Mari; Hashem, Md Abul

    2008-08-01

    Arsenic pollution of water is a serious issue worldwide. Determination of inorganic arsenic in each oxidation state is important because As(III) is much more toxic than As(V). An automated arsenic measurement system was developed based on complete vaporization of As by a sequential procedure and collection/preconcentration of the vaporized AsH(3), which was subsequently measured by flow analysis. The automated, sensitive method was applied to monitoring As(III) and As(V) concentrations in contaminated water left to stand overnight. The behavior of arsenic species was investigated under different conditions, and distinctive time-dependence profiles were obtained. For example, when anaerobic water samples were left standing, the As(III) concentration began decreasing immediately, whereas a dead time was observed before removal of As(V) began. Under normal groundwater conditions, most arsenic was removed from the water simply by standing overnight. To achieve more effective removal, the addition of oxidants and the use of steel wool were investigated. Simple batchwise treatments of arsenic-contaminated water were demonstrated, and the transitional changes in As(III) and As(V) were investigated in detail.

  4. Determination of technetium-99 in environmental samples: a review.

    PubMed

    Shi, Keliang; Hou, Xiaolin; Roos, Per; Wu, Wangsuo

    2012-01-04

    Because technetium has no stable isotope, and because of its high mobility and long half-life, (99)Tc is considered one of the most important radionuclides in safety assessment of environmental radioactivity as well as in nuclear waste management. (99)Tc is also an important tracer for oceanographic research because of the high solubility of technetium in seawater as TcO(4)(-). A number of analytical methods, using chemical separation combined with radiometric and mass spectrometric measurement techniques, have been developed over the past decades for determination of (99)Tc in different environmental samples. This article summarizes and compares recently reported chemical separation procedures and measurement methods for determination of (99)Tc. Because of the extremely low concentration of (99)Tc in environmental samples, sample preparation, pre-concentration, chemical separation and purification to remove interferences are the most important issues governing its accurate determination. These aspects are discussed in detail in this article. The different measurement techniques for (99)Tc are also compared with respect to their advantages and drawbacks. Novel automated analytical methods for rapid determination of (99)Tc, using solid-phase extraction or ion-exchange chromatography for separation and employing flow injection or sequential injection approaches, are also discussed. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Spatial and Temporal Variability in Sediment P Distribution and Speciation in Coastal Louisiana

    NASA Astrophysics Data System (ADS)

    Bowes, K.; White, J. R.; Maiti, K.

    2017-12-01

    Excess loading of phosphorus (P) and nitrogen (N) into aquatic systems degrades water quality and diminishes important ecosystem services. In the Northern Gulf of Mexico (NGOM), excess P and N loading has produced a seasonally present hypoxic area, with less than 2 mg/L O2 in bottom waters, of approximately 26,000 km2 in 2017. A sequential extraction (SEDEX) method was performed on surficial sediments from five coastal and shelf sites located at increasing distance from the Mississippi River mouth in the NGOM. To better quantify temporal variability in P distribution and speciation, samples were collected during both low (August) and high (May) river flow regimes. Sequential extraction techniques have been successful in separating pools of P into exchangeable or loosely sorbed P, Fe-P, authigenic P, detrital P, and organic P. Preliminary results suggest authigenic P is approximately 3-6 times more concentrated in NGOM sediments than all other P pools. Fractionation results did not show a consistent trend with sediment depth. Sediment samples had an average moisture content of 58.72% ± 12.06% and an average bulk density of 0.582 ± 0.275 g/cm3. Continued analysis of P speciation and cycling in NGOM sediments is critical to understanding the driving forces behind coastal eutrophication and to informing effective nutrient management strategies.

  6. Metal fractionation in olive oil and urban sewage sludges using the three-stage BCR sequential extraction method and microwave single extractions.

    PubMed

    Pérez Cid, B; Fernández Alborés, A; Fernández Gómez, E; Faliqé López, E

    2001-08-01

    The conventional three-stage BCR sequential extraction method was employed for the fractionation of heavy metals in sewage sludge samples from an urban wastewater treatment plant and from an olive oil factory. The results obtained for Cu, Cr, Ni, Pb and Zn in these samples were compared with those attained by a simplified extraction procedure based on microwave single extractions, using the same reagents as employed in each individual BCR fraction. The microwave operating conditions for the single extractions (heating time and power) were optimized for all the metals studied in order to achieve an extraction efficiency similar to that of the conventional BCR procedure. The measurement of metals in the extracts was carried out by flame atomic absorption spectrometry. The results obtained in the first and third fractions by the proposed procedure were, for all metals, in good agreement with those obtained using the BCR sequential method. Although the extraction efficiency of the accelerated procedure in the reducible fraction was inferior to that of the conventional method, the overall amounts of metal leached by the microwave single and sequential extractions were basically the same (recoveries between 90.09 and 103.7%), except for Zn in urban sewage sludges, for which an extraction efficiency of 87% was achieved. Chemometric analysis showed a good correlation between the results given by the two extraction methodologies. The application of the proposed approach to a certified reference material (CRM-601) also provided satisfactory results in the first and third fractions, as was observed for the sludge samples analysed.

  7. Uncovering key patterns in self-harm in adolescents: Sequence analysis using the Card Sort Task for Self-harm (CaTS).

    PubMed

    Townsend, E; Wadman, R; Sayal, K; Armstrong, M; Harroe, C; Majumder, P; Vostanis, P; Clarke, D

    2016-12-01

    Self-harm is a significant clinical issue in adolescence. There is little research on the interplay of key factors in the months, weeks, days and hours leading to self-harm. We developed the Card Sort Task for Self-harm (CaTS) to investigate the pattern of thoughts, feelings, events and behaviours leading to self-harm. Forty-five young people (aged 13-21 years) with recent repeated self-harm completed the CaTS to describe their first-ever and most recent self-harm episodes. Lag sequential analysis determined significant transitions in factors leading to self-harm (presented in state transition diagrams). A significant sequential structure to the card sequences was observed, demonstrating similarities and important differences in the antecedents of first and most recent self-harm. Life events were distal in the self-harm pathway and more heterogeneous. Of significant clinical concern, the wish to die and hopelessness emerged as important antecedents of the most recent episode. First-ever self-harm was associated with feeling better afterward, but this effect disappeared for the most recent episode. Larger sample sizes are necessary to examine longer chains of sequences and differences by gender, age and type of self-harm. The sample was self-selected, with 53% having experience of living in care. The CaTS offers a systematic approach to understanding the dynamic interplay of factors that lead to self-harm in young people and a method to target key points for intervention in the self-harm pathway. Crucially, the factors most proximal to self-harm (negative emotions, impulsivity and access to means) are modifiable with existing clinical interventions. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  8. A sampling and classification item selection approach with content balancing.

    PubMed

    Chen, Pei-Hua

    2015-03-01

    Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially with an optimization approach usually yields non-parallel forms and requires heuristic modifications, whereas methods based on random search have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistically based item selection of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method for constructing multiple parallel forms matched item-by-item to a reference form. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is also computationally less intensive than the sequential and simultaneous versions of IID WDM.

  9. Numerical study on the sequential Bayesian approach for radioactive materials detection

    NASA Astrophysics Data System (ADS)

    Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng

    2013-01-01

    A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive materials detection. Compared with commonly adopted detection methods based on classical statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analysing spectra that contain low total counts, especially for complex radionuclide mixtures. In this paper, a simulation experiment platform implementing the sequential Bayesian methodology was developed. Event sequences of γ-rays, associated with the true parameters of a LaBr3(Ce) detector, were obtained from an event-sequence generator based on Monte Carlo sampling to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented by the expected detection rate (Am) and the tested detection rate (Gm) respectively, is investigated. To achieve optimal performance of this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
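
    The core of a sequential Bayesian detector can be illustrated with a minimal sketch, assuming Poisson-distributed counts per fixed counting interval and a known source rate; the rates, counts and decision threshold below are hypothetical, and this interval-count model is much simpler than the photon-by-photon event processing of Candy et al.:

```python
import math

def sequential_bayes_detect(counts, lam_bg, lam_src, prior=0.5, threshold=0.95):
    """Sequentially update P(source present) after each counting interval.

    Counts are modelled as Poisson with rate lam_bg (background only) or
    lam_bg + lam_src (source present). Detection is declared as soon as the
    posterior probability crosses the threshold.
    """
    def pois_ll(k, lam):
        # log of the Poisson pmf: k*log(lam) - lam - log(k!)
        return k * math.log(lam) - lam - math.lgamma(k + 1)

    log_odds = math.log(prior / (1 - prior))
    history = []
    for n, k in enumerate(counts, start=1):
        # Add the log-likelihood ratio of this interval's count
        log_odds += pois_ll(k, lam_bg + lam_src) - pois_ll(k, lam_bg)
        p = 1.0 / (1.0 + math.exp(-log_odds))
        history.append(p)
        if p >= threshold:
            return n, history  # detection declared after n intervals
    return None, history

# Hypothetical counts over 5 s of 1 s intervals, background 5 cps, source 5 cps
decided_at, probs = sequential_bayes_detect([8, 9, 11, 10, 12],
                                            lam_bg=5.0, lam_src=5.0)
```

    With these hypothetical numbers the posterior crosses the 0.95 threshold after the third interval, illustrating how the sequential approach can shorten verification time for low-count data.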

  10. Electromagnetic-induction logging to monitor changing chloride concentrations

    USGS Publications Warehouse

    Metzger, Loren F.; Izbicki, John A.

    2013-01-01

    Water from the San Joaquin Delta, having chloride concentrations up to 3590 mg/L, has intruded freshwater aquifers underlying Stockton, California. Changes in chloride concentrations at depth within these aquifers were evaluated using sequential electromagnetic (EM) induction logs collected during 2004 through 2007 at seven multiple-well sites as deep as 268 m. Sequential EM logging is useful for identifying changes in groundwater quality through polyvinyl chloride-cased wells in intervals not screened by wells. These unscreened intervals represent more than 90% of the aquifer at the sites studied. Sequential EM logging suggested degrading groundwater quality in numerous thin intervals, typically between 1 and 7 m in thickness, especially in the northern part of the study area. Some of these intervals were unscreened by wells, and would not have been identified by traditional groundwater sample collection. Sequential logging also identified intervals with improving water quality, possibly due to groundwater management practices that have limited pumping and promoted artificial recharge. EM resistivity was correlated with chloride concentrations in sampled wells and in water from core material. Natural gamma log data were used to account for the effect of aquifer lithology on EM resistivity. Results of this study show that sequential EM logging is useful for identifying and monitoring the movement of high-chloride water, having lower salinities and chloride concentrations than sea water, in aquifer intervals not screened by wells, and that increases in chloride in water from wells in the area are consistent with high-chloride water originating from the San Joaquin Delta rather than from the underlying saline aquifer.

  11. A Mixed Methods Investigation of Mixed Methods Sampling Designs in Social and Health Science Research

    ERIC Educational Resources Information Center

    Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.

    2007-01-01

    A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…

  12. Technical Reports Prepared Under Contract N00014-76-C-0475.

    DTIC Science & Technology

    1987-05-29

    Technical Report No., Title, Author, Date (entries truncated in the source): 264, Approximations to Densities in Geometric Probability, H. Solomon and M.A. Stephens, 10/27/78; 265, Sequential ...; ... Certain Multivariate Normal Probabilities, S. Iyengar, 8/12/82; 323, EDF Statistics for Testing for the Gamma Distribution with..., M.A. Stephens, 8/13/82; 360, Random Sequential Coding By Hamming Distance, Yoshiaki Itoh and Herbert Solomon, 07-11-85; 361, Transforming Censored Samples And Testing Fit

  13. Photoacoustic spectroscopy sample array vessels and photoacoustic spectroscopy methods for using the same

    DOEpatents

    Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.

    2006-02-14

    Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically positioned near the sample cells. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.

  14. Addressing Challenges in Web Accessibility for the Blind and Visually Impaired

    ERIC Educational Resources Information Center

    Guercio, Angela; Stirbens, Kathleen A.; Williams, Joseph; Haiber, Charles

    2011-01-01

    Searching for relevant information on the web is an important aspect of distance learning. This activity is a challenge for visually impaired distance learners. While sighted people have the ability to filter information in a fast and non-sequential way, blind persons rely on tools that process the information in a sequential way. Learning is…

  15. Estimation of parameters and basic reproduction ratio for Japanese encephalitis transmission in the Philippines using sequential Monte Carlo filter

    USDA-ARS?s Scientific Manuscript database

    We developed a sequential Monte Carlo filter to estimate the states and the parameters in a stochastic model of Japanese Encephalitis (JE) spread in the Philippines. This method is particularly important for its adaptability to the availability of new incidence data. This method can also capture the...
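
    The sequential Monte Carlo (particle) filter referred to above can be sketched in its sampling importance resampling (SIR) form on a toy model; the linear-Gaussian state equation, noise levels and particle count below are illustrative assumptions, not the JE transmission model from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian toy model:
#   state:       x_t = 0.9 * x_{t-1} + N(0, 0.5)
#   observation: y_t = x_t + N(0, 0.5)
T, N = 50, 500
true_x = np.zeros(T)
for t in range(1, T):
    true_x[t] = 0.9 * true_x[t - 1] + rng.normal(0.0, 0.5)
obs = true_x + rng.normal(0.0, 0.5, T)

# SIR particle filter
particles = rng.normal(0.0, 1.0, N)
estimates = np.zeros(T)
for t in range(T):
    # Predict: propagate every particle through the state transition
    particles = 0.9 * particles + rng.normal(0.0, 0.5, N)
    # Update: weight particles by the Gaussian likelihood of the observation
    weights = np.exp(-0.5 * ((obs[t] - particles) / 0.5) ** 2)
    weights /= weights.sum()
    estimates[t] = np.sum(weights * particles)
    # Resample: draw a new, equally weighted particle set proportional to weight
    particles = particles[rng.choice(N, size=N, p=weights)]

rmse = float(np.sqrt(np.mean((estimates - true_x) ** 2)))
```

    The same predict/update/resample loop carries over to nonlinear epidemic models; only the state transition and likelihood change, and parameters can be estimated by augmenting the state vector with them.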

  16. How Cognitive Styles Affect the Learning Behaviors of Online Problem-Solving Based Discussion Activity: A Lag Sequential Analysis

    ERIC Educational Resources Information Center

    Wu, Sheng-Yi; Hou, Huei-Tse

    2015-01-01

    Cognitive styles play an important role in influencing the learning process, but to date no relevant study has been conducted using lag sequential analysis to assess knowledge construction learning patterns based on different cognitive styles in computer-supported collaborative learning activities in online collaborative discussions. This study…

  17. Automated Registration of Sequential Breath-Hold Dynamic Contrast-Enhanced MRI Images: a Comparison of 3 Techniques

    PubMed Central

    Rajaraman, Sivaramakrishnan; Rodriguez, Jeffery J.; Graff, Christian; Altbach, Maria I.; Dragovich, Tomislav; Sirlin, Claude B.; Korn, Ronald L.; Raghunand, Natarajan

    2011-01-01

    Dynamic Contrast-Enhanced MRI (DCE-MRI) is increasingly in use as an investigational biomarker of response in cancer clinical studies. Proper registration of images acquired at different time-points is essential for deriving diagnostic information from quantitative pharmacokinetic analysis of these data. Motion artifacts in the presence of time-varying intensity due to contrast-enhancement make this registration problem challenging. DCE-MRI of chest and abdominal lesions is typically performed during sequential breath-holds, which introduces misregistration due to inconsistent diaphragm positions, and also places constraints on temporal resolution vis-à-vis free-breathing. In this work, we have employed a computer-generated DCE-MRI phantom to compare the performance of two published methods, Progressive Principal Component Registration and Pharmacokinetic Model-Driven Registration, with Sequential Elastic Registration (SER) to register adjacent time-sample images using a published general-purpose elastic registration algorithm. In all 3 methods, a 3-D rigid-body registration scheme with a mutual information similarity measure was used as a pre-processing step. The DCE-MRI phantom images were mathematically deformed to simulate misregistration which was corrected using the 3 schemes. All 3 schemes were comparably successful in registering large regions of interest (ROIs) such as muscle, liver, and spleen. SER was superior in retaining tumor volume and shape, and in registering smaller but important ROIs such as tumor core and tumor rim. The performance of SER on clinical DCE-MRI datasets is also presented. PMID:21531108

  18. Sequential inflammatory processes define human progression from M. tuberculosis infection to tuberculosis disease.

    PubMed

    Scriba, Thomas J; Penn-Nicholson, Adam; Shankar, Smitha; Hraha, Tom; Thompson, Ethan G; Sterling, David; Nemes, Elisa; Darboe, Fatoumatta; Suliman, Sara; Amon, Lynn M; Mahomed, Hassan; Erasmus, Mzwandile; Whatney, Wendy; Johnson, John L; Boom, W Henry; Hatherill, Mark; Valvo, Joe; De Groote, Mary Ann; Ochsner, Urs A; Aderem, Alan; Hanekom, Willem A; Zak, Daniel E

    2017-11-01

    Our understanding of mechanisms underlying progression from Mycobacterium tuberculosis infection to pulmonary tuberculosis disease in humans remains limited. To define such mechanisms, we followed M. tuberculosis-infected adolescents longitudinally. Blood samples from forty-four adolescents who ultimately developed tuberculosis disease (“progressors”) were compared with those from 106 matched controls, who remained healthy during two years of follow-up. We performed longitudinal whole blood transcriptomic analyses by RNA sequencing and plasma proteome analyses using multiplexed slow off-rate modified DNA aptamers. Tuberculosis progression was associated with sequential modulation of immunological processes. Type I/II interferon signalling and the complement cascade were elevated 18 months before tuberculosis disease diagnosis, while changes in myeloid inflammation, lymphoid, monocyte and neutrophil gene modules occurred more proximally to tuberculosis disease. Analysis of gene expression in purified T cells also revealed early suppression of Th17 responses in progressors, relative to M. tuberculosis-infected controls. This was confirmed in an independent adult cohort who received BCG re-vaccination; transcript expression of interferon response genes in blood prior to BCG administration was associated with suppression of IL-17 expression by BCG-specific CD4 T cells 3 weeks post-vaccination. Our findings provide a timeline of the different immunological stages of disease progression, comprising sequential inflammatory dynamics and immune alterations that precede disease manifestations and diagnosis of tuberculosis disease. These findings have important implications for developing diagnostics, vaccination and host-directed therapies for tuberculosis. ClinicalTrials.gov, NCT01119521.

  19. Mining of high utility-probability sequential patterns from uncertain databases

    PubMed Central

    Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting

    2017-01-01

    High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor, as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments on both real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847
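
    The two measures being combined, utility and probability, can be illustrated with a minimal sketch under simplifying assumptions: each item carries a utility and an existence probability, a pattern's probability in a sequence is the product of its matched items' probabilities, and only the first (greedy) occurrence is scored. Real HUSPM/HUPSPM algorithms instead consider all occurrences (e.g. taking the maximum utility) and prune the search with upper bounds; the toy database below is hypothetical:

```python
def first_match(seq, pattern):
    """Greedily match pattern as a subsequence of seq; return the matched
    (utility, probability) pairs, or None if the pattern does not occur."""
    out, i = [], 0
    for item, util, prob in seq:
        if i < len(pattern) and item == pattern[i]:
            out.append((util, prob))
            i += 1
    return out if i == len(pattern) else None

def pattern_measures(db, pattern):
    """Total utility and total (expected) probability of a pattern in db."""
    total_util, total_prob = 0.0, 0.0
    for seq in db:
        m = first_match(seq, pattern)
        if m:
            total_util += sum(u for u, _ in m)
            p = 1.0
            for _, pr in m:
                p *= pr
            total_prob += p
    return total_util, total_prob

# Toy uncertain sequence database: (item, utility, existence probability)
db = [
    [("a", 5, 0.9), ("b", 3, 0.8), ("c", 2, 0.7)],
    [("a", 4, 0.6), ("c", 1, 0.9)],
    [("b", 2, 0.5), ("a", 6, 0.9), ("c", 3, 0.8)],
]
util, prob = pattern_measures(db, ["a", "c"])
```

    A pattern would then be kept as a HUPSP when both measures clear user-chosen minimum utility and minimum probability thresholds.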

  20. Modeling eye gaze patterns in clinician-patient interaction with lag sequential analysis.

    PubMed

    Montague, Enid; Xu, Jie; Chen, Ping-Yu; Asan, Onur; Barrett, Bruce P; Chewning, Betty

    2011-10-01

    The aim of this study was to examine whether lag sequential analysis could be used to describe eye gaze orientation between clinicians and patients in the medical encounter. This topic is particularly important as new technologies are implemented into multiuser health care settings in which trust is critical and nonverbal cues are integral to achieving trust. This analysis method could lead to design guidelines for technologies and more effective assessments of interventions. Nonverbal communication patterns are important aspects of clinician-patient interactions and may affect patient outcomes. The eye gaze behaviors of clinicians and patients in 110 videotaped medical encounters were analyzed using the lag sequential method to identify significant behavior sequences. Lag sequential analysis included both event-based lag and time-based lag. Results from event-based lag analysis showed that the patient's gaze followed that of the clinician, whereas the clinician's gaze did not follow the patient's. Time-based sequential analysis showed that responses from the patient usually occurred within 2 s after the initial behavior of the clinician. Our data suggest that the clinician's gaze significantly affects the medical encounter but that the converse is not true. Findings from this research have implications for the design of clinical work systems and modeling interactions. Similar research methods could be used to identify different behavior patterns in clinical settings (physical layout, technology, etc.) to facilitate and evaluate clinical work system designs.
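
    Event-based lag sequential analysis of a coded behaviour stream reduces to counting lag-1 transitions and testing each against chance. The sketch below uses one common adjusted z formulation (after Allison and Liker) on a hypothetical two-code gaze stream; it is not the authors' exact implementation:

```python
from collections import Counter
import math

def lag1_zscores(events):
    """Count lag-1 transitions and score each with an Allison-Liker-style
    adjusted z statistic (one common formulation; other corrections exist)."""
    pairs = list(zip(events, events[1:]))
    n = len(pairs)
    trans = Counter(pairs)                 # observed transition counts
    given = Counter(a for a, _ in pairs)   # totals for the antecedent code
    target = Counter(b for _, b in pairs)  # totals for the following code
    z = {}
    for (a, b), obs in trans.items():
        pg, pt = given[a] / n, target[b] / n
        expected = given[a] * pt           # expected count under independence
        denom = math.sqrt(given[a] * pt * (1 - pt) * (1 - pg))
        z[(a, b)] = (obs - expected) / denom if denom else float("nan")
    return z

# Hypothetical coded stream: C = clinician-initiated gaze, P = patient follows
z = lag1_zscores(list("CPCPCPCCPP"))
```

    A large positive z for a transition such as (C, P) indicates that the patient's gaze follows the clinician's more often than chance, which is the kind of asymmetry the study reports.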

  2. Using timed event sequential data in nursing research.

    PubMed

    Pecanac, Kristen E; Doherty-King, Barbara; Yoon, Ju Young; Brown, Roger; Schiefelbein, Tony

    2015-01-01

    Measuring behavior is important in nursing research, and innovative technologies are needed to capture the "real-life" complexity of behaviors and events. The purpose of this article is to describe the use of timed event sequential data in nursing research and to demonstrate the use of this data in a research study. Timed event sequencing allows the researcher to capture the frequency, duration, and sequence of behaviors as they occur in an observation period and to link the behaviors to contextual details. Timed event sequential data can easily be collected with handheld computers, loaded with a software program designed for capturing observations in real time. Timed event sequential data add considerable strength to analysis of any nursing behavior of interest, which can enhance understanding and lead to improvement in nursing practice.
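
    The three measures named above, frequency, duration, and sequence, fall straight out of timed event records; the behaviour codes and timestamps below are hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical timed event records: (behaviour code, start second, end second)
events = [("sit", 0, 30), ("walk", 30, 45), ("sit", 45, 80), ("stand", 80, 95)]

# Frequency: how often each behaviour occurred in the observation period
frequency = Counter(code for code, _, _ in events)

# Duration: total seconds spent in each behaviour
duration = defaultdict(int)
for code, start, end in events:
    duration[code] += end - start

# Sequence: the order of behaviours, as lag-1 pairs for sequential analysis
sequence = [(a, b) for (a, _, _), (b, _, _) in zip(events, events[1:])]
```

    Contextual details (location, persons present, etc.) can be carried as extra fields on each record without changing these computations.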

  3. Multi-atlas segmentation of the cartilage in knee MR images with sequential volume- and bone-mask-based registrations

    NASA Astrophysics Data System (ADS)

    Lee, Han Sang; Kim, Hyeun A.; Kim, Hyeonjin; Hong, Helen; Yoon, Young Cheol; Kim, Junmo

    2016-03-01

    In spite of its clinical importance in diagnosis of osteoarthritis, segmentation of cartilage in knee MRI remains a challenging task due to its shape variability and low contrast with surrounding soft tissues and synovial fluid. In this paper, we propose a multi-atlas segmentation of cartilage in knee MRI with sequential atlas registrations and locally-weighted voting (LWV). First, bone is segmented by sequential volume- and object-based registrations and LWV. Second, to overcome the shape variability of cartilage, cartilage is segmented by bone-mask-based registration and LWV. In experiments, the proposed method improved the bone segmentation by reducing misclassified bone regions, and enhanced the cartilage segmentation by preventing cartilage leakage into surrounding regions of similar intensity, with the help of sequential registrations and LWV.
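
    Locally-weighted voting can be sketched as follows, assuming the atlases have already been registered to the target; weighting each voxel by a Gaussian of the intensity difference is a simplification of the patch-based similarity often used, and the tiny arrays and labels below are toy values:

```python
import numpy as np

def locally_weighted_voting(target, atlases, labels, sigma=10.0):
    """Fuse registered atlas label maps per voxel, weighting each atlas by
    its local intensity agreement with the target image (Gaussian of the
    squared intensity difference). A sketch, not a full LWV pipeline."""
    votes = {}
    for img, lab in zip(atlases, labels):
        w = np.exp(-((target - img) ** 2) / (2.0 * sigma ** 2))
        for lbl in np.unique(lab):
            # Each atlas votes for its label at every voxel, scaled by weight
            votes[lbl] = votes.get(lbl, 0) + w * (lab == lbl)
    keys = sorted(votes)
    stacked = np.stack([votes[k] for k in keys])
    return np.array(keys)[np.argmax(stacked, axis=0)]

# Toy 2x2 "images": the second atlas disagrees at one voxel but its intensity
# there matches the target poorly in the first atlas, so LWV overrides it
target = np.array([[100.0, 50.0], [100.0, 50.0]])
atlases = [np.array([[98.0, 52.0], [60.0, 49.0]]),
           np.array([[102.0, 90.0], [99.0, 51.0]])]
labels = [np.array([[1, 0], [0, 0]]),
          np.array([[1, 0], [1, 0]])]
seg = locally_weighted_voting(target, atlases, labels)
```

    In the paper's pipeline the same fusion step is applied twice, once after the volume/object-based registrations for bone and once after the bone-mask-based registration for cartilage.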

  4. Environmentally friendly microwave-assisted sequential extraction method followed by ICP-OES and ion-chromatographic analysis for rapid determination of sulphur forms in coal samples.

    PubMed

    Mketo, Nomvano; Nomngongo, Philiswa N; Ngila, J Catherine

    2018-05-15

    A rapid three-step sequential extraction method was developed under microwave radiation followed by inductively coupled plasma-optical emission spectroscopic (ICP-OES) and ion-chromatographic (IC) analysis for the determination of sulphur forms in coal samples. The experimental conditions of the proposed microwave-assisted sequential extraction (MW-ASE) procedure were optimized using multivariate mathematical tools. Pareto charts generated from a 2^3 full factorial design showed that extraction time has an insignificant effect on the extraction of sulphur species; therefore, all the sequential extraction steps were performed for 5 min. The optimum values according to the central composite designs and contour plots of the response surface methodology were 200 °C (microwave temperature) and 0.1 g (coal amount) for all the investigated extracting reagents (H2O, HCl and HNO3). When the optimum conditions of the proposed MW-ASE procedure were applied to coal CRMs, SARM 18 showed more organic sulphur (72%) and the other two coal CRMs (SARMs 19 and 20) were dominated by sulphide sulphur species (52-58%). The sum of the sulphur forms from the sequential extraction steps has shown consistent agreement (95-96%) with the certified total sulphur values on the coal CRM certificates. This correlation, in addition to the good precision (1.7%) achieved by the proposed procedure, suggests that the sequential extraction method is reliable, accurate and reproducible. To guard against the destruction of pyritic and organic sulphur forms in extraction step 1, water was used instead of HCl. Additionally, the notorious acidic mixture (HCl/HNO3/HF) was replaced by a greener reagent (H2O2) in the last extraction step. Therefore, the proposed MW-ASE method can be applied in routine laboratories for the determination of sulphur forms in coal and coal-related matrices. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Fractionation of metals by sequential extraction procedures (BCR and Tessier) in soil exposed to fire of wide temperature range

    NASA Astrophysics Data System (ADS)

    Fajkovic, Hana; Rončević, Sanda; Nemet, Ivan; Prohić, Esad; Leontić-Vazdar, Dana

    2017-04-01

    Forest fires present a serious problem, especially in the Mediterranean Region. The effects of fire are numerous, from climate change and deforestation to loss of soil organic matter and changes in soil properties. One effect, not well documented, is the possible redistribution and/or remobilisation of pollutants previously deposited in the soil, due to new physical and chemical soil properties and changed equilibrium conditions. To understand and predict the possible redistribution and/or remobilisation of potential pollutants from soil affected by fires of different temperatures, several laboratory investigations were carried out. To evaluate the influence of organic matter on soil under fire, three soil samples were analysed and compared: (a) soil with added coniferous organic matter; (b) soil with added deciduous organic matter; and (c) soil without additional organic matter. The type of organic matter is closely related to soil pH, and pH influences the mobility of some pollutants, e.g. metals. For that reason, pH was also measured throughout all experimental steps. Each of the mentioned soil samples (a, b and c) was kept at 25°C (control) or heated at one of three temperatures (200°C, 500°C and 850°C). After heating, whereby the effect of fire on soil was simulated, samples were analysed by the BCR protocol with the addition of the first step of the Tessier sequential extraction procedure and analysis of the residual fraction with aqua regia. Element fractionation of heavy metals by this procedure was used to determine the amounts of selected elements (Al, Cd, Cr, Co, Cu, Fe, Mn, Ni, Pb and Zn). The selected metal concentrations were determined using an inductively coupled plasma atomic emission spectrometer. Furthermore, the loss of organic matter was calculated after each heating procedure, as was the mineral composition, which was determined using X-ray diffraction.
From the obtained results, it can be concluded that temperature influences the concentration of elements in specific steps of the sequential extraction procedures. The first step of the Tessier and BCR extractions of samples heated at 200°C and 500°C showed an increasing trend in elemental concentrations. The results of these steps are especially important since they indicate the mobile fraction of the elements (the exchangeable, water- and acid-soluble fraction), which can easily affect the environment. Extraction of samples combusted at 850°C showed a decrease in measured elemental content. Some correlation was also noticed between the type of organic matter, pH and the concentrations of the analysed elements.

  6. The sequential structure of brain activation predicts skill.

    PubMed

    Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa

    2016-01-29

    In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole-brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features about information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Classical and sequential limit analysis revisited

    NASA Astrophysics Data System (ADS)

    Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi

    2018-04-01

    Classical limit analysis applies to ideal plastic materials within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that whereas classical limit analysis applies to materials exhibiting elasticity (in the absence of hardening and within a linearized geometrical framework), sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity, although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends on the importance of the elastic-plastic coupling in the specific case considered.

  8. The impact of eyewitness identifications from simultaneous and sequential lineups.

    PubMed

    Wright, Daniel B

    2007-10-01

    Recent guidelines in the US allow either simultaneous or sequential lineups to be used for eyewitness identification. This paper investigates how potential jurors weigh the probative value of the different outcomes from both of these types of lineups. Participants (n=340) were given a description of a case that included some exonerating and some incriminating evidence. There was either a simultaneous or a sequential lineup. Depending on the condition, an eyewitness chose the suspect, chose a filler, or made no identification. The participant had to judge the guilt of the suspect and decide whether to render a guilty verdict. For both simultaneous and sequential lineups an identification had a large effect, increasing the probability of a guilty verdict. There were no reliable effects detected between making no identification and identifying a filler. The effect sizes were similar for simultaneous and sequential lineups. These findings are important for judges and other legal professionals to know for trials involving lineup identifications.

  9. Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure

    USGS Publications Warehouse

    Salehi, M.; Smith, D.R.

    2005-01-01

    Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.
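
    The unbiasedness of Murthy's estimator that this design relies on can be checked by exhaustive enumeration in a tiny illustrative population (all values below are invented; the two-unit form of the estimator is the standard one for sequential draws without replacement):

```python
import math
from itertools import permutations

# Hypothetical population: values y_i and first-draw probabilities p_i
# (illustrative only, not data from the paper).
y = [10.0, 20.0, 30.0]
p = [0.2, 0.3, 0.5]
total = sum(y)

def murthy_estimate(i, j):
    """Murthy's unbiased estimator of the population total for an ordered
    sample (i, j) drawn sequentially without replacement (n = 2 case)."""
    num = (1 - p[j]) * y[i] / p[i] + (1 - p[i]) * y[j] / p[j]
    return num / (2 - p[i] - p[j])

# Expectation over all ordered samples; P(i first, j second) = p_i * p_j / (1 - p_i).
expectation = sum(
    p[i] * p[j] / (1 - p[i]) * murthy_estimate(i, j)
    for i, j in permutations(range(len(y)), 2)
)
print(expectation)  # equals the true total (60) up to floating-point error
```

Because the estimator conditions on the unordered sample, averaging it over every ordered draw sequence recovers the true total exactly, which is the property the paper exploits for its neighborhood-free design.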

  10. Osteomyelitis of the head and neck: sequential radionuclide scanning in diagnosis and therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, M.; Kaufman, R.A.; Baum, S.

    1985-01-01

    Sequential technetium and gallium scans of the head and neck were used to confirm the diagnosis of osteomyelitis and as an important therapeutic aid to delineate the transformation of active osteomyelitis to inactive osteomyelitis in 11 cases involving sites in the head and neck. Illustrative cases are presented of frontal sinus and cervical spine osteomyelitis and laryngeal osteochondritis.

  11. Determination of cadmium and lead in table salt by sequential multi-element flame atomic absorption spectrometry.

    PubMed

    Amorim, Fábio A C; Ferreira, Sérgio L C

    2005-02-28

    In the present paper, a simultaneous pre-concentration procedure for the sequential determination of cadmium and lead in table salt samples using flame atomic absorption spectrometry is proposed. This method is based on the liquid-liquid extraction of cadmium(II) and lead(II) ions as dithizone complexes and direct aspiration of the organic phase into the spectrometer. The sequential determination of cadmium and lead is possible using a computer program. The optimization step was performed by a two-level fractional factorial design involving the variables pH, dithizone mass, shaking time after addition of dithizone, and shaking time after addition of solvent. Within the studied levels, these variables were not significant. The established experimental conditions call for a sample volume of 250 mL and extraction using 4.0 mL of methyl isobutyl ketone. In this way, the procedure allows the determination of cadmium and lead in table salt samples with a pre-concentration factor higher than 80 and detection limits of 0.3 ng g(-1) for cadmium and 4.2 ng g(-1) for lead. The precision, expressed as relative standard deviation (n = 10), was 5.6 and 2.6% for cadmium concentrations of 2 and 20 ng g(-1), respectively, and 3.2 and 1.1% for lead concentrations of 20 and 200 ng g(-1), respectively. Recoveries of cadmium and lead in several samples, measured by the standard addition technique, also proved that this procedure is not affected by the matrix and can be applied satisfactorily to the determination of cadmium and lead in saline samples. The method was applied to evaluate the concentrations of cadmium and lead in table salt samples consumed in Salvador City, Bahia, Brazil.

  12. Variable criteria sequential stopping rule: Validity and power with repeated measures ANOVA, multiple correlation, MANOVA and relation to Chi-square distribution.

    PubMed

    Fitts, Douglas A

    2017-09-21

    The variable criteria sequential stopping rule (vcSSR) is an efficient way to add sample size to planned ANOVA tests while holding the observed rate of Type I errors, α_o, constant. The only difference from regular null hypothesis testing is that criteria for stopping the experiment are obtained from a table based on the desired power, rate of Type I errors, and beginning sample size. The vcSSR was developed using between-subjects ANOVAs, but it should work with p values from any type of F test. In the present study, α_o remained constant at the nominal level when using the previously published table of criteria with repeated measures designs with various numbers of treatments per subject, Type I error rates, values of ρ, and four different sample size models. New power curves allow researchers to select the optimal sample size model for a repeated measures experiment. The criteria held α_o constant either when used with a multiple correlation that varied the sample size model and the number of predictor variables, or when used with MANOVA with multiple groups and two levels of a within-subject variable at various levels of ρ. Although not recommended for use with χ² tests such as the Friedman rank ANOVA test, the vcSSR produces predictable results based on the relation between F and χ². Together, the data confirm the view that the vcSSR can be used to control Type I errors during sequential sampling with any t- or F-statistic rather than being restricted to certain ANOVA designs.
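
    The problem the vcSSR solves, namely that naively adding subjects and re-testing at a fixed α inflates the Type I error rate, is easy to demonstrate by simulation. The sketch below uses a two-sample z-test with known unit variance for simplicity; the look schedule and α are illustrative, not the vcSSR's tabled criteria:

```python
import math
import random

random.seed(1)

def z_test_p(group_a, group_b):
    """Two-sided p-value for a two-sample z-test with known unit variance."""
    n = len(group_a)
    z = (sum(group_a) - sum(group_b)) / math.sqrt(2 * n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

looks = [10, 20, 30]   # per-group sample sizes at each interim look (illustrative)
alpha = 0.05
n_sims = 5000
rejections = 0
for _ in range(n_sims):
    a = [random.gauss(0, 1) for _ in range(max(looks))]
    b = [random.gauss(0, 1) for _ in range(max(looks))]
    # Naive sequential testing under H0: reject as soon as any look gives p < alpha.
    if any(z_test_p(a[:n], b[:n]) < alpha for n in looks):
        rejections += 1

rate = rejections / n_sims
print(rate)  # well above the nominal 0.05 -- roughly 0.10 with three looks
```

The vcSSR replaces the single fixed α with tabled lower and upper stopping criteria chosen so that this observed rate stays at the nominal level.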

  13. Sulfur K-edge XANES and acid volatile sulfide analyses of changes in chemical speciation of S and Fe during sequential extraction of trace metals in anoxic sludge from biogas reactors.

    PubMed

    Shakeri Yekta, Sepehr; Gustavsson, Jenny; Svensson, Bo H; Skyllberg, Ulf

    2012-01-30

    The effect of sequential extraction of trace metals on sulfur (S) speciation in anoxic sludge samples from two lab-scale biogas reactors augmented with Fe was investigated. Sulfur K-edge X-ray absorption near edge structure (S XANES) spectroscopy and acid volatile sulfide (AVS) analyses were conducted on the residues from each step of the sequential extraction. The S speciation in sludge samples after AVS analysis was also determined by S XANES. Sulfur was mainly present as FeS (≈ 60% of total S) and reduced organic S (≈ 30% of total S), such as organic sulfide and thiol groups, in the anoxic solid phase. Sulfur XANES and AVS analyses showed that during the first step of the extraction procedure (the removal of exchangeable cations), a part of the FeS fraction corresponding to 20% of total S was transformed to zero-valent S, whereas Fe was not released into the solution during this transformation. After the last extraction step (organic/sulfide fraction), a secondary Fe phase was formed. The changes in chemical speciation of S and Fe occurring during the sequential extraction procedure suggest indirect effects on trace metals associated with the FeS fraction that may lead to incorrect results. Furthermore, S XANES verified that the AVS analysis effectively removed the FeS fraction. The present results identify critical limitations for the application of sequential extraction to trace metal speciation analysis outside the framework for which the methods were developed. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. A comparison of sequential and spiral scanning techniques in brain CT.

    PubMed

    Pace, Ivana; Zarb, Francis

    2015-01-01

    To evaluate and compare image quality and radiation dose of sequential computed tomography (CT) examinations of the brain and spiral CT examinations of the brain imaged on a GE HiSpeed NX/I Dual Slice 2CT scanner. A random sample of 40 patients referred for CT examination of the brain was selected and divided into 2 groups. Half of the patients were scanned using the sequential technique; the other half were scanned using the spiral technique. Radiation dose data—both the computed tomography dose index (CTDI) and the dose length product (DLP)—were recorded on a checklist at the end of each examination. Using the European Guidelines on Quality Criteria for Computed Tomography, 4 radiologists conducted a visual grading analysis and rated the level of visibility of 6 anatomical structures considered necessary to produce images of high quality. The mean CTDI(vol) and DLP values were statistically significantly higher (P <.05) with the sequential scans (CTDI(vol): 22.06 mGy; DLP: 304.60 mGy • cm) than with the spiral scans (CTDI(vol): 14.94 mGy; DLP: 229.10 mGy • cm). The mean image quality rating scores for all criteria of the sequential scanning technique were statistically significantly higher (P <.05) in the visual grading analysis than those of the spiral scanning technique. In this local study, the sequential technique was preferred over the spiral technique for both overall image quality and differentiation between gray and white matter in brain CT scans. Other similar studies counter this finding. The radiation dose seen with the sequential CT scanning technique was significantly higher than that seen with the spiral CT scanning technique. However, image quality with the sequential technique was statistically significantly superior (P <.05).

  15. The use of sequential extraction to evaluate the remediation potential of heavy metals from contaminated harbour sediment

    NASA Astrophysics Data System (ADS)

    Nystrøm, G. M.; Ottosen, L. M.; Villumsen, A.

    2003-05-01

    In this work, sequential extraction is performed on harbour sediment in order to evaluate the electrodialytic remediation potential of harbour sediments. Sequential extraction was performed on a sample of Norwegian harbour sediment, both on the original sediment and after the sediment was treated with acid. The results from the sequential extraction show that 75% of Zn and Pb and about 50% of Cu are found in the most mobile phases in the original sediment, and more than 90% of Zn and Pb and 75% of Cu are found in the most mobile phases in the acid-treated sediment. Electrodialytic remediation experiments were also performed. The method uses a low direct current as the cleaning agent, removing the heavy metals towards the anode and cathode according to the charge of the heavy metals in the electric field. The electrodialytic experiments show that up to 50% Cu, 85% Zn and 60% Pb can be removed after 20 days. Thus, there is still potential for higher removal with some changes to the experimental set-up and a longer remediation time. The experiments show that sequential extraction can be used to predict the electrodialytic remediation potential of harbour sediments.

  16. Investigation of Mercury Wet Deposition Physicochemistry in the Ohio River Valley through Automated Sequential Sampling

    EPA Science Inventory

    Intra-storm variability and soluble fractionation was explored for summer-time rain events in Steubenville, Ohio to evaluate the physical processes controlling mercury (Hg) in wet deposition in this industrialized region. Comprehensive precipitation sample collection was conducte...

  17. Use of High-Resolution Continuum Source Flame Atomic Absorption Spectrometry (HR-CS FAAS) for Sequential Multi-Element Determination of Metals in Seawater and Wastewater Samples

    NASA Astrophysics Data System (ADS)

    Peña-Vázquez, E.; Barciela-Alonso, M. C.; Pita-Calvo, C.; Domínguez-González, R.; Bermejo-Barrera, P.

    2015-09-01

    The objective of this work was to develop a method for the determination of metals in saline matrices using high-resolution continuum source flame atomic absorption spectrometry (HR-CS FAAS). The SFS 6 module for sample injection was used in manual mode, and flame operating conditions were selected. The main absorption lines were used for all the elements, and the number of selected analytical pixels was 5 (CP±2) for Cd, Cu, Fe, Ni, Pb and Zn, and 3 (CP±1) for Mn. Samples were acidified (0.5% (v/v) nitric acid), and the standard addition method was used for the sequential determination of the analytes in diluted samples (1:2). The method showed good precision (RSD < 4%, except for Pb (6.5%)) and good recoveries. Accuracy was checked by analysis of the SPS-WW2 wastewater reference material diluted with synthetic seawater (dilution 1:2), showing good agreement between certified and experimental results.
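
    The standard addition method used here can be illustrated with a short least-squares fit: the signal is measured for the diluted sample alone and after spiking with known analyte amounts, and the unknown concentration is the magnitude of the x-intercept of the fitted line. The numbers below are synthetic, not from the paper:

```python
# Synthetic standard-addition data (illustrative):
# true unknown concentration c0 = 2.0 (arbitrary units), sensitivity k = 0.5.
added = [0.0, 1.0, 2.0, 4.0]                 # spiked concentrations
signal = [0.5 * (2.0 + x) for x in added]    # absorbance = k * (c0 + added)

# Ordinary least squares for the straight line: signal = a + b * added.
n = len(added)
xbar = sum(added) / n
ybar = sum(signal) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(added, signal)) / \
    sum((x - xbar) ** 2 for x in added)
a = ybar - b * xbar

c0_estimate = a / b  # magnitude of the x-intercept = unknown concentration
print(c0_estimate)   # → 2.0
```

Because the calibration line is built inside the sample matrix itself, this approach compensates for the matrix effects that make saline samples difficult for external calibration.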

  18. Hybrid Model Predictive Control for Sequential Decision Policies in Adaptive Behavioral Interventions.

    PubMed

    Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S

    2014-06-01

    Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.

  19. The Development of Molybdenum Speciation as a Paleoredox Tool

    NASA Astrophysics Data System (ADS)

    Rodley, J.; Peacock, C.; Mosselmans, J. F. W.; Poulton, S.

    2017-12-01

    The redox state of the oceans has changed throughout geological time, and an understanding of these changes is essential to elucidate links between ocean chemistry, climate and life. Due to its abundance in seawater and redox-sensitive nature, molybdenum has enormous potential as a paleoredox proxy. Although a significant amount of research has been done on molybdenum in ancient and modern sediments in terms of its concentrations and isotopic ratios, there remains a limited understanding of the drawdown mechanisms of molybdenum under different redox conditions, restricting its use in identifying a range of redox states. In order to address these uncertainties, we have developed a novel sequential extraction technique to examine molybdenum concentrations in six sediment fractions from modern samples that represent oxic, nitrogenous, ferruginous and euxinic environments. In addition, we use µ-XRF and µ-XANES synchrotron spectroscopy to examine the molybdenum speciation within these fractions and environments. To interpret our µ-XANES data we have developed an extensive library of molybdenum XANES standards that represent molybdenum sequestration by the sediment fractions identified from the sequential extraction. To further verify our synchrotron results, we developed a series of µ-XANES micro-column experiments to examine preferential uptake pathways of molybdenum to different sediment phases under a euxinic water column. The initial data from both the sequential extraction and µ-XANES methods indicate that molybdenum is not limited to a single burial pathway in any of the redox environments. We find that each of the redox environments can be characterised by a limited set of molybdenum phase associations, with molybdenum adsorption to pyrite likely the dominant burial pathway. These findings agree with existing research for molybdenum speciation in euxinic environments, suggesting that both pyrite and sulphidised organic matter act as important molybdenum sinks.
Our new research shows that pyrite is also an important sink for molybdenum in other redox environments.

  20. Structural characterization of polysaccharides from bamboo

    NASA Astrophysics Data System (ADS)

    Kamil, Ruzaimah Nik Mohamad; Yusuf, Nur'aini Raman; Yunus, Normawati M.; Yusup, Suzana

    2014-10-01

    The alkaline- and water-soluble polysaccharides were isolated by sequential extractions with distilled water and with 60% ethanol containing 1%, 5% and 8% NaOH. The samples were prepared at 60 °C for 3 h from local bamboo. The functional groups of the samples were examined using FTIR analysis. The largest amount of precipitate, with a yield of 2.6%, was obtained using 60% ethanol containing 8% NaOH. The first three residues, isolated by sequential extractions with distilled water and with 60% ethanol containing 1% and 5% NaOH, were barely visible after filtering with cellulose filter paper. The FTIR results showed that the water-soluble polysaccharides consisted mainly of O-H, C-H and C-O groups, indicating carbohydrate and sugar chains. The sample weight loss decreased slightly with increasing temperature.

  1. Sequential capillary electrophoresis analysis using optically gated sample injection and UV/vis detection.

    PubMed

    Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li

    2015-10-01

    We present sequential CE analysis of amino acids and of an L-asparaginase-catalyzed enzyme reaction, combining on-line derivatization, optically gated (OG) injection and commercially available UV-Vis detection. Various experimental conditions for sequential OG-UV/vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated, with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (for asparagine) and 2.0 μM (for aspartic acid) were obtained. Applying the OG-UV/vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed reaction was carried out by automatically and continuously monitoring the substrate consumption and the product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrates the feasibility and reliability of integrating OG injection with UV/vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
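
    Extracting a Michaelis constant from the kind of continuously monitored rate data described above amounts to fitting the Michaelis-Menten equation v = Vmax·[S]/(Km + [S]); one classical route is the Lineweaver-Burk linearization. A sketch with synthetic, noise-free rates (all values invented):

```python
# Synthetic initial rates at several substrate concentrations, generated
# from v = Vmax*S/(Km + S) with Vmax = 10, Km = 2 (illustrative values).
Vmax_true, Km_true = 10.0, 2.0
S = [0.5, 1.0, 2.0, 5.0, 10.0]
v = [Vmax_true * s / (Km_true + s) for s in S]

# Lineweaver-Burk linearization: 1/v = (Km/Vmax)*(1/S) + 1/Vmax.
x = [1.0 / s for s in S]
y = [1.0 / vi for vi in v]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
slope = sum((a - xbar) * (b - ybar) for a, b in zip(x, y)) / \
        sum((a - xbar) ** 2 for a in x)
intercept = ybar - slope * xbar

Vmax = 1.0 / intercept
Km = slope * Vmax
print(Vmax, Km)  # recovers 10.0 and 2.0 (up to floating-point error)
```

On real, noisy data a direct nonlinear fit of the Michaelis-Menten equation is usually preferred, since the double-reciprocal transform amplifies error at low substrate concentrations.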

  2. Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity

    DOE PAGES

    Gordiz, Kiarash; Singh, David J.; Henry, Asegun

    2015-01-29

    In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach and show that both can reduce the total simulation time compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first-principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially, and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times with similar overall computational effort.
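
    The trade-off between one long sequential trajectory and many independent ensemble members can be mimicked with a toy stochastic process standing in for molecular dynamics. The sketch below (my illustration, not the authors' method) uses an Ornstein-Uhlenbeck process, whose stationary variance σ²/(2θ) is known analytically, and estimates it both ways:

```python
import random

random.seed(7)
theta, sigma, dt = 1.0, 2.0 ** 0.5, 0.01   # stationary variance = sigma^2/(2*theta) = 1
sqrt_dt = dt ** 0.5

def step(x):
    """One Euler-Maruyama step of the Ornstein-Uhlenbeck process."""
    return x - theta * x * dt + sigma * sqrt_dt * random.gauss(0, 1)

# Time sampling: one long trajectory, averaged sequentially after burn-in.
x, acc, n_steps, burn = 0.0, 0.0, 50_000, 1_500
for i in range(burn + n_steps):
    x = step(x)
    if i >= burn:
        acc += x * x
time_avg = acc / n_steps

# Ensemble sampling: many short, independent trajectories (each parallelizable),
# equilibrated past burn-in and averaged across members.
members = 300
ens_acc = 0.0
for _ in range(members):
    x = 0.0
    for _ in range(burn):
        x = step(x)
    ens_acc += x * x
ens_avg = ens_acc / members

print(time_avg, ens_avg)  # both approach the analytic value 1.0
```

The time average needs the steps run in sequence, while each ensemble member could run on its own processor, which is exactly the argument the abstract makes for massively parallel architectures.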

  3. Some sequential, distribution-free pattern classification procedures with applications

    NASA Technical Reports Server (NTRS)

    Poage, J. L.

    1971-01-01

    Some sequential, distribution-free pattern classification techniques are presented. The decision problem to which the proposed classification methods are applied is that of discriminating between two kinds of electroencephalogram responses recorded from a human subject: spontaneous EEG and EEG driven by a stroboscopic light stimulus at the alpha frequency. The classification procedures proposed make use of the theory of order statistics. Estimates of the probabilities of misclassification are given. The procedures were tested on Gaussian samples and the EEG responses.

  4. Low-dose cerebral perfusion computed tomography image restoration via low-rank and total variation regularizations

    PubMed Central

    Niu, Shanzhou; Zhang, Shanli; Huang, Jing; Bian, Zhaoying; Chen, Wufan; Yu, Gaohang; Liang, Zhengrong; Ma, Jianhua

    2016-01-01

    Cerebral perfusion x-ray computed tomography (PCT) is an important functional imaging modality for evaluating cerebrovascular diseases and has been widely used in clinics over the past decades. However, due to the protocol of PCT imaging with repeated dynamic sequential scans, the associated radiation dose unavoidably increases as compared with that used in conventional CT examinations. Minimizing the radiation exposure in PCT examination is a major task in the CT field. In this paper, considering the rich similarity redundancy information among enhanced sequential PCT images, we propose a low-dose PCT image restoration model that incorporates the low-rank and sparse matrix characteristics of sequential PCT images. Specifically, the sequential PCT images were first stacked into a matrix (i.e., a low-rank matrix), and a non-convex spectral norm regularization and a spatio-temporal total variation regularization were then built on the low-rank matrix to describe the low rank and sparsity of the sequential PCT images, respectively. Subsequently, an improved split Bregman method was adopted to minimize the associated objective function with a reasonable convergence rate. Both qualitative and quantitative studies were conducted using a digital phantom and clinical cerebral PCT datasets to evaluate the present method. Experimental results show that the presented method can achieve images with several noticeable advantages over the existing methods in terms of noise reduction and universal quality index. More importantly, the present method can produce more accurate kinetic enhanced details and diagnostic hemodynamic parameter maps. PMID:27440948
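
    The low-rank part of such a model is typically handled inside the split Bregman loop by a singular value shrinkage step, the proximal operator of the nuclear norm. The paper's non-convex spectral penalty modifies the shrinkage rule, but the convex soft-thresholding version sketched below (with invented data) conveys the idea:

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: the prox of tau * (nuclear norm).
    Shrinks every singular value of M toward zero by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(0)
# A noisy matrix whose columns share structure, like stacked sequential frames:
# a rank-2 signal plus small Gaussian noise.
low_rank = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 8))
noisy = low_rank + 0.05 * rng.standard_normal((30, 8))

denoised = svt(noisy, tau=1.0)
# Thresholding suppresses the small, noise-dominated singular values,
# so the result is numerically close to the underlying rank-2 signal.
print(np.linalg.matrix_rank(denoised, tol=1e-6))
```

In the full algorithm this step alternates with the total-variation update and the Bregman variable updates, each subproblem being cheap on its own.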

  5. Group sequential designs for stepped-wedge cluster randomised trials

    PubMed Central

    Grayling, Michael J; Wason, James MS; Mander, Adrian P

    2017-01-01

    Background/Aims: The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Methods: Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. Results: We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial’s type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. Conclusion: The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial. PMID:28653550

  6. Group sequential designs for stepped-wedge cluster randomised trials.

    PubMed

    Grayling, Michael J; Wason, James Ms; Mander, Adrian P

    2017-10-01

    The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into stepped-wedge cluster randomised trials according to the needs of the particular trial.
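
    The error spending approach used in this record can be sketched numerically. The rho-family spending function below is an illustrative choice (the abstract does not commit to a specific family); it allocates the overall type-I error rate across interim looks at given information fractions:

```python
def rho_spending(alpha, t, rho=2.0):
    """Cumulative type-I error spent at information fraction t,
    using the rho-family spending function f(t) = alpha * t**rho."""
    return alpha * min(max(t, 0.0), 1.0) ** rho

def incremental_alpha(alpha, fractions, rho=2.0):
    """Type-I error available at each interim look, given the planned
    information fractions (the last entry should be 1.0)."""
    spent, increments = 0.0, []
    for t in fractions:
        cum = rho_spending(alpha, t, rho)
        increments.append(cum - spent)
        spent = cum
    return increments

# Example: three looks at 1/3, 2/3 and full information
looks = incremental_alpha(0.05, [1/3, 2/3, 1.0])
```

    A convex choice (rho > 1) spends little alpha at early looks, the convention for clinical trials noted elsewhere in these records; by construction the increments always sum to the overall alpha.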

  7. Simplified pupal surveys of Aedes aegypti (L.) for entomologic surveillance and dengue control.

    PubMed

    Barrera, Roberto

    2009-07-01

    Pupal surveys of Aedes aegypti (L.) are useful indicators of risk for dengue transmission, although sample sizes for reliable estimation can be large. This study explores two methods for making pupal surveys more practical yet reliable, using data from 10 pupal surveys conducted in Puerto Rico during 2004-2008. The number of pupae per person for each sampling followed a negative binomial distribution, thus showing aggregation. One method found a common aggregation parameter (k) for the negative binomial distribution, a finding that enabled the application of a sequential sampling method requiring few samples to determine whether the number of pupae/person was above a vector density threshold for dengue transmission. A second approach used the finding that the mean number of pupae/person is correlated with the proportion of pupa-infested households and calculated equivalent threshold proportions of pupa-positive households. A sequential sampling program was also developed for this method to determine whether observed proportions of infested households were above threshold levels. These methods can be used to validate entomological thresholds for dengue transmission.
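
    A sequential classification plan of the kind described (deciding whether pupae/person is above a threshold, assuming negative binomial counts with a common k) can be sketched as a Wald SPRT; the means, dispersion and error rates below are illustrative placeholders, not the study's fitted values:

```python
import math

def negbin_sprt(counts, m0, m1, k, alpha=0.05, beta=0.05):
    """Wald SPRT classifying the mean count as at most m0 or at least m1,
    assuming negative binomial observations with common dispersion k.
    Returns ('accept_H0' | 'accept_H1' | 'continue', samples used)."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    slope = math.log((m1 * (k + m0)) / (m0 * (k + m1)))  # per-count term
    shift = k * math.log((k + m0) / (k + m1))            # per-sample term
    llr = 0.0
    for n, x in enumerate(counts, start=1):
        llr += x * slope + shift   # cumulative log-likelihood ratio
        if llr >= upper:
            return "accept_H1", n
        if llr <= lower:
            return "accept_H0", n
    return "continue", len(counts)
```

    Sampling stops as soon as the cumulative log-likelihood ratio crosses either Wald boundary, which is what keeps the required number of household samples small.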

  8. Parallelization of sequential Gaussian, indicator and direct simulation algorithms

    NASA Astrophysics Data System (ADS)

    Nunes, Ruben; Almeida, José A.

    2010-08-01

    Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
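
    The inner loop of sequential Gaussian simulation (random visiting path, kriging from previously simulated nodes, then a conditional draw) can be sketched in miniature. This unconditional 1-D toy uses an exponential covariance and simple kriging against all earlier nodes; it omits GSLIB's neighborhood search and data conditioning, but shows the sequential structure that makes parallelization nontrivial:

```python
import numpy as np

def sgs_1d(n=10, a=3.0, seed=0):
    """Toy unconditional sequential Gaussian simulation on a 1-D grid."""
    rng = np.random.default_rng(seed)
    cov = lambda h: np.exp(-np.abs(h) / a)   # exponential covariance, unit sill
    x = np.arange(n, dtype=float)
    z = np.full(n, np.nan)
    for idx in rng.permutation(n):           # random visiting path
        known = np.where(~np.isnan(z))[0]
        if known.size == 0:
            mean, var = 0.0, 1.0
        else:
            C = cov(x[known, None] - x[None, known])
            c0 = cov(x[known] - x[idx])
            w = np.linalg.solve(C, c0)       # simple kriging weights (known mean 0)
            mean = w @ z[known]
            var = max(1.0 - w @ c0, 1e-12)   # simple kriging variance
        z[idx] = mean + rng.normal(0.0, np.sqrt(var))
    return z
```

    Because each node conditions on all previously simulated nodes, naive parallel execution changes the results; this dependency is exactly what the parallelization strategies discussed in the paper must work around.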

  9. Spatial distribution and sequential sampling plans for Tuta absoluta (Lepidoptera: Gelechiidae) in greenhouse tomato crops.

    PubMed

    Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino

    2015-09-01

    The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample to evaluate the pest density. Action thresholds of 36 and 48%, 43 and 56% and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits respectively. Green's method was a more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with probabilities of error of <0.10. The enumerative sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits respectively. © 2014 Society of Chemical Industry.
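
    Green's enumerative plan mentioned above stops sampling once a fixed relative precision D of the mean is reached. For negative binomial counts with dispersion k, the stop line can be sketched as below (the k and D values in the example are placeholders, not the study's estimates):

```python
def greens_stop(counts, k, D=0.25):
    """Green's fixed-precision sequential plan for negative binomial counts
    with dispersion k: stop once the cumulative count T_n reaches
    n / (n*D**2 - 1/k), i.e. once SE(mean)/mean is at most D.
    Returns (samples used, mean estimate), or (None, None) if the
    stop line is never reached within the supplied counts."""
    total = 0.0
    for n, x in enumerate(counts, start=1):
        total += x
        denom = n * D**2 - 1.0 / k
        if denom > 0 and total >= n / denom:
            return n, total / n
    return None, None
```

    The stop line follows from Var(mean) = (m + m**2/k)/n: requiring SE/mean <= D and substituting the running mean T_n/n gives T_n >= n / (n*D**2 - 1/k), so dense infestations terminate sampling early while sparse ones demand more leaves.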

  10. Towards efficient multi-scale methods for monitoring sugarcane aphid infestations in sorghum

    USDA-ARS?s Scientific Manuscript database

    We discuss approaches and issues involved with developing optimal monitoring methods for sugarcane aphid infestations (SCA) in grain sorghum. We discuss development of sequential sampling methods that allow for estimation of the number of aphids per sample unit, and statistical decision making rela...

  11. Precipitation as a chemical and meteorological phenomenon

    Treesearch

    Francis J. Berlandi; Donald G. Muldoon; Harvey S. Rosenblum; Lloyd L. Schulman

    1976-01-01

    Sequential rain and snow sampling has been performed at Burlington and Concord, Massachusetts. The samples were collected during 1974 and 1975 in one-quarter inch and one inch rain equivalents, and chemical analysis was performed on the aliquots. Meteorological data were documented at the time of collection.

  12. A sequential test for assessing observed agreement between raters.

    PubMed

    Bersimis, Sotiris; Sachlas, Athanasios; Chakraborti, Subha

    2018-01-01

    Assessing the agreement between two or more raters is an important topic in medical practice. Existing techniques, which deal with categorical data, are based on contingency tables. This is often an obstacle in practice, as we have to wait a long time to collect the appropriate sample size of subjects to construct the contingency table. In this paper, we introduce a nonparametric sequential test for assessing agreement, which can be applied as data accrue and does not require a contingency table, facilitating a rapid assessment of the agreement. The proposed test is based on the cumulative sum of the number of disagreements between the two raters and a suitable statistic representing the waiting time until the cumulative sum exceeds a predefined threshold. We treat the cases of testing two raters' agreement with respect to one or more characteristics and using two or more classification categories, the case where the two raters extremely disagree, and finally the case of testing more than two raters' agreement. The numerical investigation shows that the proposed test has excellent performance. Compared to the existing methods, the proposed method appears to require a significantly smaller sample size with equivalent power. Moreover, the proposed method is easily generalizable and brings the problem of assessing the agreement between two or more raters and one or more characteristics under a unified framework, thus providing an easy-to-use tool for medical practitioners. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
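
    The core idea of monitoring the cumulative count of disagreements as data accrue can be sketched as follows; this is an illustrative monitor with a hypothetical threshold h, not the authors' exact waiting-time statistic:

```python
def sequential_agreement(pairs, h=5):
    """Process (rater1, rater2) classification pairs as they arrive and
    flag lack of agreement as soon as the cumulative number of
    disagreements reaches the threshold h.
    Returns ('disagree', n) at the crossing, or ('agree', n) if the
    stream ends first."""
    disagreements = 0
    n = 0
    for n, (r1, r2) in enumerate(pairs, start=1):
        disagreements += (r1 != r2)
        if disagreements >= h:
            return "disagree", n
    return "agree", n
```

    Unlike a contingency-table analysis, the decision can be reached mid-stream: the monitor stops at the first subject whose disagreement pushes the cumulative sum over h, which is the source of the sample-size savings the abstract reports.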

  13. The Effects of Evidence Bounds on Decision-Making: Theoretical and Empirical Developments

    PubMed Central

    Zhang, Jiaxiang

    2012-01-01

    Converging findings from behavioral, neurophysiological, and neuroimaging studies suggest an integration-to-boundary mechanism governing decision formation and choice selection. This mechanism is supported by sequential sampling models of choice decisions, which can implement statistically optimal decision strategies for selecting between multiple alternative options on the basis of sensory evidence. This review focuses on recent developments in understanding the evidence boundary, an important component of decision-making raised by experimental findings and models. The article starts by reviewing the neurobiology of perceptual decisions and several influential sequential sampling models, in particular the drift-diffusion model, the Ornstein–Uhlenbeck model and the leaky-competing-accumulator model. In the second part, the article examines how the boundary may affect a model’s dynamics and performance and to what extent it may improve a model’s fits to experimental data. In the third part, the article examines recent findings that support the presence and site of boundaries in the brain. The article considers two questions: (1) whether the boundary is a spontaneous property of neural integrators, or is controlled by dedicated neural circuits; (2) if the boundary is variable, what could be the driving factors behind boundary changes? The review brings together studies using different experimental methods in seeking answers to these questions, highlights psychological and physiological factors that may be associated with the boundary and its changes, and further considers the evidence boundary as a generic mechanism to guide complex behavior. PMID:22870070
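
    The integration-to-boundary mechanism reviewed here is commonly simulated by Euler-Maruyama integration of the drift-diffusion model; the parameter values below are illustrative defaults, not fits from the review:

```python
import math
import random

def ddm_trial(drift=0.3, bound=1.0, sigma=1.0, dt=0.001, max_t=5.0, rng=None):
    """One Euler-Maruyama drift-diffusion trial with symmetric evidence
    bounds at +/-bound. Returns (choice, rt): choice is +1 or -1 at a
    bound, or 0 if no bound is reached before max_t."""
    rng = rng or random.Random(0)
    x, t = 0.0, 0.0
    sd = sigma * math.sqrt(dt)     # noise scale per time step
    while t < max_t:
        x += drift * dt + rng.gauss(0.0, sd)   # accumulate noisy evidence
        t += dt
        if abs(x) >= bound:
            return (1 if x > 0 else -1), t
    return 0, max_t
```

    Raising `bound` trades speed for accuracy, which is why the placement and possible collapse of the boundary, the review's central question, matters for fitting behavioral data.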

  14. Rapid Sequential in Situ Multiplexing with DNA Exchange Imaging in Neuronal Cells and Tissues.

    PubMed

    Wang, Yu; Woehrstein, Johannes B; Donoghue, Noah; Dai, Mingjie; Avendaño, Maier S; Schackmann, Ron C J; Zoeller, Jason J; Wang, Shan Shan H; Tillberg, Paul W; Park, Demian; Lapan, Sylvain W; Boyden, Edward S; Brugge, Joan S; Kaeser, Pascal S; Church, George M; Agasti, Sarit S; Jungmann, Ralf; Yin, Peng

    2017-10-11

    To decipher the molecular mechanisms of biological function, it is critical to map the molecular composition of individual cells or, even more importantly, tissue samples in the context of their biological environment in situ. Immunofluorescence (IF) provides specific labeling for molecular profiling. However, conventional IF methods have finite multiplexing capabilities due to spectral overlap of the fluorophores. Various sequential imaging methods have been developed to circumvent this spectral limit but are not widely adopted due to the common limitation of requiring multiple rounds of slow (typically over 2 h at room temperature to overnight at 4 °C in practice) immunostaining. We present here a practical and robust method, which we call DNA Exchange Imaging (DEI), for rapid in situ spectrally unlimited multiplexing. This technique overcomes speed restrictions by allowing for single-round immunostaining with DNA-barcoded antibodies, followed by rapid (less than 10 min) buffer exchange of fluorophore-bearing DNA imager strands. The programmability of DEI allows us to apply it to diverse microscopy platforms (with Exchange Confocal, Exchange-SIM, Exchange-STED, and Exchange-PAINT demonstrated here) at multiple desired resolution scales (from ∼300 nm down to sub-20 nm). We optimized and validated the use of DEI in complex biological samples, including primary neuron cultures and tissue sections. These results collectively suggest DNA exchange as a versatile, practical platform for rapid, highly multiplexed in situ imaging, potentially enabling new applications ranging from basic science, to drug discovery, and to clinical pathology.

  15. Radiochemical determination of 241Am and Pu(alpha) in environmental materials.

    PubMed

    Warwick, P E; Croudace, I W; Oh, J S

    2001-07-15

    Americium-241 and plutonium determinations will become of greater importance over the coming decades as 137Cs and 241Pu decay. The impact of 137Cs on environmental chronology has been great, but its potency is waning as it decays and diffuses. Having 241Am and Pu as unequivocal markers for the 1963 weapon fallout maximum is important for short time scale environmental work, but a fast and reliable procedure is required for their separation. The developed method described here begins by digesting samples using a lithium borate fusion although an aqua regia leachate is also effective in many instances. Isolation of the Am and Pu is then achieved using a combination of extraction chromatography and conventional anion exchange chromatography. The whole procedure has been optimized, validated, and assessed for safety. The straightforwardness of this technique permits the analysis of large numbers of samples and makes 241Am-based techniques for high-resolution sediment accumulation rate studies attractive. In addition, the technique can be employed for the sequential measurement of Pu and Am in environmental surveillance programs, potentially reducing analytical costs and turnround times.

  16. Metabolic routes along digestive system of licorice: multicomponent sequential metabolism method in rat.

    PubMed

    Zhang, Lei; Zhao, Haiyu; Liu, Yang; Dong, Honghuan; Lv, Beiran; Fang, Min; Zhao, Huihui

    2016-06-01

    This study was conducted to establish the multicomponent sequential metabolism (MSM) method based on comparative analysis along the digestive system following oral administration of licorice (Glycyrrhiza uralensis Fisch., leguminosae), a traditional Chinese medicine widely used for harmonizing other ingredients in a formulae. The licorice water extract (LWE) dissolved in Krebs-Ringer buffer solution (1 g/mL) was used to carry out the experiments and the comparative analysis was performed using HPLC and LC-MS/MS methods. In vitro incubation, in situ closed-loop and in vivo blood sampling were used to measure the LWE metabolic profile along the digestive system. The incubation experiment showed that the LWE was basically stable in digestive juice. A comparative analysis presented the metabolic profile of each prototype and its corresponding metabolites then. Liver was the major metabolic organ for LWE, and the metabolism by the intestinal flora and gut wall was also an important part of the process. The MSM method was practical and could be a potential method to describe the metabolic routes of multiple components before absorption into the systemic blood stream. Copyright © 2015 John Wiley & Sons, Ltd.

  17. Ego Depletion in Real-Time: An Examination of the Sequential-Task Paradigm.

    PubMed

    Arber, Madeleine M; Ireland, Michael J; Feger, Roy; Marrington, Jessica; Tehan, Joshua; Tehan, Gerald

    2017-01-01

    Current research into self-control that is based on the sequential task methodology is currently at an impasse. The sequential task methodology involves completing a task that is designed to tax self-control resources which in turn has carry-over effects on a second, unrelated task. The current impasse is in large part due to the lack of empirical research that tests explicit assumptions regarding the initial task. Five studies test one key, untested assumption underpinning strength (finite resource) models of self-regulation: Performance will decline over time on a task that depletes self-regulatory resources. In the aftermath of high profile replication failures using a popular letter-crossing task and subsequent criticisms of that task, the current studies examined whether depletion effects would occur in real time using letter-crossing tasks that did not invoke habit-forming and breaking, and whether these effects were moderated by administration type (paper and pencil vs. computer administration). Sample makeup and sizes as well as response formats were also varied across the studies. The five studies yielded a clear and consistent pattern of increasing performance deficits (errors) as a function of time spent on task with generally large effects and in the fifth study the strength of negative transfer effects to a working memory task were related to individual differences in depletion. These results demonstrate that some form of depletion is occurring on letter-crossing tasks though whether an internal regulatory resource reservoir or some other factor is changing across time remains an important question for future research.

  18. Ego Depletion in Real-Time: An Examination of the Sequential-Task Paradigm

    PubMed Central

    Arber, Madeleine M.; Ireland, Michael J.; Feger, Roy; Marrington, Jessica; Tehan, Joshua; Tehan, Gerald

    2017-01-01

    Current research into self-control that is based on the sequential task methodology is currently at an impasse. The sequential task methodology involves completing a task that is designed to tax self-control resources which in turn has carry-over effects on a second, unrelated task. The current impasse is in large part due to the lack of empirical research that tests explicit assumptions regarding the initial task. Five studies test one key, untested assumption underpinning strength (finite resource) models of self-regulation: Performance will decline over time on a task that depletes self-regulatory resources. In the aftermath of high profile replication failures using a popular letter-crossing task and subsequent criticisms of that task, the current studies examined whether depletion effects would occur in real time using letter-crossing tasks that did not invoke habit-forming and breaking, and whether these effects were moderated by administration type (paper and pencil vs. computer administration). Sample makeup and sizes as well as response formats were also varied across the studies. The five studies yielded a clear and consistent pattern of increasing performance deficits (errors) as a function of time spent on task with generally large effects and in the fifth study the strength of negative transfer effects to a working memory task were related to individual differences in depletion. These results demonstrate that some form of depletion is occurring on letter-crossing tasks though whether an internal regulatory resource reservoir or some other factor is changing across time remains an important question for future research. PMID:29018390

  19. Monitoring variations of dimethyl sulfide and dimethylsulfoniopropionate in seawater and the atmosphere based on sequential vapor generation and ion molecule reaction mass spectrometry.

    PubMed

    Iyadomi, Satoshi; Ezoe, Kentaro; Ohira, Shin-Ichi; Toda, Kei

    2016-04-01

    To monitor the fluctuations of dimethyl sulfur compounds at the seawater/atmosphere interface, an automated system was developed based on sequential injection analysis coupled with vapor generation-ion molecule reaction mass spectrometry (SIA-VG-IMRMS). Using this analytical system, dissolved dimethyl sulfide (DMS(aq)) and dimethylsulfoniopropionate (DMSP), a precursor to DMS in seawater, were monitored together sequentially with atmospheric dimethyl sulfide (DMS(g)). A shift from the equilibrium point between DMS(aq) and DMS(g) results in the emission of DMS to the atmosphere. Atmospheric DMS emitted from seawater plays an important role as a source of cloud condensation nuclei, which influences the oceanic climate. Water samples were taken periodically and dissolved DMS(aq) was vaporized for analysis by IMRMS. After that, DMSP was hydrolyzed to DMS and acrylic acid, and analyzed in the same manner as DMS(aq). The vaporization behavior and hydrolysis of DMSP to DMS were investigated to optimize these conditions. Frequent (every 30 min) determination of the three components, DMS(aq)/DMSP (nanomolar) and DMS(g) (ppbv), was carried out by SIA-VG-IMRMS. Field analysis of the dimethyl sulfur compounds was undertaken at a coastal station, which succeeded in showing detailed variations of the compounds in a natural setting. Observed concentrations of the dimethyl sulfur compounds both in the atmosphere and seawater largely changed with time and similar variations were repeatedly observed over several days, suggesting diurnal variations in the DMS flux at the seawater/atmosphere interface.

  20. A Bayesian sequential processor approach to spectroscopic portal system decisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sale, K; Candy, J; Breitfeller, E

    The development of faster, more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
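
    The datum-by-datum Bayesian updating described above can be sketched with a deliberately simplified model: Poisson counts under a background rate versus a source-present rate, with the posterior probability of a source recomputed after every observation. The rates, prior, and alarm level are illustrative assumptions, not the processor's actual physics model:

```python
import math

def sequential_detector(counts, r0=2.0, r1=8.0, prior1=0.5, p_alarm=0.99):
    """Sequential Bayesian test between background rate r0 and
    source-present rate r1 for Poisson counts per time interval.
    Alarms as soon as P(source | data so far) reaches p_alarm."""
    log_odds = math.log(prior1 / (1 - prior1))
    for n, x in enumerate(counts, start=1):
        # log Poisson(x; r1) - log Poisson(x; r0); the x! terms cancel
        log_odds += x * math.log(r1 / r0) - (r1 - r0)
        p_source = 1.0 / (1.0 + math.exp(-log_odds))
        if p_source >= p_alarm:
            return "alarm", n
    return "no alarm", len(counts)
```

    As in the abstract, the decision is produced as soon as the data statistically justify it, rather than after a fixed counting interval.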

  1. The distribution of individual cabinet positions in coalition governments: A sequential approach

    PubMed Central

    Meyer, Thomas M.; Müller, Wolfgang C.

    2015-01-01

    Multiparty government in parliamentary democracies entails bargaining over the payoffs of government participation, in particular the allocation of cabinet positions. While most of the literature deals with the numerical distribution of cabinet seats among government parties, this article explores the distribution of individual portfolios. It argues that coalition negotiations are sequential choice processes that begin with the allocation of those portfolios most important to the bargaining parties. This induces conditionality in the bargaining process as choices of individual cabinet positions are not independent of each other. Linking this sequential logic with party preferences for individual cabinet positions, the authors of the article study the allocation of individual portfolios for 146 coalition governments in Western and Central Eastern Europe. The results suggest that a sequential logic in the bargaining process results in better predictions than assuming mutual independence in the distribution of individual portfolios. PMID:27546952

  2. Sequential patterns of essential trace elements composition in Gracilaria verrucosa and its generated products

    NASA Astrophysics Data System (ADS)

    Izzati, Munifatul; Haryanti, Sri; Parman, Sarjana

    2018-05-01

    Gracilaria is widely known as a source of essential trace elements. However, this red seaweed also has great potential to be developed into commercial products. This study examined the sequential pattern of essential trace element composition in fresh Gracilaria verrucosa and a selection of its generated products, namely extracted agar, Gracilaria salt and Gracilaria residue. The sample was collected from a brackish water pond located in the northern part of Semarang, Central Java. The collected sample was dried under the sun and subsequently processed into the aforementioned products. The Gracilaria salt was obtained by soaking the sun-dried Gracilaria overnight in fresh water; the resulting salt solution was then boiled, leaving crystal salt. Extracted agar was obtained with an alkali agar extraction method, and the remaining material was considered Gracilaria residue. The entire process was repeated 3 times. The composition of trace elements was examined using ICP-MS spectrometry, and the collected data were analyzed by single-factor ANOVA. The resulting sequential patterns of essential trace element composition were compared, with regular table salt used as a control. Results from this study revealed that Gracilaria verrucosa and all of its generated products share a similar pattern of essential trace element composition, where Mn>Zn>Cu>Mo. This pattern is also similar across different subspecies of Gracilaria from different locations and different seasons. However, Gracilaria salt has a distinctly different sequential pattern of essential trace element composition compared to table salt.

  3. Assessing the effect of sodium dichloroisocyanurate concentration on transfer of Salmonella enterica serotype Typhimurium in wash water for production of minimally processed iceberg lettuce (Lactuca sativa L.).

    PubMed

    Maffei, D F; Sant'Ana, A S; Monteiro, G; Schaffner, D W; Franco, B D G M

    2016-06-01

    This study evaluated the impact of sodium dichloroisocyanurate (5, 10, 20, 30, 40, 50 and 250 mg l(-1)) in wash water on transfer of Salmonella Typhimurium from contaminated lettuce to wash water and then to other noncontaminated lettuces washed sequentially in the same water. Experiments were designed mimicking the conditions commonly seen in minimally processed vegetable (MPV) processing plants in Brazil. The scenarios were as follows: (1) Washing one inoculated lettuce portion in nonchlorinated water, followed by washing 10 noninoculated portions sequentially. (2) Washing one inoculated lettuce portion in chlorinated water followed by washing five noninoculated portions sequentially. (3) Washing five inoculated lettuce portions in chlorinated water sequentially, followed by washing five noninoculated portions sequentially. (4) Washing five noninoculated lettuce portions in chlorinated water sequentially, followed by washing five inoculated portions sequentially and then by washing five noninoculated portions sequentially in the same water. Salm. Typhimurium transfer from inoculated lettuce to wash water and further dissemination to noninoculated lettuces occurred when nonchlorinated water was used (scenario 1). When chlorinated water was used (scenarios 2, 3 and 4), no measurable Salm. Typhimurium transfer occurred if the sanitizer was ≥10 mg l(-1). Use of sanitizers in correct concentrations is important to minimize the risk of microbial transfer during MPV washing. In this study, the impact of sodium dichloroisocyanurate in the wash water on transfer of Salmonella Typhimurium from inoculated lettuce to wash water and then to other noninoculated lettuces washed sequentially in the same water was evaluated. The use of chlorinated water, at concentrations above 10 mg l(-1), effectively prevented Salm. Typhimurium transfer under several different washing scenarios. Conversely, when nonchlorinated water was used, Salm. Typhimurium transfer occurred in at least 10 noninoculated batches of lettuce washed sequentially in the same water. © 2016 The Society for Applied Microbiology.

  4. Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.

    ERIC Educational Resources Information Center

    Glutting, Joseph J.; McDermott, Paul A.

    1990-01-01

    Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…

  5. Depression and Delinquency Covariation in an Accelerated Longitudinal Sample of Adolescents

    ERIC Educational Resources Information Center

    Kofler, Michael J.; McCart, Michael R.; Zajac, Kristyn; Ruggiero, Kenneth J.; Saunders, Benjamin E.; Kilpatrick, Dean G.

    2011-01-01

    Objectives: The current study tested opposing predictions stemming from the failure and acting out theories of depression-delinquency covariation. Method: Participants included a nationwide longitudinal sample of adolescents (N = 3,604) ages 12 to 17. Competing models were tested with cohort-sequential latent growth curve modeling to determine…

  6. Simultaneous determination of mequindox, quinocetone, and their major metabolites in chicken and pork by UPLC-MS/MS

    USDA-ARS?s Scientific Manuscript database

    This research presents a sensitive and confirmatory multi-residue method for mequindox (MEQ), quinocetone (QCT), and their 11 metabolites in chicken and pork samples. After extracted with acetonitrile-ethyl acetate, acidulated, and extracted again with ethyl acetate sequentially, each sample was pu...

  7. Thermogravimetric and differential thermal analysis of potassium bicarbonate contaminated cellulose

    Treesearch

    A. Broido

    1966-01-01

    When samples undergo a complicated set of simultaneous and sequential reactions, as cellulose does on heating, results of thermogravimetric and differential thermal analyses are difficult to interpret. Nevertheless, careful comparison of pure and contaminated samples, pyrolyzed under identical conditions, can yield useful information. In these experiments TGA and DTA...

  8. ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, PCB DETECTION TECHNOLOGY, HYBRIZYME DELFIA TM ASSAY

    EPA Science Inventory

    The DELFIA PCB Assay is a solid-phase time-resolved fluoroimmunoassay based on the sequential addition of sample extract and europium-labeled PCB tracer to a monoclonal antibody reagent specific for PCBs. In this assay, the antibody reagent and sample extract are added to a strip...

  9. Semiautomatic sequential extraction of polycyclic aromatic hydrocarbons and elemental bio-accessible fraction by accelerated solvent extraction on a single particulate matter sample.

    PubMed

    Astolfi, Maria Luisa; Di Filippo, Patrizia; Gentili, Alessandra; Canepari, Silvia

    2017-11-01

    We describe the optimization and validation of a sequential extractive method for the determination of the polycyclic aromatic hydrocarbons (PAHs) and elements (Al, As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, Se, V and Zn) that are chemically fractionated into bio-accessible and mineralized residual fractions on a single particulate matter filter. The extraction is performed by automatic accelerated solvent extraction (ASE); samples are sequentially treated with dichloromethane/acetone (4:1) for PAHs extraction and acetate buffer (0.01 M; pH 4.5) for elements extraction (bio-accessible fraction). The remaining solid sample is then collected and subjected to acid digestion with HNO₃:H₂O₂ (2:1) to determine the mineralized residual element fraction. We also describe a homemade ASE cell that reduces the blank values for most elements; in this cell, the steel frit was replaced by a Teflon pierced disk and a Teflon cylinder was used as the filler. The performance of the proposed method was evaluated in terms of recovery from standard reference material (SRM 1648 and SRM 1649a) and repeatability. The equivalence between the new ASE method and conventional methods was verified for PAHs and for bio-accessible and mineralized residual fractions of elements on PM₁₀ twin filters. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Hoping for More: The Influence of Outcome Desirability on Information Seeking and Predictions about Relative Quantities

    ERIC Educational Resources Information Center

    Scherer, Aaron M.; Windschitl, Paul D.; O'Rourke, Jillian; Smith, Andrew R.

    2012-01-01

    People must often engage in sequential sampling in order to make predictions about the relative quantities of two options. We investigated how directional motives influence sampling selections and resulting predictions in such cases. We used a paradigm in which participants had limited time to sample items and make predictions about which side of…

  11. Mercury in Environmental and Biological Samples Using Online Combustion with Sequential Atomic Absorption and Fluorescence Measurements: A Direct Comparison of Two Fundamental Techniques in Spectrometry

    ERIC Educational Resources Information Center

    Cizdziel, James V.

    2011-01-01

    In this laboratory experiment, students quantitatively determine the concentration of an element (mercury) in an environmental or biological sample while comparing and contrasting the fundamental techniques of atomic absorption spectrometry (AAS) and atomic fluorescence spectrometry (AFS). A mercury analyzer based on sample combustion,…

  12. SEDPAK—A comprehensive operational system and data-processing package in APPLESOFT BASIC for a settling tube, sediment analyzer

    NASA Astrophysics Data System (ADS)

    Goldbery, R.; Tehori, O.

    SEDPAK provides a comprehensive software package for operation of a settling tube and sand analyzer (2-0.063 mm) and includes data-processing programs for statistical and graphic output of results. The programs are menu-driven and written in APPLESOFT BASIC, conforming with APPLE 3.3 DOS. Data storage and retrieval from disc is an important feature of SEDPAK. Additional features of SEDPAK include condensation of raw settling data via standard size-calibration curves to yield statistical grain-size parameters, plots of grain-size frequency distributions and cumulative log/probability curves. The program also has a module for processing of grain-size frequency data from sieved samples. A further feature of SEDPAK is the option for automatic data processing and graphic output of a sequential or nonsequential array of samples on one side of a disc.

  13. Development of sampling plans for cotton bolls injured by stink bugs (Hemiptera: Pentatomidae).

    PubMed

    Reay-Jones, F P F; Toews, M D; Greene, J K; Reeves, R B

    2010-04-01

    Cotton, Gossypium hirsutum L., bolls were sampled in commercial fields for stink bug (Hemiptera: Pentatomidae) injury during 2007 and 2008 in South Carolina and Georgia. Across both years of this study, boll-injury percentages averaged 14.8 ± 0.3 (SEM). At average boll injury treatment levels of 10, 20, 30, and 50%, the percentage of samples with at least one injured boll was 82, 97, 100, and 100%, respectively. Percentage of field-sampling date combinations with average injury < 10, 20, 30, and 50% was 35, 80, 95, and 99%, respectively. At the average of 14.8% boll injury or 2.9 injured bolls per 20-boll sample, 112 samples at Dx = 0.1 (within 10% of the mean) were required for population estimation, compared with only 15 samples at Dx = 0.3. Using a sample size of 20 bolls, our study indicated that, at the 10% threshold and alpha = beta = 0.2 (with 80% confidence), control was not needed when < 1.03 bolls were injured. The sampling plan required continued sampling for a range of 1.03-3.8 injured bolls per 20-boll sample. Only when injury was > 3.8 injured bolls per 20-boll sample was a control measure needed. Sequential sampling plans were also determined for thresholds of 20, 30, and 50% injured bolls. Sample sizes for sequential sampling plans were significantly reduced when compared with a fixed sampling plan (n = 10) for all thresholds and error rates.
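
    Continue/stop boundaries of the kind described above are classically derived from Wald's sequential probability ratio test (SPRT) for binomial counts. The sketch below computes the two decision lines for a given sample size; the hypothesized injury proportions p0 and p1 are illustrative placeholders, not the values fitted in the study.

```python
import math

def sprt_boundaries(n, p0, p1, alpha, beta):
    """Wald SPRT decision lines for a binomial count after n observations.

    Sampling continues while the injured count lies strictly between the
    returned (lower, upper) lines; at/below the lower line H0 is accepted
    (no control needed), at/above the upper line H0 is rejected (treat).
    """
    a = math.log((1 - beta) / alpha)        # upper log-likelihood threshold
    b = math.log(beta / (1 - alpha))        # lower log-likelihood threshold
    c = math.log(p1 / p0) - math.log((1 - p1) / (1 - p0))  # step per "success"
    s = math.log((1 - p0) / (1 - p1)) / c   # common slope of both lines
    return s * n + b / c, s * n + a / c
```

For example, with illustrative proportions p0 = 0.05 and p1 = 0.15 and alpha = beta = 0.2, the plan at n = 20 bolls continues sampling while the injured count falls between the two lines, and both lines rise as more bolls are inspected.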

  14. Sequential Probability Ratio Testing with Power Projective Base Method Improves Decision-Making for BCI

    PubMed Central

    Liu, Rong

    2017-01-01

    Obtaining a fast and reliable decision is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this study, the EEG signals were first analyzed with a power projective base method. We then applied a decision-making model, sequential probability ratio testing (SPRT), for single-trial classification of motor imagery movement events. The unique strength of this classification method lies in its accumulative process, which increases the discriminative power as more and more evidence is observed over time. The properties of the method were illustrated on thirteen subjects' recordings from three datasets. Results showed that our power projective method outperformed two benchmark methods for every subject. Moreover, with the sequential classifier, the accuracies across subjects were significantly higher than with nonsequential ones. The average maximum accuracy of the SPRT method was 84.1%, compared with 82.3% for the sequential Bayesian (SB) method. The proposed SPRT method provides an explicit relationship between stopping time, thresholds, and error, which is important for balancing the time-accuracy trade-off. These results suggest SPRT would be useful in speeding up decision-making while trading off errors in BCI. PMID:29348781
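
    The accumulative SPRT process described above can be sketched for a stream of one-dimensional evidence. The Gaussian two-class model and all parameter values below are illustrative assumptions standing in for the paper's power projective features.

```python
import math

def sprt_decide(samples, mu0, mu1, sigma, alpha, beta):
    """Accumulate Gaussian log-likelihood ratios until an SPRT threshold
    is crossed; returns the decision and the stopping time (in samples)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at/above this level
    lower = math.log(beta / (1 - alpha))   # accept H0 at/below this level
    llr = 0.0
    for t, x in enumerate(samples, start=1):
        # log N(x; mu1, sigma) - log N(x; mu0, sigma); constants cancel
        llr += (x * (mu1 - mu0) - 0.5 * (mu1 ** 2 - mu0 ** 2)) / sigma ** 2
        if llr >= upper:
            return "H1", t
        if llr <= lower:
            return "H0", t
    return "undecided", len(samples)
```

    Evidence consistently favoring one class crosses its threshold after only a few samples, which is the mechanism by which stronger evidence shortens the stopping time while the thresholds pin down the error rates.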

  15. Perceptual Grouping Affects Pitch Judgments Across Time and Frequency

    PubMed Central

    Borchert, Elizabeth M. O.; Micheyl, Christophe; Oxenham, Andrew J.

    2010-01-01

    Pitch, the perceptual correlate of fundamental frequency (F0), plays an important role in speech, music and animal vocalizations. Changes in F0 over time help define musical melodies and speech prosody, while comparisons of simultaneous F0 are important for musical harmony, and for segregating competing sound sources. This study compared listeners’ ability to detect differences in F0 between pairs of sequential or simultaneous tones that were filtered into separate, non-overlapping spectral regions. The timbre differences induced by filtering led to poor F0 discrimination in the sequential, but not the simultaneous, conditions. Temporal overlap of the two tones was not sufficient to produce good performance; instead performance appeared to depend on the two tones being integrated into the same perceptual object. The results confirm the difficulty of comparing the pitches of sequential sounds with different timbres and suggest that, for simultaneous sounds, pitch differences may be detected through a decrease in perceptual fusion rather than an explicit coding and comparison of the underlying F0s. PMID:21077719

  16. Do statistical segmentation abilities predict lexical-phonological and lexical-semantic abilities in children with and without SLI?

    PubMed Central

    Mainela-Arnold, Elina; Evans, Julia L.

    2014-01-01

    This study tested the predictions of the procedural deficit hypothesis by investigating the relationship between sequential statistical learning and two aspects of lexical ability, lexical-phonological and lexical-semantic, in children with and without specific language impairment (SLI). Participants included 40 children (ages 8;5–12;3), 20 children with SLI and 20 with typical development. Children completed Saffran’s statistical word segmentation task, a lexical-phonological access task (gating task), and a word definition task. Poor statistical learners were also poor at managing lexical-phonological competition during the gating task. However, statistical learning was not a significant predictor of semantic richness in word definitions. The ability to track statistical sequential regularities may be important for learning the inherently sequential structure of lexical-phonology, but not as important for learning lexical-semantic knowledge. Consistent with the procedural/declarative memory distinction, the brain networks associated with the two types of lexical learning are likely to have different learning properties. PMID:23425593

  17. Simultaneous sequential monitoring of efficacy and safety led to masking of effects.

    PubMed

    van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg

    2016-08-01

    Usually, sequential designs for clinical trials are applied to the primary (efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and influence the decision whether to stop a trial early. The implications of simultaneous monitoring for trial decision making are as yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced overall type I errors as well as power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Careful consideration of scenarios must be taken into account when designing sequential trials. Simulation results can help guide trial design. Copyright © 2016 Elsevier Inc. All rights reserved.
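
    The kind of interplay the authors simulate can be reproduced in miniature: the sketch below estimates the familywise type I error when two correlated outcomes are each tested, unadjusted, at several interim looks. The flat boundary, correlation, and look schedule are arbitrary illustrative choices, not the O'Brien-Fleming or Triangular Test designs from the study.

```python
import numpy as np

def familywise_error(n_trials=4000, looks=(100, 200, 300, 400, 500),
                     rho=0.5, crit=1.96, seed=1):
    """Monte Carlo estimate of the probability that EITHER of two
    correlated outcomes crosses an unadjusted two-sided boundary at
    any of several interim looks, when both null hypotheses are true."""
    rng = np.random.default_rng(seed)
    chol = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    n_max = looks[-1]
    rejections = 0
    for _ in range(n_trials):
        # per-subject standardized effects for the two outcomes under H0
        increments = rng.standard_normal((n_max, 2)) @ chol.T
        cum = np.cumsum(increments, axis=0)
        for n in looks:
            z = cum[n - 1] / np.sqrt(n)   # z-statistic per outcome at look n
            if np.any(np.abs(z) >= crit):
                rejections += 1
                break
    return rejections / n_trials
```

    With five unadjusted looks at two outcomes, the estimated error lands far above the nominal 5% of a single test, which is exactly the inflation that spending functions and stricter group sequential boundaries are designed to remove.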

  18. Effective Sequential Classifier Training for SVM-Based Multitemporal Remote Sensing Image Classification

    NASA Astrophysics Data System (ADS)

    Guo, Yiqing; Jia, Xiuping; Paull, David

    2018-06-01

    The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the number of training samples required for classifier training on an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies than two state-of-the-art model transfer algorithms. When training data were insufficient, the overall classification accuracy of the incoming image improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with that obtained without assistance from previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
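
    The two-stage idea (predict a classifier from the temporal trend, then fine-tune it on a few current samples) can be sketched with plain linear classifiers. The per-coordinate linear trend, the logistic-loss fine-tuning, and all data shapes below are simplifying assumptions standing in for the SVM machinery of the paper.

```python
import numpy as np

def predict_next_classifier(ws):
    """Extrapolate the next classifier's weights from the linear temporal
    trend of previously trained weight vectors (one row per past image)."""
    ws = np.asarray(ws, dtype=float)
    t = np.arange(len(ws))
    coeffs = np.polyfit(t, ws, 1)      # independent linear fit per coordinate
    return coeffs[0] * len(ws) + coeffs[1]

def fine_tune(w, X, y, lr=0.1, steps=50):
    """A few logistic-regression gradient steps on the small labelled set
    from the incoming image (labels in {-1, +1})."""
    w = w.copy()
    for _ in range(steps):
        margins = y * (X @ w)
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
        w -= lr * grad
    return w
```

    Starting fine-tuning from the trend-predicted weights rather than from scratch is what lets each incoming image get away with only a handful of labelled samples.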

  19. EEG Classification with a Sequential Decision-Making Method in Motor Imagery BCI.

    PubMed

    Liu, Rong; Wang, Yongxuan; Newman, Geoffrey I; Thakor, Nitish V; Ying, Sarah

    2017-12-01

    To develop a subject-specific classifier that recognizes mental states quickly and reliably is an important issue in brain-computer interfaces (BCI), particularly in practical real-time applications such as wheelchair or neuroprosthetic control. In this paper, a sequential decision-making strategy is explored in conjunction with an optimal wavelet analysis for EEG classification. Subject-specific wavelet parameters based on a grid-search method were first developed to determine the evidence accumulation curve for the sequential classifier. We then proposed a new method to set the two constrained thresholds in the sequential probability ratio test (SPRT) based on the accumulation curve and a desired expected stopping time. As a result, it balances the decision time of each class; we term it balanced-threshold SPRT (BTSPRT). The properties of the method were illustrated on 14 subjects' recordings from offline and online tests. Results showed the average maximum accuracy of the proposed method to be 83.4% with an average decision time of 2.77 s, compared with 79.2% accuracy and a decision time of 3.01 s for the sequential Bayesian (SB) method. The BTSPRT method not only improves classification accuracy and decision speed compared with nonsequential or SB methods, but also provides an explicit relationship between stopping time, thresholds and error, which is important for balancing the speed-accuracy tradeoff. These results suggest that BTSPRT would be useful in explicitly adjusting the tradeoff between rapid decision-making and error-free device control.

  20. Long-Term Impacts Induced by Disposal of Contaminated River Sediments in Elliott Bay, Seattle, Washington

    DTIC Science & Technology

    1984-09-01

    and these accompanied the sample residue through sieving to avoid sample mix-up. B. Field data sheets required logger's initials on each page to ensure data completeness. C. Metal trays were placed to catch residue spillage during residue transfer from sieves to sample bottles. D. Sample bottles...methodologies were comparable for all sample types and consisted of four sequential components: extraction, clean-up, gas chromatographic (GC) analysis, and

  1. Sequential analysis in neonatal research-systematic review.

    PubMed

    Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne

    2018-05-01

    As more new drugs are discovered, traditional designs reach their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review on the US National Library of Medicine and Excerpta Medica database of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates, by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674), with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been used frequently in neonatology. They can potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs reach their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns have been used infrequently, and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).

  2. Movement of particles using sequentially activated dielectrophoretic particle trapping

    DOEpatents

    Miles, Robin R.

    2004-02-03

    Manipulation of DNA and cells/spores using dielectrophoretic (DEP) forces to perform sample preparation protocols for polymerase chain reaction (PCR)-based assays for various applications. This is accomplished by movement of particles using sequentially activated dielectrophoretic particle trapping. DEP forces induce a dipole in particles, and these particles can be trapped in non-uniform fields. The particles can be trapped in the high-field-strength region of one set of electrodes. By switching off this field and switching on an adjacent set of electrodes, particles can be moved down a channel with little or no flow.

  3. Prevalence of urinary tract infection (UTI) in sequential acutely unwell children presenting in primary care: exploratory study.

    PubMed

    O'Brien, Kathryn; Stanton, Naomi; Edwards, Adrian; Hood, Kerenza; Butler, Christopher C

    2011-03-01

    Due to the non-specific nature of symptoms of UTI in children and low levels of urine sampling, the prevalence of UTI amongst acutely ill children in primary care is unknown. To undertake an exploratory study of acutely ill children consulting in primary care, determine the feasibility of obtaining urine samples, and describe presenting symptoms and signs, and the proportion with UTI. Exploratory, observational study. Four general practices in South Wales. A total of 99 sequential attendees with acute illness aged less than five years. UTI defined by >10⁵ organisms/mL on laboratory culture of urine. Urine samples were obtained in 75 (76%) children. Three (4%) met microbiological criteria for UTI. GPs indicated they would not normally have obtained urine samples in any of these three children. However, all had received antibiotics for suspected alternative infections. Urine sample collection is feasible from the majority of acutely ill children in primary care, including infants. Some cases of UTI may be missed if children thought to have an alternative site of infection are excluded from urine sampling. A larger study is needed to more accurately determine the prevalence of UTI in children consulting with acute illness in primary care, and to explore which symptoms and signs might help clinicians effectively target urine sampling.

  4. Method and apparatus for telemetry adaptive bandwidth compression

    NASA Technical Reports Server (NTRS)

    Graham, Olin L.

    1987-01-01

    Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field, together with range-rate information from the sensor, is then sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and the detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes the sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and the bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.

  5. Associations among measures of sequential processing in motor and linguistics tasks in adults with and without a family history of childhood apraxia of speech: a replication study.

    PubMed

    Button, Le; Peter, Beate; Stoel-Gammon, Carol; Raskind, Wendy H

    2013-03-01

    The purpose of this study was to address the hypothesis that childhood apraxia of speech (CAS) is influenced by an underlying deficit in sequential processing that is also expressed in other modalities. In a sample of 21 adults from five multigenerational families, 11 with histories of various familial speech sound disorders, 3 biologically related adults from a family with familial CAS showed motor sequencing deficits in an alternating motor speech task. Compared with the other adults, these three participants showed deficits in tasks requiring high loads of sequential processing, including nonword imitation, nonword reading and spelling. Qualitative error analyses in real word and nonword imitations revealed group differences in phoneme sequencing errors. Motor sequencing ability was correlated with phoneme sequencing errors during real word and nonword imitation, reading and spelling. Correlations were characterized by extremely high scores in one family and extremely low scores in another. Results are consistent with a central deficit in sequential processing in CAS of familial origin.

  6. Devaluation and sequential decisions: linking goal-directed and model-based behavior

    PubMed Central

    Friedel, Eva; Koch, Stefan P.; Wendt, Jean; Heinz, Andreas; Deserno, Lorenz; Schlagenhauf, Florian

    2014-01-01

    In experimental psychology, different experiments have been developed to assess goal-directed as compared to habitual control over instrumental decisions. As in animal studies, selective devaluation procedures have been used. More recently, sequential decision-making tasks have been designed to assess the degree of goal-directed vs. habitual choice behavior in terms of an influential computational theory of model-based compared to model-free behavioral control. As recently suggested, the different measurements are thought to reflect the same construct. Yet, there has been no attempt to directly assess the construct validity of these different measurements. In the present study, we used a devaluation paradigm and a sequential decision-making task to address this question of construct validity in a sample of 18 healthy male human participants. Correlational analysis revealed a positive association between model-based choices during sequential decisions and goal-directed behavior after devaluation, suggesting a single framework underlying both operationalizations and speaking in favor of the construct validity of both measurement approaches. Up to now, this has been merely assumed but never directly tested in humans. PMID:25136310

  7. The parallel-sequential field subtraction techniques for nonlinear ultrasonic imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-04-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage and are particularly sensitive to closed defects. This study utilizes two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing, a high-intensity ultrasonic beam is formed in the specimen at the focal point. In sequential focusing, however, only low-intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded; under elastic assumptions, the parallel and sequential images are expected to be identical. Here we measure the difference between these images formed from the coherent component of the field and use this to characterize the nonlinearity of closed fatigue cracks. In particular, we monitor the reduction in amplitude at the fundamental frequency at each focal point and use this metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g., the back wall or large scatterers) and allow damage to be detected at an early stage.

  8. Judgments relative to patterns: how temporal sequence patterns affect judgments and memory.

    PubMed

    Kusev, Petko; Ayton, Peter; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Stewart, Neil; Chater, Nick

    2011-12-01

    Six experiments studied relative frequency judgment and recall of sequentially presented items drawn from 2 distinct categories (i.e., city and animal). The experiments show that judged frequencies of categories of sequentially encountered stimuli are affected by certain properties of the sequence configuration. We found (a) a first-run effect whereby people overestimated the frequency of a given category when that category was the first repeated category to occur in the sequence and (b) a dissociation between judgments and recall; respondents may judge 1 event more likely than the other and yet recall more instances of the latter. Specifically, the distribution of recalled items does not correspond to the frequency estimates for the event categories, indicating that participants do not make frequency judgments by sampling their memory for individual items as implied by other accounts such as the availability heuristic (Tversky & Kahneman, 1973) and the availability process model (Hastie & Park, 1986). We interpret these findings as reflecting the operation of a judgment heuristic sensitive to sequential patterns and offer an account for the relationship between memory and judged frequencies of sequentially encountered stimuli.

  10. The Domino Way to Heterocycles

    PubMed Central

    Padwa, Albert; Bur, Scott K.

    2007-01-01

    Sequential transformations enable the facile synthesis of complex target molecules from simple building blocks in a single preparative step. Their value is amplified if they also create multiple stereogenic centers. In the ongoing search for new domino processes, emphasis is usually placed on sequential reactions which occur cleanly and without forming by-products. As a prerequisite for an ideally proceeding one-pot sequential transformation, the reactivity pattern of all participating components has to be such that each building block gets involved in a reaction only when it is supposed to do so. The development of sequences that combine transformations of fundamentally different mechanisms broadens the scope of such procedures in synthetic chemistry. This mini review contains a representative sampling from the last 15 years on the kinds of reactions that have been sequenced into cascades to produce heterocyclic molecules. PMID:17940591

  11. Rapid Decisions From Experience

    PubMed Central

    Zeigenfuse, Matthew D.; Pleskac, Timothy J.; Liu, Taosheng

    2014-01-01

    In many everyday decisions, people quickly integrate noisy samples of information to form a preference among alternatives that offer uncertain rewards. Here, we investigated this decision process using the Flash Gambling Task (FGT), in which participants made a series of choices between a certain payoff and an uncertain alternative that produced a normal distribution of payoffs. For each choice, participants experienced the distribution of payoffs via rapid samples updated every 50 ms. We show that people can make these rapid decisions from experience and that the decision process is consistent with a sequential sampling process. Results also reveal a dissociation between these preferential decisions and equivalent perceptual decisions where participants had to determine which alternatives contained more dots on average. To account for this dissociation, we developed a sequential sampling rank-dependent utility model, which showed that participants in the FGT attended more to larger potential payoffs than participants in the perceptual task despite being given equivalent information. We discuss the implications of these findings in terms of computational models of preferential choice and a more complete understanding of experience-based decision making. PMID:24549141
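
    A minimal random-walk version of the sequential sampling account can be sketched as follows; the evidence rule (sampled payoff minus the sure payoff), the threshold, and the 50-ms pacing are illustrative assumptions and omit the rank-dependent utility weighting that the authors actually fitted.

```python
import random

def sequential_choice(certain, mu, sigma, threshold=5.0, max_samples=200, seed=3):
    """Accumulate evidence from rapid payoff samples: each sample nudges a
    tally toward the uncertain option when it beats the sure payoff, and a
    choice is made once the tally crosses either threshold."""
    rng = random.Random(seed)
    evidence = 0.0
    for t in range(1, max_samples + 1):
        payoff = rng.gauss(mu, sigma)
        evidence += payoff - certain   # positive favors the uncertain option
        if evidence >= threshold:
            return "uncertain", t
        if evidence <= -threshold:
            return "certain", t
    # deadline reached: go with the current lean
    return ("uncertain" if evidence > 0 else "certain"), max_samples
```

    The same machinery yields both the choice and its latency, which is why sequential sampling models can speak to the time-limited sampling in the Flash Gambling Task.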

  12. Co-expression of HoxA9 and bcr-abl genes in chronic myeloid leukemia.

    PubMed

    Tedeschi, Fabián A; Cardozo, Maria A; Valentini, Rosanna; Zalazar, Fabián E

    2010-05-01

    We have analyzed the co-expression of the bcr-abl and HoxA9 genes in the follow-up of patients with chronic myeloid leukemia (CML). In the present work we measured the HoxA9 and bcr-abl gene expression in sequential samples. In all patients, bcr-abl and HoxA9 were expressed at detectable levels in every sample. When the results were expressed in relation to abl, two different situations were found: (a) patients clinically stable at second sampling, with low relative risk at diagnosis (low Sokal's score), did not show significant differences in both bcr-abl and HoxA9 levels in the sequential samples analyzed, and (b) patients with poor prognosis (showing intermediate or high Sokal's score at diagnosis) had increased expression of bcr-abl as well as HoxA9 genes (p < 0.05). Since HoxA9 gene expression remains at relatively constant levels throughout adult life, our results could reflect actual changes in the expression rate of this gene associated with bcr-abl during the progression of CML.

  13. Deferiprone, a non-toxic reagent for determination of iron in samples via sequential injection analysis

    NASA Astrophysics Data System (ADS)

    Pragourpun, Kraivinee; Sakee, Uthai; Fernandez, Carlos; Kruanetr, Senee

    2015-05-01

    We present for the first time the use of deferiprone as a non-toxic complexing agent for the determination of iron by sequential injection analysis in pharmaceuticals and food samples. The method was based on the reaction of Fe(III) and deferiprone in phosphate buffer at pH 7.5 to give a Fe(III)-deferiprone complex, which showed a maximum absorption at 460 nm. Under the optimum conditions, the linearity range for iron determination was found over the range of 0.05-3.0 μg mL-1 with a correlation coefficient (r2) of 0.9993. The limit of detection and limit of quantitation were 0.032 μg mL-1 and 0.055 μg mL-1, respectively. The relative standard deviation (%RSD) of the method was less than 5.0% (n = 11), and the percentage recovery was found in the range of 96.0-104.0%. The proposed method was satisfactorily applied for the determination of Fe(III) in pharmaceuticals, water and food samples with a sampling rate of 60 h-1.
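The figures of merit reported above (a linear calibration with its correlation coefficient, plus detection and quantitation limits) can be computed generically. This is an illustrative least-squares sketch under the common convention LOD = 3.3·σ(blank)/slope and LOQ = 10·σ(blank)/slope; the numbers in the usage note are made up, not the paper's data:

```python
def linear_calibration(conc, signal):
    """Least-squares fit signal = slope*conc + intercept.
    Returns (slope, intercept, r_squared)."""
    n = len(conc)
    mx = sum(conc) / n
    my = sum(signal) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in signal)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)
    return slope, intercept, r2

def detection_limit(blank_sd, slope, k=3.3):
    """k = 3.3 gives the LOD, k = 10 the LOQ, from the blank's
    standard deviation and the calibration slope."""
    return k * blank_sd / slope
```

For a perfectly linear standard series conc = [0, 1, 2, 3] µg mL-1 with signal = [0.0, 0.2, 0.4, 0.6], the fit returns slope 0.2, intercept 0, and r² = 1.0.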

  14. Detailed imaging and genetic analysis reveal a secondary BRAF(L505H) resistance mutation and extensive intrapatient heterogeneity in metastatic BRAF mutant melanoma patients treated with vemurafenib.

    PubMed

    Hoogstraat, Marlous; Gadellaa-van Hooijdonk, Christa G; Ubink, Inge; Besselink, Nicolle J M; Pieterse, Mark; Veldhuis, Wouter; van Stralen, Marijn; Meijer, Eelco F J; Willems, Stefan M; Hadders, Michael A; Kuilman, Thomas; Krijgsman, Oscar; Peeper, Daniel S; Koudijs, Marco J; Cuppen, Edwin; Voest, Emile E; Lolkema, Martijn P

    2015-05-01

    Resistance to treatment is the main problem of targeted treatment for cancer. We followed ten patients during treatment with vemurafenib, by three-dimensional imaging. In all patients, only a subset of lesions progressed. Next-generation DNA sequencing was performed on sequential biopsies in four patients to uncover mechanisms of resistance. In two patients, we identified mutations that explained resistance to vemurafenib; one of these patients had a secondary BRAF L505H mutation. This is the first observation of a secondary BRAF mutation in a vemurafenib-resistant patient-derived melanoma sample, which confirms the potential importance of the BRAF L505H mutation in the development of therapy resistance. Moreover, this study hints toward an important role for tumor heterogeneity in determining the outcome of targeted treatments. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Spectrophotometric determination of sulphate in automotive fuel ethanol by sequential injection analysis using dimethylsulphonazo(III) reaction.

    PubMed

    de Oliveira, Fabio Santos; Korn, Mauro

    2006-01-15

A sensitive SIA method was developed for sulphate determination in automotive fuel ethanol. This method was based on the reaction of sulphate with barium-dimethylsulphonazo(III), leading to a decrease in the magnitude of the analytical signal monitored at 665 nm. Alcohol fuel samples were burned beforehand to avoid matrix effects in the sulphate determinations. Binary sampling and stop-flow strategies were used to increase the sensitivity of the method. The optimization of the analytical parameters was performed by the response surface method using Box-Behnken and central composite designs. The proposed sequential flow procedure permits the determination of up to 10.0 mg SO(4)(2-) l(-1) with R.S.D. <2.5% and a limit of detection of 0.27 mg l(-1). The method has been successfully applied to sulphate determination in automotive fuel alcohol, and the results agreed with the reference volumetric method. Under the optimized conditions, the SIA system processed 27 samples per hour.

  16. Effects of neostriatal 6-OHDA lesion on performance in a rat sequential reaction time task.

    PubMed

    Domenger, D; Schwarting, R K W

    2008-10-31

Work in humans and monkeys has provided evidence that the basal ganglia, and the neurotransmitter dopamine therein, play an important role in sequential learning and performance. Compared to primates, experimental work in rodents is rather sparse, largely because tasks comparable to the human ones, especially serial reaction time tasks (SRTT), had been lacking until recently. We have developed a rat model of the SRTT, which allows the study of neural correlates of sequential performance and motor sequence execution. Here, we report the effects of dopaminergic neostriatal lesions, performed using bilateral 6-hydroxydopamine injections, on performance of well-trained rats tested in our SRTT. Sequential behavior was measured in two ways: first, the effects of small violations of otherwise well-trained sequences were examined as a measure of attention and automation; second, sequential versus random performance was compared as a measure of sequential learning. Neurochemically, the lesions led to sub-total dopamine depletions in the neostriatum, which ranged around 60% in the lateral, and around 40% in the medial, neostriatum. These lesions led to a general instrumental impairment in terms of reduced speed (response latencies) and response rate, and these deficits were correlated with the degree of striatal dopamine loss. Furthermore, the violation test indicated that the lesion group responded in a less automated fashion. The comparison of random versus sequential responding showed that the lesion group did not retain its superior sequential performance in terms of speed, whereas it did in terms of accuracy. Also, rats with lesions did not improve further in overall performance as compared to pre-lesion values, whereas controls did. These results support previous findings that neostriatal dopamine is involved in instrumental behaviour in general, but also show that such lesions are not sufficient to completely abolish sequential performance, at least when it was acquired before the lesion, as tested here.

  17. Plasmon-driven sequential chemical reactions in an aqueous environment.

    PubMed

    Zhang, Xin; Wang, Peijie; Zhang, Zhenglong; Fang, Yurui; Sun, Mengtao

    2014-06-24

    Plasmon-driven sequential chemical reactions were successfully realized in an aqueous environment. In an electrochemical environment, sequential chemical reactions were driven by an applied potential and laser irradiation. Furthermore, the rate of the chemical reaction was controlled via pH, which provides indirect evidence that the hot electrons generated from plasmon decay play an important role in plasmon-driven chemical reactions. In acidic conditions, the hot electrons were captured by the abundant H(+) in the aqueous environment, which prevented the chemical reaction. The developed plasmon-driven chemical reactions in an aqueous environment will significantly expand the applications of plasmon chemistry and may provide a promising avenue for green chemistry using plasmon catalysis in aqueous environments under irradiation by sunlight.

  18. Plasmon-driven sequential chemical reactions in an aqueous environment

    PubMed Central

    Zhang, Xin; Wang, Peijie; Zhang, Zhenglong; Fang, Yurui; Sun, Mengtao

    2014-01-01

    Plasmon-driven sequential chemical reactions were successfully realized in an aqueous environment. In an electrochemical environment, sequential chemical reactions were driven by an applied potential and laser irradiation. Furthermore, the rate of the chemical reaction was controlled via pH, which provides indirect evidence that the hot electrons generated from plasmon decay play an important role in plasmon-driven chemical reactions. In acidic conditions, the hot electrons were captured by the abundant H+ in the aqueous environment, which prevented the chemical reaction. The developed plasmon-driven chemical reactions in an aqueous environment will significantly expand the applications of plasmon chemistry and may provide a promising avenue for green chemistry using plasmon catalysis in aqueous environments under irradiation by sunlight. PMID:24958029

  19. Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions

    DOE PAGES

    Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.

    2017-01-09

We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations with respect to probability laws associated with a discretization, for instance in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than that of independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61-123). The assumptions are verified for an example.
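The multilevel idea underlying MLSMC rests on the telescoping identity E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}], with each correction term estimated from coupled draws. The sketch below shows plain multilevel Monte Carlo with common random numbers, the simpler setting that MLSMC generalizes to cases where such independent coupled sampling is unavailable; `f_level` and the per-level sample counts are illustrative choices, not from the paper:

```python
import random

def mlmc_estimate(f_level, max_level, samples_per_level, seed=0):
    """Telescoping multilevel estimator of E[f_L]:
    level 0 is estimated directly, and each higher level contributes an
    estimate of E[f_l - f_{l-1}] using the same uniform draw u at both
    levels (common random numbers keep the differences small)."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        n = samples_per_level[level]
        acc = 0.0
        for _ in range(n):
            u = rng.random()  # one draw couples the two adjacent levels
            fl = f_level(level, u)
            acc += fl if level == 0 else fl - f_level(level - 1, u)
        total += acc / n
    return total
```

As a toy target, take f_level(l, u) = (floor(u * 2**l) / 2**l)**2, a level-l discretization of u**2 whose expectation approaches E[u^2] = 1/3 as the level grows.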

  20. Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.

We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations with respect to probability laws associated with a discretization, for instance in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than that of independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61-123). The assumptions are verified for an example.

  1. IgG and IgM anti-snRNP reactivity in sequentially obtained serum samples from patients with connective tissue diseases.

    PubMed Central

    Nyman, U; Lundberg, I; Hedfors, E; Wahren, M; Pettersson, I

    1992-01-01

Sequentially obtained serum samples from 30 patients with connective tissue disease positive for antibody to ribonucleoprotein (RNP) were examined to determine the specificities of IgG and IgM antibodies to snRNP during the disease course using immunoblotting of nuclear extracts. The antibody patterns were correlated with disease activity. The patterns of antibody to snRNP of individual patients were mainly stable during the study, but changes in levels of antibody to snRNP were seen corresponding to changes in clinical activity. These results indicate that increased reactivity of serum IgM antibodies against the B/B' proteins seems to precede a clinically evident exacerbation of disease, whereas IgG antibody reactivity to the 70 K protein peaks at the time of a disease flare. PMID:1485812

  2. Exploiting an automated microfluidic hydrodynamic sequential injection system for determination of phosphate.

    PubMed

    Khongpet, Wanpen; Pencharee, Somkid; Puangpila, Chanida; Kradtap Hartwell, Supaporn; Lapanantnoppakhun, Somchai; Jakmunee, Jaroon

    2018-01-15

A microfluidic hydrodynamic sequential injection (μHSI) spectrophotometric system was designed and fabricated. The system was built by laser engraving a manifold pattern on an acrylic block and sealing it with another flat acrylic plate to form a microfluidic channel platform. The platform was incorporated with small solenoid valves to obtain a portable setup for programmable control of the liquid flow into the channel according to the HSI principle. The system was demonstrated for the determination of phosphate using a molybdenum blue method. Ascorbic acid, standard or sample, and acidic molybdate solutions were sequentially aspirated to fill the channel, forming a stacked zone before flowing to the detector. Under the optimum conditions, a linear calibration graph in the range of 0.1-6 mg P L-1 was obtained. The detection limit was 0.1 mg L-1. The system is compact (5.0 mm thick, 80 mm wide × 140 mm long), durable, portable, cost-effective, and consumes only small amounts of chemicals (83 μL each of molybdate and ascorbic acid, 133 μL of the sample solution and 1.7 mL of water carrier per run). It was applied for the determination of phosphate content in extracted soil samples. The percent recoveries of the analysis were in the range of 91.2-107.3%. The results obtained agreed well with those of the batch spectrophotometric method. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Systematic evaluation of sequential geostatistical resampling within MCMC for posterior sampling of near-surface geophysical inverse problems

    NASA Astrophysics Data System (ADS)

    Ruggeri, Paolo; Irving, James; Holliger, Klaus

    2015-08-01

We critically examine the performance of sequential geostatistical resampling (SGR) as a model proposal mechanism for Bayesian Markov-chain-Monte-Carlo (MCMC) solutions to near-surface geophysical inverse problems. Focusing on a series of simple yet realistic synthetic crosshole georadar tomographic examples characterized by different numbers of data, levels of data error and degrees of model parameter spatial correlation, we investigate the efficiency of three different resampling strategies with regard to their ability to generate statistically independent realizations from the Bayesian posterior distribution. Quite importantly, our results show that, no matter what resampling strategy is employed, many of the examined test cases require an unreasonably high number of forward model runs to produce independent posterior samples, meaning that the SGR approach as currently implemented will not be computationally feasible for a wide range of problems. Although use of a novel gradual-deformation-based proposal method can help to alleviate these issues, it does not offer a full solution. Further, the nature of the SGR strongly influences MCMC performance; however, no clear rule exists as to what set of inversion parameters and/or overall proposal acceptance rate allows for the most efficient implementation. We conclude that although the SGR methodology is highly attractive, as it allows for the consideration of complex geostatistical priors as well as conditioning to hard and soft data, further developments are necessary in the context of novel or hybrid MCMC approaches for it to be considered generally suitable for near-surface geophysical inversions.
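The proposal mechanism examined here can be sketched in miniature: a Metropolis sampler whose proposal resamples a random subset of model cells from the prior. In the toy version below the prior is i.i.d. standard normal, so the conditional resample is trivial and the prior and proposal terms cancel, leaving a pure likelihood-ratio acceptance test; the forward operator, noise level, and dimensions are all hypothetical stand-ins for a real geostatistical prior and tomographic forward model:

```python
import math
import random

def sgr_style_mcmc(data, forward, n_iter=5000, n_model=20,
                   frac_resample=0.25, sigma=0.1, seed=0):
    """Metropolis sampler whose proposal redraws a random subset of model
    cells from an i.i.d. standard-normal prior (a toy analogue of
    sequential geostatistical resampling). With this proposal the prior
    terms cancel and acceptance reduces to the likelihood ratio.
    Returns (final model, acceptance rate)."""
    rng = random.Random(seed)

    def log_like(m):
        return -0.5 * sum((d - g) ** 2
                          for d, g in zip(data, forward(m))) / sigma ** 2

    m = [rng.gauss(0.0, 1.0) for _ in range(n_model)]
    ll = log_like(m)
    accepted = 0
    k = max(1, int(frac_resample * n_model))
    for _ in range(n_iter):
        prop = m[:]
        for i in rng.sample(range(n_model), k):
            prop[i] = rng.gauss(0.0, 1.0)  # resample this cell from the prior
        ll_prop = log_like(prop)
        if rng.random() < math.exp(min(0.0, ll_prop - ll)):
            m, ll = prop, ll_prop
            accepted += 1
    return m, accepted / n_iter
```

Lowering frac_resample raises the acceptance rate but slows exploration, which mirrors the efficiency trade-off the study investigates.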

  4. Introduction to the DISRUPT postprandial database: subjects, studies and methodologies.

    PubMed

    Jackson, Kim G; Clarke, Dave T; Murray, Peter; Lovegrove, Julie A; O'Malley, Brendan; Minihane, Anne M; Williams, Christine M

    2010-03-01

    Dysregulation of lipid and glucose metabolism in the postprandial state are recognised as important risk factors for the development of cardiovascular disease and type 2 diabetes. Our objective was to create a comprehensive, standardised database of postprandial studies to provide insights into the physiological factors that influence postprandial lipid and glucose responses. Data were collated from subjects (n = 467) taking part in single and sequential meal postprandial studies conducted by researchers at the University of Reading, to form the DISRUPT (DIetary Studies: Reading Unilever Postprandial Trials) database. Subject attributes including age, gender, genotype, menopausal status, body mass index, blood pressure and a fasting biochemical profile, together with postprandial measurements of triacylglycerol (TAG), non-esterified fatty acids, glucose, insulin and TAG-rich lipoprotein composition are recorded. A particular strength of the studies is the frequency of blood sampling, with on average 10-13 blood samples taken during each postprandial assessment, and the fact that identical test meal protocols were used in a number of studies, allowing pooling of data to increase statistical power. The DISRUPT database is the most comprehensive postprandial metabolism database that exists worldwide and preliminary analysis of the pooled sequential meal postprandial dataset has revealed both confirmatory and novel observations with respect to the impact of gender and age on the postprandial TAG response. Further analysis of the dataset using conventional statistical techniques along with integrated mathematical models and clustering analysis will provide a unique opportunity to greatly expand current knowledge of the aetiology of inter-individual variability in postprandial lipid and glucose responses.

  5. The use of main concept analysis to measure discourse production in Cantonese-speaking persons with aphasia: a preliminary report.

    PubMed

    Kong, Anthony Pak-Hin

    2009-01-01

    Discourse produced by speakers with aphasia contains rich and valuable information for researchers to understand the manifestation of aphasia as well as for clinicians to plan specific treatment components for their clients. Various approaches to investigate aphasic discourse have been proposed in the English literature. However, this is not the case in Chinese. As a result, clinical evaluations of aphasic discourse have not been a common practice. This problem is further compounded by the lack of validated stimuli that are culturally appropriate for language elicitation. The purpose of this study was twofold: (a) to develop and validate four sequential pictorial stimuli for elicitation of language samples in Cantonese speakers with aphasia, and (b) to investigate the use of a main concept measurement, a clinically oriented quantitative system, to analyze the elicited language samples. Twenty speakers with aphasia and ten normal speakers were invited to participate in this study. The aphasic group produced significantly less key information than the normal group. More importantly, a strong relationship was also found between aphasia severity and production of main concepts. While the results of the inter-rater and intra-rater reliability suggested the scoring system to be reliable, the test-retest results yielded strong and significant correlations across two testing sessions one to three weeks apart. Readers will demonstrate better understanding of (1) the development and validation of newly devised sequential pictorial stimuli to elicit oral language production, and (2) the use of a main concept measurement to quantify aphasic connected speech in Cantonese Chinese.

  6. An exact computational method for performance analysis of sequential test algorithms for detecting network intrusions

    NASA Astrophysics Data System (ADS)

    Chen, Xinjia; Lacy, Fred; Carriere, Patrick

    2015-05-01

Sequential test algorithms are playing increasingly important roles in the quick detection of network intrusions such as portscanners. In view of the fact that such algorithms are usually analyzed based on intuitive approximation or asymptotic analysis, we develop an exact computational method for the performance analysis of such algorithms. Our method can be used to calculate the probability of false alarm and the average detection time to arbitrary pre-specified accuracy.
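For a sequential test on Bernoulli observations with an absorbing rejection boundary, the false alarm probability can be computed exactly by forward recursion over the reachable success counts rather than by approximation. This is a generic dynamic-programming sketch of that idea, not the authors' specific algorithm; the boundary shape is an illustrative input:

```python
def exact_false_alarm(p0, boundary, n_max):
    """Exact probability that the running count of successes ever reaches
    boundary[t-1] at trial t (t = 1..n_max), under the null hypothesis
    success probability p0. States that cross the boundary are absorbed
    (an alarm is raised); the rest propagate to the next trial."""
    probs = {0: 1.0}  # success count -> probability of being there, unabsorbed
    alarm = 0.0
    for t in range(1, n_max + 1):
        nxt = {}
        for s, p in probs.items():
            for ds, w in ((1, p0), (0, 1.0 - p0)):
                s2 = s + ds
                if s2 >= boundary[t - 1]:
                    alarm += p * w          # boundary crossed: false alarm
                else:
                    nxt[s2] = nxt.get(s2, 0.0) + p * w
        probs = nxt
    return alarm
```

As a sanity check, a boundary that alarms on the first success gives 1 - (1 - p0)**n, the probability of at least one success in n trials.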

  7. The timing of language learning shapes brain structure associated with articulation.

    PubMed

    Berken, Jonathan A; Gracco, Vincent L; Chen, Jen-Kai; Klein, Denise

    2016-09-01

    We compared the brain structure of highly proficient simultaneous (two languages from birth) and sequential (second language after age 5) bilinguals, who differed only in their degree of native-like accent, to determine how the brain develops when a skill is acquired from birth versus later in life. For the simultaneous bilinguals, gray matter density was increased in the left putamen, as well as in the left posterior insula, right dorsolateral prefrontal cortex, and left and right occipital cortex. For the sequential bilinguals, gray matter density was increased in the bilateral premotor cortex. Sequential bilinguals with better accents also showed greater gray matter density in the left putamen, and in several additional brain regions important for sensorimotor integration and speech-motor control. Our findings suggest that second language learning results in enhanced brain structure of specific brain areas, which depends on whether two languages are learned simultaneously or sequentially, and on the extent to which native-like proficiency is acquired.

  8. Self-regulated learning of important information under sequential and simultaneous encoding conditions.

    PubMed

    Middlebrooks, Catherine D; Castel, Alan D

    2018-05-01

    Learners make a number of decisions when attempting to study efficiently: they must choose which information to study, for how long to study it, and whether to restudy it later. The current experiments examine whether documented impairments to self-regulated learning when studying information sequentially, as opposed to simultaneously, extend to the learning of and memory for valuable information. In Experiment 1, participants studied lists of words ranging in value from 1-10 points sequentially or simultaneously at a preset presentation rate; in Experiment 2, study was self-paced and participants could choose to restudy. Although participants prioritized high-value over low-value information, irrespective of presentation, those who studied the items simultaneously demonstrated superior value-based prioritization with respect to recall, study selections, and self-pacing. The results of the present experiments support the theory that devising, maintaining, and executing efficient study agendas is inherently different under sequential formatting than simultaneous. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  9. Sequential inference as a mode of cognition and its correlates in fronto-parietal and hippocampal brain regions

    PubMed Central

    Friston, Karl J.; Dolan, Raymond J.

    2017-01-01

    Normative models of human cognition often appeal to Bayesian filtering, which provides optimal online estimates of unknown or hidden states of the world, based on previous observations. However, in many cases it is necessary to optimise beliefs about sequences of states rather than just the current state. Importantly, Bayesian filtering and sequential inference strategies make different predictions about beliefs and subsequent choices, rendering them behaviourally dissociable. Taking data from a probabilistic reversal task we show that subjects’ choices provide strong evidence that they are representing short sequences of states. Between-subject measures of this implicit sequential inference strategy had a neurobiological underpinning and correlated with grey matter density in prefrontal and parietal cortex, as well as the hippocampus. Our findings provide, to our knowledge, the first evidence for sequential inference in human cognition, and by exploiting between-subject variation in this measure we provide pointers to its neuronal substrates. PMID:28486504
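The Bayesian filtering baseline referred to here, optimal online estimation of the current hidden state, amounts to a predict/update cycle. A minimal discrete-state sketch (the two-state example in the usage note is hypothetical, not from the task in this study):

```python
def bayes_filter_step(belief, transition, likelihood):
    """One predict/update cycle of a discrete Bayesian filter.
    belief[i]      = P(state i | past observations)
    transition[i][j] = P(next state j | current state i)
    likelihood[j]  = P(current observation | state j)
    Returns the normalized posterior over the current state."""
    n = len(belief)
    # Predict: push the belief through the transition model.
    predicted = [sum(belief[i] * transition[i][j] for i in range(n))
                 for j in range(n)]
    # Update: weight by the observation likelihood and renormalize.
    unnorm = [predicted[j] * likelihood[j] for j in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]
```

Sequential inference over short state sequences, by contrast, would maintain a posterior over tuples of recent states rather than only the marginal over the current one, which is what makes the two strategies behaviourally dissociable.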

  10. Cross-Sectional Analysis of Time-Dependent Data: Mean-Induced Association in Age-Heterogeneous Samples and an Alternative Method Based on Sequential Narrow Age-Cohort Samples

    ERIC Educational Resources Information Center

    Hofer, Scott M.; Flaherty, Brian P.; Hoffman, Lesa

    2006-01-01

    The effect of time-related mean differences on estimates of association in cross-sectional studies has not been widely recognized in developmental and aging research. Cross-sectional studies of samples varying in age have found moderate to high levels of shared age-related variance among diverse age-related measures. These findings may be…

  11. Lead as a legendary pollutant with emerging concern: Survey of lead in tap water in an old campus building using four sampling methods.

    PubMed

    Ng, Ding-Quan; Liu, Shu-Wei; Lin, Yi-Pin

    2018-09-15

    In this study, a sampling campaign with a total of nine sampling events investigating lead in drinking water was conducted at 7 sampling locations in an old building with lead pipes in service in part of the building on the National Taiwan University campus. This study aims to assess the effectiveness of four different sampling methods, namely first draw sampling, sequential sampling, random daytime sampling and flush sampling, in lead contamination detection. In 3 out of the 7 sampling locations without lead pipe, lead could not be detected (<1.1 μg/L) in most samples regardless of the sampling methods. On the other hand, in the 4 sampling locations where lead pipes still existed, total lead concentrations >10 μg/L were consistently observed in 3 locations using any of the four sampling methods while the remaining location was identified to be contaminated using sequential sampling. High lead levels were consistently measured by the four sampling methods in the 3 locations in which particulate lead was either predominant or comparable to soluble lead. Compared to first draw and random daytime samplings, although flush sampling had a high tendency to reduce total lead in samples in lead-contaminated sites, the extent of lead reduction was location-dependent and not dependent on flush durations between 5 and 10 min. Overall, first draw sampling and random daytime sampling were reliable and effective in determining lead contamination in this study. Flush sampling could reveal the contamination if the extent is severe but tends to underestimate lead exposure risk. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    PubMed

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of ∑(d′)², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
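The sample-size model's invariance prediction follows directly from dividing a fixed pool of noisy samples among the display items, since each item's sensitivity grows with the square root of its share. A small numerical sketch (the sample pool size and weights are illustrative; `unit_dprime` is a hypothetical scaling constant):

```python
import math

def dprimes(total_samples, weights, unit_dprime=1.0):
    """Allocate a fixed pool of samples among items in proportion to
    attention weights w_i, with d'_i = unit_dprime * sqrt(n_i).
    Equal weights give the plain sample-size model; boosting one weight
    mimics the attention-weighted variant, in which one item captures a
    disproportionate share of the resource."""
    z = sum(weights)
    return [unit_dprime * math.sqrt(total_samples * w / z) for w in weights]

def sum_squared_sensitivity(dps):
    """The quantity the model predicts to be invariant across set sizes."""
    return sum(d * d for d in dps)
```

Because the squared sensitivities sum to the total sample budget regardless of how it is split, the invariant is the same for one item, four equally weighted items, or a skewed attention-weighted allocation; what changes is how the sensitivity is distributed across items.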

  13. Evaluating Multiple Imputation Models for the Southern Annual Forest Inventory

    Treesearch

    Gregory A. Reams; Joseph M. McCollum

    1999-01-01

    The USDA Forest Service's Southern Research Station is implementing an annualized forest survey in thirteen states. The sample design is a systematic sample of five interpenetrating grids (panels), where each panel is measured sequentially. For example, panel one information is collected in year one, and panel five in year five. The area representative and time...

  14. Environmental persistence of the nucleopolyhedrosis virus of the gypsy moth, Lymantria dispar L

    Treesearch

    J.D. Podgwaite; Kathleen Stone Shields; R.T. Zerillo; R.B. Bruen

    1979-01-01

A bioassay technique was used to estimate the concentrations of infectious gypsy moth nucleopolyhedrosis virus (NPV) that occur naturally in leaf, bark, litter, and soil samples taken from woodland plots in Connecticut and Pennsylvania. These concentrations were then compared to those in samples taken sequentially after treatment of these plots with NPV. Results...

  15. Comparing Indirect Effects in SEM: A Sequential Model Fitting Method Using Covariance-Equivalent Specifications

    ERIC Educational Resources Information Center

    Chan, Wai

    2007-01-01

    In social science research, an indirect effect occurs when the influence of an antecedent variable on the effect variable is mediated by an intervening variable. To compare indirect effects within a sample or across different samples, structural equation modeling (SEM) can be used if the computer program supports model fitting with nonlinear…

  16. 40 CFR 761.302 - Proportion of the total surface area to sample.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... surface into approximately 1 meter square portions and mark the portions so that they are clearly... surfaces contaminated by a single source of PCBs with a uniform concentration, assign each 1 meter square surface a unique sequential number. (i) For three or fewer 1 meter square areas, sample all of the areas...

  17. 40 CFR 761.302 - Proportion of the total surface area to sample.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... surface into approximately 1 meter square portions and mark the portions so that they are clearly... surfaces contaminated by a single source of PCBs with a uniform concentration, assign each 1 meter square surface a unique sequential number. (i) For three or fewer 1 meter square areas, sample all of the areas...

  18. 40 CFR 761.302 - Proportion of the total surface area to sample.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... surface into approximately 1 meter square portions and mark the portions so that they are clearly... surfaces contaminated by a single source of PCBs with a uniform concentration, assign each 1 meter square surface a unique sequential number. (i) For three or fewer 1 meter square areas, sample all of the areas...

  19. 40 CFR 761.302 - Proportion of the total surface area to sample.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... surface into approximately 1 meter square portions and mark the portions so that they are clearly... surfaces contaminated by a single source of PCBs with a uniform concentration, assign each 1 meter square surface a unique sequential number. (i) For three or fewer 1 meter square areas, sample all of the areas...

  20. 40 CFR 761.302 - Proportion of the total surface area to sample.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... surface into approximately 1 meter square portions and mark the portions so that they are clearly... surfaces contaminated by a single source of PCBs with a uniform concentration, assign each 1 meter square surface a unique sequential number. (i) For three or fewer 1 meter square areas, sample all of the areas...

  1. Teachers' Adoptation Level of Student Centered Education Approach

    ERIC Educational Resources Information Center

    Arseven, Zeynep; Sahin, Seyma; Kiliç, Abdurrahman

    2016-01-01

    The aim of this study is to identify how far the student centered education approach is applied in the primary, middle and high schools in Düzce. Explanatory design which is one type of mixed research methods and "sequential mixed methods sampling" were used in the study. 685 teachers constitute the research sample of the quantitative…

  2. The Cerebellar Deficit Hypothesis and Dyslexic Tendencies in a Non-Clinical Sample

    ERIC Educational Resources Information Center

    Brookes, Rebecca L.; Stirling, John

    2005-01-01

    In order to assess the relationship between cerebellar deficits and dyslexic tendencies in a non-clinical sample, 27 primary school children aged 8-9 completed a cerebellar soft signs battery and were additionally assessed for reading age, sequential memory, picture arrangement and knowledge of common sequences. An average measure of the soft…

  3. Fractionation of trace elements in agricultural soils using ultrasound assisted sequential extraction prior to inductively coupled plasma mass spectrometric determination.

    PubMed

    Matong, Joseph M; Nyaba, Luthando; Nomngongo, Philiswa N

    2016-07-01

The main objectives of this study were to determine the concentrations of fourteen trace elements and to investigate their distribution and contamination levels in selected agricultural soils. An ultrasound-assisted sequential extraction procedure derived from the three-step BCR method was used for fractionation of the trace elements. The total concentrations of trace elements in the soil samples were obtained by digestion with aqua regia. The results for the extractable fractions revealed that most of the target trace elements can be transferred to humans through the food chain, posing serious health risks. Enrichment factor (EF), geo-accumulation index (Igeo), contamination factor (CF), risk assessment code (RAC) and individual contamination factor (ICF) were used to assess the environmental impacts of the trace metals in the soil samples. The EF revealed that Cd was enriched by factors of 3.1-7.2 (except in Soil 1). The Igeo results showed that the soils in the study area were moderately contaminated with Fe and heavily to extremely polluted with Cd. The soil samples from the unplanted field had the highest contamination factor for Cd and the lowest for Pb. Soil 3 showed a high risk for Tl and Cd, with RAC values of 50% or greater. In addition, Fe, Ni, Cu, V, As, Mo (except in Soil 2), Sb and Pb posed low environmental risk. The modified BCR sequential extraction method provided further information about the mobility and environmental implications of the studied trace elements in the study area. Copyright © 2016 Elsevier Ltd. All rights reserved.
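
Pollution indices like those named in the abstract above have conventional textbook definitions; as a minimal sketch, assuming the standard formulas (EF normalized to a crustal reference element, Igeo with the usual 1.5 background-fluctuation factor), they reduce to a few ratios:

```python
import math

def enrichment_factor(c_sample, c_ref_sample, c_bg, c_ref_bg):
    """EF: element/reference-element ratio in the sample, divided by
    the same ratio in background (crustal) material."""
    return (c_sample / c_ref_sample) / (c_bg / c_ref_bg)

def geoaccumulation_index(c_sample, c_bg):
    """Igeo = log2(Cn / (1.5 * Bn)); the factor 1.5 buffers natural
    fluctuation of the background concentration."""
    return math.log2(c_sample / (1.5 * c_bg))

def contamination_factor(c_sample, c_bg):
    """CF: measured concentration over background concentration."""
    return c_sample / c_bg
```

For example, a sample at twice the background concentration gives CF = 2 and Igeo = log2(2/1.5) ≈ 0.42; the concentration values themselves are placeholders, not data from the study.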

  4. Sub-1min separation in sequential injection chromatography for determination of synthetic water-soluble dyes in pharmaceutical formulation.

    PubMed

    Davletbaeva, Polina; Chocholouš, Petr; Bulatov, Andrey; Šatínský, Dalibor; Solich, Petr

    2017-09-05

Sequential Injection Chromatography (SIC) evolved from fast, automated, non-separation Sequential Injection Analysis (SIA) into a chromatographic separation method for multi-analyte determination. However, the chromatography significantly reduces measurement speed (sample throughput). In this paper, a sub-1-min separation on a medium-polar cyano monolithic column (5 mm × 4.6 mm) provided fast and green separation with a sample throughput comparable to non-separation flow methods. The separation of three synthetic water-soluble dyes (sunset yellow FCF, carmoisine and green S) was performed in gradient elution mode (0.02% ammonium acetate, pH 6.7 - water) at a flow rate of 3.0 mL min-1, corresponding to a sample throughput of 30 h-1. Spectrophotometric detection wavelengths were set to 480, 516 and 630 nm with a 10 Hz data collection rate. The performance of the separation was described and discussed (peak capacities 3.48-7.67, peak symmetries 1.72-1.84 and resolutions 1.42-1.88). The method was characterized by the following validation parameters: LODs of 0.15-0.35 mg L-1, LOQs of 0.50-1.25 mg L-1, calibration ranges of 0.50-150.00 mg L-1 (r > 0.998) and repeatability at 10.0 mg L-1 of RSD ≤ 0.98% (n = 6). The method was used for determination of the dyes in a "forest berries" colored pharmaceutical cough-cold formulation. The sample matrix (pharmaceuticals and excipients) did not interfere with the visible-range determination because it was colorless and showed no retention on the separation column. The results proved the concept of a fast and green chromatography approach using a very short medium-polar monolithic column in SIC. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Development of a rapid method for the sequential extraction and subsequent quantification of fatty acids and sugars from avocado mesocarp tissue.

    PubMed

    Meyer, Marjolaine D; Terry, Leon A

    2008-08-27

    Methods devised for oil extraction from avocado (Persea americana Mill.) mesocarp (e.g., Soxhlet) are usually lengthy and require operation at high temperature. Moreover, methods for extracting sugars from avocado tissue (e.g., 80% ethanol, v/v) do not allow for lipids to be easily measured from the same sample. This study describes a new simple method that enabled sequential extraction and subsequent quantification of both fatty acids and sugars from the same avocado mesocarp tissue sample. Freeze-dried mesocarp samples of avocado cv. Hass fruit of different ripening stages were extracted by homogenization with hexane and the oil extracts quantified for fatty acid composition by GC. The resulting filter residues were readily usable for sugar extraction with methanol (62.5%, v/v). For comparison, oil was also extracted using the standard Soxhlet technique and the resulting thimble residue extracted for sugars as before. An additional experiment was carried out whereby filter residues were also extracted using ethanol. Average oil yield using the Soxhlet technique was significantly (P < 0.05) higher than that obtained by homogenization with hexane, although the difference remained very slight, and fatty acid profiles of the oil extracts following both methods were very similar. Oil recovery improved with increasing ripeness of the fruit with minor differences observed in the fatty acid composition during postharvest ripening. After lipid removal, methanolic extraction was superior in recovering sucrose and perseitol as compared to 80% ethanol (v/v), whereas mannoheptulose recovery was not affected by solvent used. The method presented has the benefits of shorter extraction time, lower extraction temperature, and reduced amount of solvent and can be used for sequential extraction of fatty acids and sugars from the same sample.

  6. Cost-effective binomial sequential sampling of western bean cutworm, Striacosta albicosta (Lepidoptera: Noctuidae), egg masses in corn.

    PubMed

    Paula-Moraes, S; Burkness, E C; Hunt, T E; Wright, R J; Hein, G L; Hutchison, W D

    2011-12-01

Striacosta albicosta (Smith) (Lepidoptera: Noctuidae) is a native pest of dry beans (Phaseolus vulgaris L.) and corn (Zea mays L.). As a result of larval feeding damage on corn ears, S. albicosta has a narrow treatment window; thus, early detection of the pest in the field is essential, and egg mass sampling has become a popular monitoring tool. Three action thresholds for field and sweet corn are currently used by crop consultants: 4% of plants infested with egg masses on sweet corn in the silking-tasseling stage, 8% of plants infested with egg masses on field corn with approximately 95% tasseled, and 20% of plants infested with egg masses on field corn during mid-milk-stage corn. The current monitoring recommendation is to sample 20 plants at each of five locations per field (100 plants total). In an effort to develop a more cost-effective sampling plan for S. albicosta egg masses, several alternative binomial sampling plans were developed using Wald's sequential probability ratio test and validated using Resampling for Validation of Sampling Plans (RVSP) software. The benefit-cost ratio also was calculated and used to determine the final selection of sampling plans. Based on the final sampling plans selected for each action threshold, the average sample number required to reach a treat or no-treat decision ranged from 38 to 41 plants per field. This represents a significant savings in sampling cost over the current recommendation of 100 plants.
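
Wald's sequential probability ratio test, the basis of the binomial plans described above, can be sketched as follows. The thresholds p0 and p1 and the error rates here are illustrative placeholders, not the validated values from the study:

```python
import math

def sprt_binomial(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for a binomial proportion: inspect plants one at a
    time and stop as soon as the log-likelihood ratio crosses a boundary.
    Returns ('treat' | 'no-treat' | 'undecided', plants inspected)."""
    upper = math.log((1 - beta) / alpha)   # cross above -> infestation >= p1
    lower = math.log(beta / (1 - alpha))   # cross below -> infestation <= p0
    llr, n = 0.0, 0
    for infested in samples:               # 1 = egg mass present, 0 = absent
        n += 1
        if infested:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "treat", n
        if llr <= lower:
            return "no-treat", n
    return "undecided", n                  # field exhausted before a decision

# A run of uninfested plants crosses the lower boundary after ~34 plants:
decision, n_inspected = sprt_binomial([0] * 100, p0=0.04, p1=0.12)
```

Because sampling stops at the first boundary crossing, heavily or lightly infested fields are decided after far fewer plants than a fixed 100-plant census, which is the source of the cost savings reported in the abstract.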

  7. Particle filters, a quasi-Monte-Carlo-solution for segmentation of coronaries.

    PubMed

    Florin, Charles; Paragios, Nikos; Williams, Jim

    2005-01-01

    In this paper we propose a Particle Filter-based approach for the segmentation of coronary arteries. To this end, successive planes of the vessel are modeled as unknown states of a sequential process. Such states consist of the orientation, position, shape model and appearance (in statistical terms) of the vessel that are recovered in an incremental fashion, using a sequential Bayesian filter (Particle Filter). In order to account for bifurcations and branchings, we consider a Monte Carlo sampling rule that propagates in parallel multiple hypotheses. Promising results on the segmentation of coronary arteries demonstrate the potential of the proposed approach.
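
The predict-weight-resample cycle at the core of such a particle filter can be illustrated with a deliberately simple 1-D tracking example. The random-walk motion model and Gaussian observation model here are generic stand-ins, not the vessel model of the paper:

```python
import math
import random

def sir_step(particles, observation, proc_std=0.2, obs_std=0.5):
    """One sampling-importance-resampling (SIR) cycle for a 1-D
    random-walk state observed with additive Gaussian noise."""
    # Predict: propagate each particle through the motion model
    particles = [x + random.gauss(0.0, proc_std) for x in particles]
    # Weight: likelihood of the observation under each particle
    weights = [math.exp(-0.5 * ((observation - x) / obs_std) ** 2)
               for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample: duplicate high-weight particles, drop low-weight ones
    return random.choices(particles, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(-5.0, 5.0) for _ in range(500)]
for z in [0.0, 0.1, 0.2, 0.3]:          # a short stream of observations
    particles = sir_step(particles, z)
estimate = sum(particles) / len(particles)
```

Propagating the whole particle cloud is what lets the method of the paper carry multiple hypotheses in parallel through bifurcations: distinct branches simply survive as distinct clusters of particles.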

  8. Novel Designs of Quantum Reversible Counters

    NASA Astrophysics Data System (ADS)

    Qi, Xuemei; Zhu, Haihong; Chen, Fulong; Zhu, Junru; Zhang, Ziyang

    2016-11-01

Reversible logic, as an interesting and important issue, has been widely used in designing combinational and sequential circuits for low-power and high-speed computation. Though a significant amount of work has been done on reversible combinational logic, the realization of reversible sequential circuits is still at a premature stage. The reversible counter is not only an important part of the sequential circuit but also an essential part of the quantum circuit system. In this paper, we designed two kinds of novel reversible counters. To construct the counters, an innovative reversible T Flip-flop Gate (TFG), T flip-flop block (T_FF) and JK flip-flop block (JK_FF) are proposed. Based on these blocks and some existing reversible gates, a 4-bit binary-coded decimal (BCD) counter and a controlled Up/Down synchronous counter are designed. These counters have been modeled and verified with the Verilog hardware description language (Verilog HDL), and the simulation results validate their logic structures. Compared to existing designs in terms of quantum cost (QC), delay (DL) and garbage outputs (GBO), our designs perform better. They can therefore serve as important storage components in future low-power computing systems.

  9. Enduring Advantages of Early Cochlear Implantation for Spoken Language Development

    PubMed Central

    Geers, Ann E.; Nicholas, Johanna G.

    2013-01-01

    Purpose To determine whether the precise age of implantation (AOI) remains an important predictor of spoken language outcomes in later childhood for those who received a cochlear implant (CI) between 12–38 months of age. Relative advantages of receiving a bilateral CI after age 4.5, better pre-CI aided hearing, and longer CI experience were also examined. Method Sixty children participated in a prospective longitudinal study of outcomes at 4.5 and 10.5 years of age. Twenty-nine children received a sequential second CI. Test scores were compared to normative samples of hearing age-mates and predictors of outcomes identified. Results Standard scores on language tests at 10.5 years of age remained significantly correlated with age of first cochlear implantation. Scores were not associated with receipt of a second, sequentially-acquired CI. Significantly higher scores were achieved for vocabulary as compared with overall language, a finding not evident when the children were tested at younger ages. Conclusion Age-appropriate spoken language skills continued to be more likely with younger AOI, even after an average of 8.6 years of additional CI use. Receipt of a second implant between ages 4–10 years and longer duration of device use did not provide significant added benefit. PMID:23275406

  10. The Potential of Sequential Extraction in the Characterisation and Management of Wastes from Steel Processing: A Prospective Review

    PubMed Central

    Rodgers, Kiri J.; Hursthouse, Andrew; Cuthbert, Simon

    2015-01-01

As waste management regulations become more stringent, yet demand for resources continues to increase, there is a pressing need for innovative management techniques and more sophisticated supporting analysis techniques. Sequential extraction (SE) analysis, a technique previously applied to soils and sediments, offers the potential to gain a better understanding of the composition of solid wastes. SE attempts to classify potentially toxic elements (PTEs) by their associations with phases or fractions in waste, with the aim of improving resource use and reducing negative environmental impacts. In this review we explain how SE can be applied to steel wastes. These present challenges due to differences in sample characteristics compared with materials to which SE has been traditionally applied, specifically chemical composition, particle size and pH buffering capacity, which are critical when identifying a suitable SE method. We highlight the importance of delineating iron-rich phases, and find that the commonly applied BCR (the Community Bureau of Reference) extraction method is problematic due to difficulties with zinc speciation (a critical steel waste constituent); hence a substantially modified sequential extraction procedure (SEP) is necessary to deal with the particular characteristics of steel wastes. Successful development of SE for steel wastes could have wider implications, e.g., for the sustainable management of fly ash and mining wastes. PMID:26393631

  11. The Potential of Sequential Extraction in the Characterisation and Management of Wastes from Steel Processing: A Prospective Review.

    PubMed

    Rodgers, Kiri J; Hursthouse, Andrew; Cuthbert, Simon

    2015-09-18

As waste management regulations become more stringent, yet demand for resources continues to increase, there is a pressing need for innovative management techniques and more sophisticated supporting analysis techniques. Sequential extraction (SE) analysis, a technique previously applied to soils and sediments, offers the potential to gain a better understanding of the composition of solid wastes. SE attempts to classify potentially toxic elements (PTEs) by their associations with phases or fractions in waste, with the aim of improving resource use and reducing negative environmental impacts. In this review we explain how SE can be applied to steel wastes. These present challenges due to differences in sample characteristics compared with materials to which SE has been traditionally applied, specifically chemical composition, particle size and pH buffering capacity, which are critical when identifying a suitable SE method. We highlight the importance of delineating iron-rich phases, and find that the commonly applied BCR (the Community Bureau of Reference) extraction method is problematic due to difficulties with zinc speciation (a critical steel waste constituent); hence a substantially modified sequential extraction procedure (SEP) is necessary to deal with the particular characteristics of steel wastes. Successful development of SE for steel wastes could have wider implications, e.g., for the sustainable management of fly ash and mining wastes.

  12. Actively learning human gaze shifting paths for semantics-aware photo cropping.

    PubMed

    Zhang, Luming; Gao, Yue; Ji, Rongrong; Xia, Yingjie; Dai, Qionghai; Li, Xuelong

    2014-05-01

Photo cropping is a widely used tool in the printing industry, photography, and cinematography. Conventional cropping models suffer from three challenges. First, they de-emphasize semantic content, which matters far more than low-level features for photo aesthetics. Second, they impose no sequential ordering, whereas humans view semantically important regions of a photo sequentially. Third, they make it difficult to leverage input from multiple users, which is particularly critical because photo assessment is a highly subjective task. To address these challenges, this paper proposes semantics-aware photo cropping, which crops a photo by simulating the process of humans sequentially perceiving its semantically important regions. We first project the local features (graphlets in this paper) onto a semantic space constructed from the category information of the training photos. An efficient learning algorithm is then derived to sequentially select semantically representative graphlets of a photo; the selection process can be interpreted as a path that simulates humans actively perceiving semantics in a photo. Furthermore, we learn a prior distribution over such active graphlet paths from training photos marked as aesthetically pleasing by multiple users. The learned priors enforce the active graphlet path of a test photo to be maximally similar to those of the training photos. Experimental results show that: 1) the active graphlet path accurately predicts human gaze shifting, and is thus more indicative of photo aesthetics than conventional saliency maps, and 2) the cropped photos produced by our approach outperform those of its competitors in both qualitative and quantitative comparisons.

  13. [Absorption and metabolism of Chuanxiong Rhizoma decoction with multi-component sequential metabolism method].

    PubMed

    Liu, Yang; Luo, Zhi-Qiang; Lv, Bei-Ran; Zhao, Hai-Yu; Dong, Ling

    2016-04-01

The multiple components in Chinese herbal medicines (CHMs) undergo complex absorption and metabolism before entering the blood system. Previous studies have often focused on the components found in blood; however, the dynamic and sequential absorption and metabolism process following multi-component oral administration has not been studied. In this study, the in situ closed-loop method combined with LC-MS techniques was employed to study the sequential processing of Chuanxiong Rhizoma decoction (RCD). A total of 14 major components were identified in RCD. Among them, ferulic acid, senkyunolide J, senkyunolide I, senkyunolide F, senkyunolide G, and butylidenephthalide were detected in all of the samples, indicating that these six components can be absorbed into blood in prototype form. Butylphthalide, E-ligustilide, Z-ligustilide, cnidilide, senkyunolide A and senkyunolide Q were not detected in any of the samples, suggesting that these six components may not be absorbed, or are metabolized before entering the hepatic portal vein. Senkyunolide H could be metabolized by the liver, while senkyunolide M could be metabolized by both the liver and the intestinal flora. This study clearly demonstrated the changes in the absorption and metabolism process following multi-component oral administration of RCD, converting the static picture of multi-component absorption into a comprehensive, dynamic and continuous view of absorption and metabolism. Copyright© by the Chinese Pharmaceutical Association.

  14. Simultaneous capture and sequential detection of two malarial biomarkers on magnetic microparticles.

    PubMed

    Markwalter, Christine F; Ricks, Keersten M; Bitting, Anna L; Mudenda, Lwiindi; Wright, David W

    2016-12-01

We have developed a rapid magnetic microparticle-based detection strategy for malarial biomarkers Plasmodium lactate dehydrogenase (pLDH) and Plasmodium falciparum histidine-rich protein II (PfHRPII). In this assay, magnetic particles functionalized with antibodies specific for pLDH and PfHRPII as well as detection antibodies with distinct enzymes for each biomarker are added to parasitized lysed blood samples. Sandwich complexes for pLDH and PfHRPII form on the surface of the magnetic beads, which are washed and sequentially re-suspended in detection enzyme substrate for each antigen. The developed simultaneous capture and sequential detection (SCSD) assay detects both biomarkers in samples as low as 2.0 parasites/µl, an order of magnitude below commercially available ELISA kits, has a total incubation time of 35 min, and was found to be reproducible between users over time. This assay provides a simple and efficient alternative to traditional 96-well plate ELISAs, which take 5-8 h to complete and are limited to one analyte. Further, the modularity of the magnetic bead-based SCSD ELISA format could serve as a platform for application to other diseases for which multi-biomarker detection is advantageous. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Estimation of probability of failure for damage-tolerant aerospace structures

    NASA Astrophysics Data System (ADS)

    Halbert, Keith

    The majority of aircraft structures are designed to be damage-tolerant such that safe operation can continue in the presence of minor damage. It is necessary to schedule inspections so that minor damage can be found and repaired. It is generally not possible to perform structural inspections prior to every flight. The scheduling is traditionally accomplished through a deterministic set of methods referred to as Damage Tolerance Analysis (DTA). DTA has proven to produce safe aircraft but does not provide estimates of the probability of failure of future flights or the probability of repair of future inspections. Without these estimates maintenance costs cannot be accurately predicted. Also, estimation of failure probabilities is now a regulatory requirement for some aircraft. The set of methods concerned with the probabilistic formulation of this problem are collectively referred to as Probabilistic Damage Tolerance Analysis (PDTA). The goal of PDTA is to control the failure probability while holding maintenance costs to a reasonable level. This work focuses specifically on PDTA for fatigue cracking of metallic aircraft structures. The growth of a crack (or cracks) must be modeled using all available data and engineering knowledge. The length of a crack can be assessed only indirectly through evidence such as non-destructive inspection results, failures or lack of failures, and the observed severity of usage of the structure. The current set of industry PDTA tools are lacking in several ways: they may in some cases yield poor estimates of failure probabilities, they cannot realistically represent the variety of possible failure and maintenance scenarios, and they do not allow for model updates which incorporate observed evidence. A PDTA modeling methodology must be flexible enough to estimate accurately the failure and repair probabilities under a variety of maintenance scenarios, and be capable of incorporating observed evidence as it becomes available. 
This dissertation describes and develops new PDTA methodologies that directly address the deficiencies of the currently used tools. The new methods are implemented as a free, publicly licensed and open source R software package that can be downloaded from the Comprehensive R Archive Network. The tools consist of two main components. First, an explicit (and expensive) Monte Carlo approach is presented which simulates the life of an aircraft structural component flight-by-flight. This straightforward MC routine can be used to provide defensible estimates of the failure probabilities for future flights and repair probabilities for future inspections under a variety of failure and maintenance scenarios. This routine is intended to provide baseline estimates against which to compare the results of other, more efficient approaches. Second, an original approach is described which models the fatigue process and future scheduled inspections as a hidden Markov model. This model is solved using a particle-based approximation and the sequential importance sampling algorithm, which provides an efficient solution to the PDTA problem. Sequential importance sampling is an extension of importance sampling to a Markov process, allowing for efficient Bayesian updating of model parameters. This model updating capability, the benefit of which is demonstrated, is lacking in other PDTA approaches. The results of this approach are shown to agree with the results of the explicit Monte Carlo routine for a number of PDTA problems. Extensions to the typical PDTA problem, which cannot be solved using currently available tools, are presented and solved in this work. These extensions include incorporating observed evidence (such as non-destructive inspection results), more realistic treatment of possible future repairs, and the modeling of failure involving more than one crack (the so-called continuing damage problem). 
The described hidden Markov model / sequential importance sampling approach to PDTA has the potential to improve aerospace structural safety and reduce maintenance costs by providing a more accurate assessment of the risk of failure and the likelihood of repairs throughout the life of an aircraft.
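
The sequential importance sampling machinery described above can be sketched in miniature: log-weights accumulate across steps as observations arrive, and resampling is triggered only when the effective sample size signals weight degeneracy. The 1-D Gaussian random-walk model below is a hypothetical stand-in for the latent fatigue-crack state, not the dissertation's actual model:

```python
import math
import random

def sis_update(particles, log_weights, observation,
               proc_std=0.3, obs_std=0.5, ess_fraction=0.5):
    """One sequential-importance-sampling step: propagate particles,
    accumulate observation log-likelihood into the weights, and resample
    only when the effective sample size (ESS) drops too low."""
    n = len(particles)
    # Propagate each particle through the (assumed) transition model
    particles = [x + random.gauss(0.0, proc_std) for x in particles]
    # Accumulate the Gaussian observation log-likelihood into the weights
    log_weights = [lw - 0.5 * ((observation - x) / obs_std) ** 2
                   for lw, x in zip(log_weights, particles)]
    # Normalize in a numerically safe way and compute the ESS
    m = max(log_weights)
    w = [math.exp(lw - m) for lw in log_weights]
    total = sum(w)
    w = [wi / total for wi in w]
    ess = 1.0 / sum(wi * wi for wi in w)
    if ess < ess_fraction * n:            # weights degenerate: resample
        particles = random.choices(particles, weights=w, k=n)
        log_weights = [0.0] * n
    return particles, log_weights

random.seed(2)
particles = [random.uniform(0.0, 2.0) for _ in range(400)]
log_w = [0.0] * 400
for z in [1.0, 1.1, 1.2]:                 # observations arriving in sequence
    particles, log_w = sis_update(particles, log_w, z)
m = max(log_w)
w = [math.exp(lw - m) for lw in log_w]
posterior_mean = sum(x * wi for x, wi in zip(particles, w)) / sum(w)
```

Because the weighted particle set approximates the posterior after each observation, new evidence (such as an inspection result) updates the state estimate without re-running the whole simulation, which is the Bayesian-updating advantage the text attributes to this approach.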

  16. Developing a Systematic Corrosion Control Evaluation Approach in Flint

    EPA Science Inventory

    Presentation covers what the projects were that were recommended by the Flint Safe Drinking Water Task Force for corrosion control assessment for Flint, focusing on the sequential sampling project, the pipe rigs, and pipe scale analyses.

  17. Sequential voluntary cough and aspiration or aspiration risk in Parkinson's disease.

    PubMed

    Hegland, Karen Wheeler; Okun, Michael S; Troche, Michelle S

    2014-08-01

    Disordered swallowing, or dysphagia, is almost always present to some degree in people with Parkinson's disease (PD), either causing aspiration or greatly increasing the risk for aspiration during swallowing. This likely contributes to aspiration pneumonia, a leading cause of death in this patient population. Effective airway protection is dependent upon multiple behaviors, including cough and swallowing. Single voluntary cough function is disordered in people with PD and dysphagia. However, the appropriate response to aspirate material is more than one cough, or sequential cough. The goal of this study was to examine voluntary sequential coughing in people with PD, with and without dysphagia. Forty adults diagnosed with idiopathic PD produced two trials of sequential voluntary cough. The cough airflows were obtained using pneumotachograph and facemask and subsequently digitized and recorded. All participants received a modified barium swallow study as part of their clinical care, and the worst penetration-aspiration score observed was used to determine whether the patient had dysphagia. There were significant differences in the compression phase duration, peak expiratory flow rates, and amount of air expired of the sequential cough produced by participants with and without dysphagia. The presence of dysphagia in people with PD is associated with disordered cough function. Sequential cough, which is important in removing aspirate material from large- and smaller-diameter airways, is also impaired in people with PD and dysphagia compared with those without dysphagia. There may be common neuroanatomical substrates for cough and swallowing impairment in PD leading to the co-occurrence of these dysfunctions.

  18. A common mechanism underlies changes of mind about decisions and confidence.

    PubMed

    van den Berg, Ronald; Anandalingam, Kavitha; Zylberberg, Ariel; Kiani, Roozbeh; Shadlen, Michael N; Wolpert, Daniel M

    2016-02-01

Decisions are accompanied by a degree of confidence that a selected option is correct. A sequential sampling framework explains the speed and accuracy of decisions and extends naturally to the confidence that the decision rendered is likely to be correct. However, discrepancies between confidence and accuracy suggest that confidence might be supported by mechanisms dissociated from the decision process. Here we show that this discrepancy can arise naturally because of simple processing delays. When participants were asked to report choice and confidence simultaneously, their confidence, reaction time and a perceptual decision about motion were explained by bounded evidence accumulation. However, we also observed revisions of the initial choice and/or confidence. These changes of mind were explained by a continuation of the mechanism that led to the initial choice. Our findings extend the sequential sampling framework to vacillation about confidence and invite caution in interpreting dissociations between confidence and accuracy.
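
As an illustration of the bounded evidence-accumulation idea (a generic drift-diffusion sketch, not the authors' fitted model): noisy momentary evidence is summed until it crosses one of two decision bounds, yielding both the choice and the decision time, and stronger evidence (larger drift) yields faster, more accurate decisions.

```python
import random

def accumulate(drift, bound=1.0, noise=1.0, dt=0.001, max_t=5.0):
    """Bounded evidence accumulation: integrate noisy evidence until a
    decision bound is crossed. Returns (choice, decision_time);
    choice 0 means no bound was reached before the deadline."""
    x, t = 0.0, 0.0
    while t < max_t:
        # Euler step of a drift-diffusion process
        x += drift * dt + random.gauss(0.0, noise * dt ** 0.5)
        t += dt
        if x >= bound:
            return +1, t       # e.g. chose "rightward motion"
        if x <= -bound:
            return -1, t       # e.g. chose "leftward motion"
    return 0, t

random.seed(3)
trials = [accumulate(drift=1.0) for _ in range(200)]
accuracy = sum(1 for choice, _ in trials if choice == +1) / len(trials)
```

In this framework, confidence can be read out from the accumulated evidence at (or shortly after) bound crossing, which is why post-decision processing delays can naturally produce the changes of mind about choice and confidence that the study reports.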

  19. University Students' Views on the Education and Teaching of Civilization History: Bayburt University Education Faculty Sample

    ERIC Educational Resources Information Center

    Elban, Mehmet

    2017-01-01

The purpose of this research is to evaluate the teaching and educational activities in the civilization history lesson. The research model is the exploratory sequential design, one of the mixed-methods research patterns. The appropriate sampling method was used in the research. The qualitative data of the research were collected from 26 students through a…

  20. Development and validation of a fixed-precision sequential sampling plan for estimating brood adult density of Dendroctonus pseudotsugae (Coleoptera: Scolytidae)

    Treesearch

    Jose F. Negron; Willis C. Schaupp; Erik Johnson

    2000-01-01

    The Douglas-fir beetle, Dendroctonus pseudotsugae Hopkins, attacks Douglas-fir, Pseudotsuga menziesii (Mirb.) Franco (Pinaceae), throughout western North America. Periodic outbreaks cause increased mortality of its host. Land managers and forest health specialists often need to determine population trends of this insect. Bark samples were obtained from 326 trees...

  1. XANES Spectroscopic Analysis of Phosphorus Speciation in Alum-Amended Poultry Litter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

Seiter, J.; Staats-Borda, K.; Ginder-Vogel, M.

    2008-01-01

Aluminum sulfate (alum; Al2(SO4)3·14H2O) is used as a chemical treatment of poultry litter to reduce the solubility and release of phosphate, thereby minimizing the impacts on adjacent aquatic ecosystems when poultry litter is land applied as a crop fertilizer. The objective of this study was to determine, through the use of X-ray absorption near edge structure (XANES) spectroscopy and sequential extraction, how alum amendments alter P distribution and solid-state speciation within the poultry litter system. Our results indicate that traditional sequential fractionation procedures may not account for variability in P speciation in heterogeneous animal manures. Analysis shows that NaOH-extracted P in alum-amended litters is predominantly organic (~80%), whereas in the control samples, >60% of NaOH-extracted P was inorganic P. Linear least squares fitting (LLSF) analysis of spectra collected of sequentially extracted litters showed that the P is present in inorganic (P sorbed on Al oxides, calcium phosphates) and organic forms (phytic acid, polyphosphates, and monoesters) in alum- and non-alum-amended poultry litter. When determining land application rates of poultry litter, all of these compounds must be considered, especially organic P. Results of the sequential extractions in conjunction with LLSF suggest that no P species is completely removed by a single extractant. Rather, there is a continuum of removal as extractant strength increases. Overall, alum-amended litters exhibited higher proportions of Al-bound P species and phytic acid, whereas untreated samples contained Ca-P minerals and organic P compounds. This study provides in situ information about P speciation in the poultry litter solid and about P availability in alum- and non-alum-treated poultry litter that will dictate P losses to ground and surface water systems.

  2. Evaluation of the procedure 1A component of the 1980 US/Canada wheat and barley exploratory experiment

    NASA Technical Reports Server (NTRS)

    Chapman, G. M. (Principal Investigator); Carnes, J. G.

    1981-01-01

    Several techniques which use clusters generated by a new clustering algorithm, CLASSY, are proposed as alternatives to random sampling to obtain greater precision in crop proportion estimation: (1) Proportional Allocation/Relative Count Estimator (PA/RCE) uses proportional allocation of dots to clusters on the basis of cluster size and a relative-count cluster-level estimate; (2) Proportional Allocation/Bayes Estimator (PA/BE) uses proportional allocation of dots to clusters and a Bayesian cluster-level estimate; and (3) Bayes Sequential Allocation/Bayesian Estimator (BSA/BE) uses sequential allocation of dots to clusters and a Bayesian cluster-level estimate. Clustering is an effective method for making proportion estimates. It is estimated that, to obtain the same precision with random sampling as obtained by the proportional sampling of 50 dots with an unbiased estimator, samples of 85 or 166 would need to be taken if dot sets with AI labels (integrated procedure) or ground truth labels, respectively, were input. Dot reallocation provides dot sets that are unbiased. It is recommended that these proportion estimation techniques be maintained, particularly the PA/BE because it provides the greatest precision.
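
The PA/RCE scheme described above can be sketched in a few lines: dots are allocated to clusters in proportion to cluster size, and the crop proportion estimate is the size-weighted average of the per-cluster labeled proportions. This is a hypothetical illustration of the general idea, not the CLASSY implementation; the function name, toy cluster sizes, and labeling function are all assumptions.

```python
import random

def pa_rce(cluster_sizes, label_fn, n_dots):
    """Proportional Allocation / Relative Count Estimator (sketch).

    Dots are allocated to clusters proportionally to cluster size;
    each cluster's labeled proportion is combined with size weights.
    """
    total = sum(cluster_sizes)
    estimate = 0.0
    for k, size in enumerate(cluster_sizes):
        n_k = max(1, round(n_dots * size / total))   # proportional allocation
        labels = [label_fn(k) for _ in range(n_k)]   # a labeler classifies each dot
        p_k = sum(labels) / n_k                      # relative count in cluster k
        estimate += (size / total) * p_k             # size-weighted combination
    return estimate

random.seed(0)
# Toy "truth": cluster 0 is 90% wheat, cluster 1 is 10% wheat.
truth = {0: 0.9, 1: 0.1}
est = pa_rce([700, 300], lambda k: random.random() < truth[k], 50)
```

Replacing the per-cluster relative count p_k with a Bayesian cluster-level estimate would give the PA/BE variant described in the abstract.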

  3. Rapid sequential determination of arsenic and selenium in waters and plant digests by hydride generation inductively coupled plasma-mass spectrometry

    NASA Astrophysics Data System (ADS)

    Menegário, Amauri A.; Giné, Maria Fernanda

    2000-04-01

    A synchronised flow system with hydride generation coupled to ICP-MS is proposed for the sequential determination of As and Se in natural waters and plant digests. The alternated mixing of the sample solution with thiourea or HCl for the determination of As or Se under optimized conditions was achieved using a flow commutator before the reaction with NaBH4. The on-line addition of thiourea promoted the quantitative reduction of As(V) to As(III), thus enhancing sensitivity and precision. The selenium pre-reduction from Se(VI) to Se(IV) was produced by heating the sample with HCl, and the hydride generation was performed in 4 mol l−1 HCl, thus avoiding interference from thiourea. The system allowed the analysis of 20 samples h−1 with LOD values of 0.02 μg l−1 As and 0.03 μg l−1 Se. Results were in agreement with the certified values at the 95% confidence level for reference waters from the Canadian National Water Research Institute and plant samples from the National Institute of Standards and Technology (NIST).

  4. Mercury Speciation by X-ray Absorption Fine Structure Spectroscopy and Sequential Chemical Extractions: A Comparison of Speciation Methods

    USGS Publications Warehouse

    Kim, C.S.; Bloom, N.S.; Rytuba, J.J.; Brown, Gordon E.

    2003-01-01

    Determining the chemical speciation of mercury in contaminated mining and industrial environments is essential for predicting its solubility, transport behavior, and potential bioavailability, as well as for designing effective remediation strategies. In this study, two techniques for determining Hg speciation, X-ray absorption fine structure (XAFS) spectroscopy and sequential chemical extractions (SCE), are independently applied to a set of samples with Hg concentrations ranging from 132 to 7539 mg/kg to determine whether the two techniques provide comparable Hg speciation results. Generally, the proportions of insoluble HgS (cinnabar, metacinnabar) and HgSe identified by XAFS correlate well with the proportion of Hg removed in the aqua regia extraction demonstrated to remove HgS and HgSe. However, statistically significant (>10%) differences are observed in samples containing more soluble Hg-containing phases (HgCl2, HgO, Hg3S2O4). Such differences may be related to matrix, particle size, or crystallinity effects, which could affect the apparent solubility of the Hg phases present. In more highly concentrated samples, microscopy techniques can help characterize the Hg-bearing species in complex multiphase natural samples.

  5. Moving forward: response to "Studying eyewitness investigations in the field".

    PubMed

    Ross, Stephen J; Malpass, Roy S

    2008-02-01

    Field studies of eyewitness identification are richly confounded. Determining which confounds undermine interpretation is important. The blind administration confound in the Illinois study is said to undermine its value for understanding the relative utility of simultaneous and sequential lineups. Most criticisms of the Illinois study focus on filler identifications, and related inferences about the importance of the blind confound. We find no convincing evidence supporting this line of attack and wonder at filler identifications as the major line of criticism. More debilitating problems impede using the Illinois study to address the simultaneous versus sequential lineup controversy: the inability to estimate guilt independent of identification evidence, the lack of protocol compliance monitoring, and the assessment of lineup quality. Moving forward requires removing these limitations.

  6. Effect of Drying on Heavy Metal Fraction Distribution in Rice Paddy Soil

    PubMed Central

    Qi, Yanbing; Huang, Biao; Darilek, Jeremy Landon

    2014-01-01

    An understanding of how redox conditions affect soil heavy metal fractions in rice paddies is important due to its implications for heavy metal mobility and plant uptake. Rice paddy soil samples routinely undergo oxidation prior to heavy metal analysis. Fraction distribution of Cu, Pb, Ni, and Cd from paddy soil with a wide pH range was investigated. Samples were both dried according to standard protocols and also preserved under anaerobic conditions throughout the sampling and analysis process, and heavy metals were then sequentially extracted for the exchangeable and carbonate-bound fraction (acid soluble fraction), iron and manganese oxide-bound fraction (reducible fraction), organic-bound fraction (oxidizable fraction), and residual fraction. Fractions were affected by redox conditions across all pH ranges. Drying decreased the reducible fraction of all heavy metals. The Cu residual, Pb oxidizable, Cd residual, and Ni residual fractions increased by 25%, 33%, 35%, and >60%, respectively. The Pb residual, Ni acid soluble, and Cd oxidizable fractions decreased by 33%, 25%, and 15%, respectively. Drying paddy soil prior to heavy metal analysis overestimated Pb and underestimated Cu, Ni, and Cd. In future studies, samples should be stored under injected N2 gas to maintain the redox potential of the soil prior to heavy metal analysis, and the correlation between heavy metal fraction distribution under field conditions and in air-dried samples should be investigated. PMID:24823670

  7. Detection of ricin in food using electrochemiluminescence-based technology.

    PubMed

    Garber, Eric A E; O'Brien, Thomas W

    2008-01-01

    Ricin is a toxic ribosome inactivating protein (RIP-II) present in beans of the castor plant, Ricinus communis. Its potential as a biodefense threat has made the rapid, sensitive detection of ricin in food important to the U.S. Food and Drug Administration. Samples of juice, dairy products, soda, vegetables, bakery products, chocolate, and condiments were spiked with varying concentrations of ricin and analyzed using a 96-well format, electrochemiluminescence (ECL) immunoassay. Assay configurations included the use of a monoclonal capture antibody coupled with either a polyclonal or monoclonal detector antibody. The samples and detector antibodies were either added sequentially or in combination during the capture step. Using the polyclonal antibody, 0.04 ng/mL ricin was detected in analytical samples prepared from several beverages. By simultaneously incubating the sample with detector antibody, it was possible to decrease the assay time to a single 20 min incubation step with a limit of detection <10 ng/mL. Assays run according to this single incubation step exhibited a hook effect (decrease in signal at high concentrations of ricin), but because of the large signal-to-noise ratio associated with the ECL assay, the response remained above background and detectable. Thus, the ECL assay was uniquely suited for the screening of samples for ricin.

  8. Analyzing Kernel Matrices for the Identification of Differentially Expressed Genes

    PubMed Central

    Xia, Xiao-Lei; Xing, Huanlai; Liu, Xueqin

    2013-01-01

    One of the most important applications of microarray data is the class prediction of biological samples. For this purpose, statistical tests have often been applied to identify the differentially expressed genes (DEGs), followed by the employment of state-of-the-art learning machines, including the Support Vector Machine (SVM) in particular. The SVM is a typical sample-based classifier whose performance comes down to how discriminant the samples are. However, DEGs identified by statistical tests are not guaranteed to result in a training dataset composed of discriminant samples. To tackle this problem, a novel gene ranking method, namely Kernel Matrix Gene Selection (KMGS), is proposed. The rationale of the method, which is rooted in the fundamental ideas of the SVM algorithm, is described. The notion of "the separability of a sample", which is estimated by performing -like statistics on each column of the kernel matrix, is first introduced. The separability of a classification problem is then measured, from which the significance of a specific gene is deduced. Also described is a method of Kernel Matrix Sequential Forward Selection (KMSFS), which shares the KMGS method's essential ideas but proceeds in a greedy manner. On three public microarray datasets, our proposed algorithms achieved noticeably competitive performance in terms of the B.632+ error rate. PMID:24349110
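
One plausible reading of the column-wise separability idea can be sketched as follows: for each sample (each column of the kernel matrix), compare its kernel similarities to same-class versus other-class samples with a simple two-sample t-like statistic. This is an illustrative sketch under stated assumptions, not the published KMGS algorithm; the RBF kernel, the gamma value, and the exact form of the statistic are all assumptions.

```python
import math
import random

def rbf(x, z, gamma=0.5):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def separability(X, y):
    """Per-sample separability scores from the kernel matrix (sketch).

    For column j, compare similarities to same-class vs other-class
    samples with a t-like statistic: large positive values mean sample j
    sits clearly inside its own class.
    """
    n = len(X)
    K = [[rbf(X[i], X[j]) for j in range(n)] for i in range(n)]
    scores = []
    for j in range(n):
        same = [K[i][j] for i in range(n) if i != j and y[i] == y[j]]
        diff = [K[i][j] for i in range(n) if y[i] != y[j]]
        m_s, m_d = sum(same) / len(same), sum(diff) / len(diff)
        v_s = sum((k - m_s) ** 2 for k in same) / len(same)
        v_d = sum((k - m_d) ** 2 for k in diff) / len(diff)
        scores.append((m_s - m_d) /
                      math.sqrt(v_s / len(same) + v_d / len(diff) + 1e-12))
    return scores

random.seed(0)
# Two well-separated synthetic classes in 2-D.
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(20)] + \
    [[random.gauss(3, 1), random.gauss(3, 1)] for _ in range(20)]
y = [0] * 20 + [1] * 20
scores = separability(X, y)
```

Ranking genes would then amount to recomputing such scores with and without a candidate gene in the feature set and keeping the genes that raise overall separability.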

  9. Adaptive decision making in a dynamic environment: a test of a sequential sampling model of relative judgment.

    PubMed

    Vuckovic, Anita; Kwantes, Peter J; Neal, Andrew

    2013-09-01

    Research has identified a wide range of factors that influence performance in relative judgment tasks. However, the findings from this research have been inconsistent. Studies have varied with respect to the identification of causal variables and the perceptual and decision-making mechanisms underlying performance. Drawing on the ecological rationality approach, we present a theory of the judgment and decision-making processes involved in a relative judgment task that explains how people judge a stimulus and adapt their decision process to accommodate their own uncertainty associated with those judgments. Undergraduate participants performed a simulated air traffic control conflict detection task. Across two experiments, we systematically manipulated variables known to affect performance. In the first experiment, we manipulated the relative distances of aircraft to a common destination while holding aircraft speeds constant. In a follow-up experiment, we introduced a direct manipulation of relative speed. We then fit a sequential sampling model to the data, and used the best-fitting parameters to infer the decision-making processes responsible for performance. Findings were consistent with the theory that people adapt to their own uncertainty by adjusting their criterion and the amount of time they take to collect evidence in order to make a more accurate decision. From a practical perspective, the paper demonstrates that one can use a sequential sampling model to understand performance in a dynamic environment, allowing one to make sense of complex patterns of empirical findings that would otherwise be difficult to interpret using standard statistical analyses. PsycINFO Database Record (c) 2013 APA, all rights reserved.
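
The core of a sequential sampling account can be illustrated with a minimal random-walk accumulator: noisy evidence samples are summed until a decision criterion is reached, and raising the criterion trades longer decision times for higher accuracy, which is exactly the adaptation the study describes. All parameter values and response labels below are illustrative, not fitted values from the paper.

```python
import random

def decide(drift, criterion, noise=1.0, max_steps=10_000, rng=random):
    """Random-walk sequential sampling model (sketch).

    Accumulate noisy evidence with a given drift until it crosses
    +criterion ("conflict") or -criterion ("no conflict").
    Returns (response, number of evidence samples taken).
    """
    evidence = 0.0
    for step in range(1, max_steps + 1):
        evidence += drift + rng.gauss(0.0, noise)
        if evidence >= criterion:
            return "conflict", step
        if evidence <= -criterion:
            return "no conflict", step
    return "undecided", max_steps

rng = random.Random(1)
# Positive drift means "conflict" is the correct response.
low = [decide(0.2, 2.0, rng=rng) for _ in range(500)]    # low criterion
high = [decide(0.2, 6.0, rng=rng) for _ in range(500)]   # high criterion
acc_low = sum(r == "conflict" for r, _ in low) / 500
acc_high = sum(r == "conflict" for r, _ in high) / 500
```

With the higher criterion the model is more accurate but takes more evidence samples per decision, the speed-accuracy adjustment inferred from the fitted parameters in the abstract.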

  10. A rapid method for the sequential separation of polonium, plutonium, americium and uranium in drinking water.

    PubMed

    Lemons, B; Khaing, H; Ward, A; Thakur, P

    2018-06-01

    A new sequential separation method for the determination of polonium and actinides (Pu, Am and U) in drinking water samples has been developed that can be used for emergency response or routine water analyses. For the first time, the application of a TEVA chromatography column in the sequential separation of polonium and plutonium has been studied. This method utilizes a rapid Fe3+ co-precipitation step to remove matrix interferences, followed by plutonium oxidation state adjustment to Pu4+ and an incubation period of ~1 h at 50-60 °C to allow Po2+ to oxidize to Po4+. The polonium and plutonium were then separated on a TEVA column, while separation of americium from uranium was performed on a TRU column. After separation, polonium was micro-precipitated with copper sulfide (CuS), while the actinides were micro co-precipitated using neodymium fluoride (NdF3) for counting by alpha spectrometry. The method is simple, robust and can be performed quickly, with excellent removal of interferences, high chemical recovery and very good alpha peak resolution. The efficiency and reliability of the procedures were tested by using spiked samples. The effect of several transition metals (Cu2+, Pb2+, Fe3+, Fe2+, and Ni2+) on the performance of this method was also assessed to evaluate potential matrix effects. Studies indicate that the presence of up to 25 mg of these cations in the samples had no adverse effect on the recovery or the resolution of the polonium alpha peaks. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Optimizing trial design in pharmacogenetics research: comparing a fixed parallel group, group sequential, and adaptive selection design on sample size requirements.

    PubMed

    Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit

    2013-01-01

    Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
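
The sample size advantage of a group sequential design over a fixed design comes from the chance to stop at an interim analysis. A minimal two-stage simulation, with arbitrary illustrative boundaries rather than the boundaries used in the study, shows the expected sample size dropping below the fixed-design maximum:

```python
import random
import statistics

def two_stage_trial(effect, n_per_stage, z_eff=2.2, z_fut=0.0, rng=random):
    """One simulated two-stage group sequential trial (sketch).

    An interim z-test on the first n_per_stage unit-variance observations
    stops early for efficacy (z >= z_eff) or futility (z <= z_fut);
    otherwise the trial continues to the full 2 * n_per_stage patients.
    Boundaries are illustrative, not a calibrated spending function.
    """
    stage1 = [rng.gauss(effect, 1.0) for _ in range(n_per_stage)]
    z1 = statistics.fmean(stage1) * n_per_stage ** 0.5
    if z1 >= z_eff or z1 <= z_fut:
        return n_per_stage              # stopped at the interim analysis
    return 2 * n_per_stage              # continued to full enrollment

rng = random.Random(42)
sizes = [two_stage_trial(0.5, 50, rng=rng) for _ in range(2000)]
expected_n = statistics.fmean(sizes)    # below 100, the fixed-design size
```

Under a true effect of 0.5 standard deviations, most simulated trials stop at the interim, so the expected sample size is well under the 100-patient fixed design with the same maximum; an adaptive selection design would additionally restrict stage 2 to the marker positive subgroup.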

  12. Heterogeneous autoregressive model with structural break using nearest neighbor truncation volatility estimators for DAX.

    PubMed

    Chin, Wen Cheong; Lee, Min Cherng; Yap, Grace Lee Ching

    2016-01-01

    High frequency financial data modelling has become one of the important research areas in the field of financial econometrics. However, possible structural breaks in volatile financial time series often trigger inconsistency issues in volatility estimation. In this study, we propose a structural-break heavy-tailed heterogeneous autoregressive (HAR) volatility econometric model with the enhancement of jump-robust estimators. The breakpoints in the volatility are captured by dummy variables after detection by the Bai-Perron sequential multiple-breakpoint procedure. In order to further deal with possible abrupt jumps in the volatility, the jump-robust volatility estimators are composed using the nearest neighbor truncation approach, namely the minimum and median realized volatility. With the structural-break improvements in both the model and the volatility estimators, the empirical findings show that the modified HAR model provides the best in-sample and out-of-sample forecast performance compared with the standard HAR models. Accurate volatility forecasts directly benefit risk management and investment portfolio analysis.
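
The backbone of such a model is the HAR regression of next-day realized volatility on daily, weekly (5-day), and monthly (22-day) averages, here extended with a single structural-break dummy at an assumed known breakpoint. The synthetic data and plain-OLS fit below are illustrative only; the paper's jump-robust MinRV/MedRV estimators and the Bai-Perron break detection are not reproduced.

```python
import numpy as np

# HAR regression with a structural-break dummy (sketch):
#   RV[t+1] = b0 + b_d*RV_d[t] + b_w*RV_w[t] + b_m*RV_m[t] + b_brk*D[t] + e
# where RV_w and RV_m are 5- and 22-day rolling means of realized
# volatility and D marks the post-break regime. Data are synthetic.
rng = np.random.default_rng(0)
T, brk = 600, 300
rv = np.abs(rng.normal(1.0, 0.2, T))
rv[brk:] += 0.8                                   # level shift at the break

def roll_mean(x, w):
    """Trailing rolling mean with an expanding window at the start."""
    return np.array([x[max(0, i - w + 1):i + 1].mean() for i in range(len(x))])

rv_d = rv[:-1]                                    # daily component
rv_w = roll_mean(rv, 5)[:-1]                      # weekly component
rv_m = roll_mean(rv, 22)[:-1]                     # monthly component
dummy = (np.arange(T - 1) >= brk).astype(float)   # post-break regime
X = np.column_stack([np.ones(T - 1), rv_d, rv_w, rv_m, dummy])
y = rv[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
r2 = 1 - resid.var() / y.var()
```

Because the dummy absorbs the level shift, the in-sample fit stays consistent across regimes, which is the inconsistency problem the break terms are meant to address.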

  13. Cellular and Molecular Changes in Orthodontic Tooth Movement

    PubMed Central

    Zainal Ariffin, Shahrul Hisham; Yamamoto, Zulham; Zainol Abidin, Intan Zarina; Megat Abdul Wahab, Rohaya; Zainal Ariffin, Zaidah

    2011-01-01

    Tooth movement induced by orthodontic treatment can cause sequential reactions involving the periodontal tissue and alveolar bone, resulting in the release of numerous substances from the dental tissues and surrounding structures. To better understand the biological processes involved in orthodontic treatment, improve treatment, and reduce adverse side effects, several of these substances have been proposed as biomarkers. Potential biological markers can be collected from different tissue samples, and suitable sampling is important to accurately reflect biological processes. This paper covers the tissue changes that are involved during orthodontic tooth movement, such as at the compression region (involving osteoclasts), the tension region (involving osteoblasts), the dental root, and pulp tissues. In addition, the involvement of stem cells and their development towards osteoblasts and osteoclasts during orthodontic treatment is also explained. Several possible biomarkers representing these biological changes during specific phenomena, that is, bone remodelling (formation and resorption), inflammation, and root resorption, have also been proposed. The knowledge of these biomarkers could be used in accelerating orthodontic treatment. PMID:22125437

  14. Immediately sequential bilateral cataract surgery: advantages and disadvantages.

    PubMed

    Singh, Ranjodh; Dohlman, Thomas H; Sun, Grace

    2017-01-01

    The number of cataract surgeries performed globally will continue to rise to meet the needs of an aging population. This increased demand will require healthcare systems and providers to find new surgical efficiencies while maintaining excellent surgical outcomes. Immediately sequential bilateral cataract surgery (ISBCS) has been proposed as a solution and is increasingly being performed worldwide. The purpose of this review is to discuss the advantages and disadvantages of ISBCS. When appropriate patient selection occurs and guidelines are followed, ISBCS is comparable with delayed sequential bilateral cataract surgery in long-term patient satisfaction, visual acuity and complication rates. In addition, the risk of bilateral postoperative endophthalmitis and concerns of poorer refractive outcomes have not been supported by the literature. ISBCS is cost-effective for the patient, healthcare payors and society, but current reimbursement models in many countries create significant financial barriers for facilities and surgeons. As demand for cataract surgery rises worldwide, ISBCS will become increasingly important as an alternative to delayed sequential bilateral cataract surgery. Advantages include potentially decreased wait times for surgery, patient convenience and cost savings for healthcare payors. Although they are comparable in visual acuity and complication rates, hurdles that prevent wide adoption include liability concerns as ISBCS is not an established standard of care, economic constraints for facilities and surgeons and inability to fine-tune intraocular lens selection in the second eye. Given these considerations, an open discussion regarding the advantages and disadvantages of ISBCS is important for appropriate patient selection.

  15. Secretome analysis of Trichoderma reesei and Aspergillus niger cultivated by submerged and sequential fermentation processes: Enzyme production for sugarcane bagasse hydrolysis.

    PubMed

    Florencio, Camila; Cunha, Fernanda M; Badino, Alberto C; Farinas, Cristiane S; Ximenes, Eduardo; Ladisch, Michael R

    2016-08-01

    Cellulases and hemicellulases from Trichoderma reesei and Aspergillus niger have been shown to be powerful enzymes for biomass conversion to sugars, but the production costs are still relatively high for commercial application. The choice of the microbial cultivation process employed for enzyme production is important, since it may affect titers and the profile of protein secretion. We used proteomic analysis to characterize the secretomes of T. reesei and A. niger cultivated in submerged and sequential fermentation processes. The information gained was key to understanding differences in the hydrolysis of steam-exploded sugarcane bagasse by enzyme cocktails obtained from the two cultivation processes. The sequential process for cultivating A. niger gave xylanase and β-glucosidase activities 3- and 8-fold higher, respectively, than the corresponding activities from the submerged process. A greater diversity of critical cellulolytic and hemicellulolytic enzymes was also observed through the secretome analyses. These results helped to explain the 3-fold higher yield for hydrolysis of non-washed pretreated bagasse when combined T. reesei and A. niger enzyme extracts from sequential fermentation were used in place of enzymes obtained from submerged fermentation. An enzyme loading of 0.7 FPU cellulase activity/g glucan was surprisingly effective compared with the 5- to 15-fold higher enzyme loadings commonly reported in other cellulose hydrolysis studies. Analyses showed that more than 80% of the secreted protein consisted of proteins other than cellulases, whose role is nonetheless important in the hydrolysis of a lignocellulosic substrate. Our work combined proteomic analyses and enzymology studies to show that sequential and submerged cultivation methods differently influence both the titers and the secretion profile of key enzymes required for the hydrolysis of sugarcane bagasse. The higher diversity of feruloyl esterases, xylanases and other auxiliary hemicellulolytic enzymes observed in the enzyme mixtures from the sequential fermentation could be one major reason for the more efficient hydrolysis observed when using the combined secretomes of A. niger and T. reesei. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Solid sorbent air sampler

    NASA Technical Reports Server (NTRS)

    Galen, T. J. (Inventor)

    1986-01-01

    A fluid sampler for collecting a plurality of discrete samples over separate time intervals is described. The sampler comprises a sample assembly having an inlet and a plurality of discrete sample tubes, each of which has inlet and outlet sides. A multiport, dual-acting valve is provided in the sampler in order to sequentially pass air from the sample inlet into the selected sample tubes. The sample tubes extend longitudinally of the housing and are located about its outer periphery so that, upon removal of an enclosure cover, they are readily accessible for operation of the sampler in an analysis mode.

  17. Isotopic analysis of N and O in nitrite and nitrate by sequential selective bacterial reduction to N2O

    USGS Publications Warehouse

    Böhlke, J.K.; Smith, R.L.; Hannon, J.E.

    2007-01-01

    Nitrite is an important intermediate species in the biogeochemical cycling of nitrogen, but its role in natural aquatic systems is poorly understood. Isotopic data can be used to study the sources and transformations of NO2- in the environment, but methods for independent isotopic analyses of NO2- in the presence of other N species are still new and evolving. This study demonstrates that isotopic analyses of N and O in NO2- can be done by treating whole freshwater or saltwater samples with the denitrifying bacterium Stenotrophomonas nitritireducens, which selectively reduces NO2- to N2O for isotope ratio mass spectrometry. When calibrated with solutions containing NO2- with known isotopic compositions determined independently, reproducible δ15N and δ18O values were obtained at both natural-abundance levels (±0.2–0.5‰ for δ15N and ±0.4–1.0‰ for δ18O) and moderately enriched 15N tracer levels (±20–50‰ for δ15N near 5000‰) for 5–20 nmol of NO2- (1–20 μmol/L in 1–5 mL aliquots). This method is highly selective for NO2- and was used for mixed samples containing both NO2- and NO3- with little or no measurable cross-contamination. In addition, mixed samples that were analyzed with S. nitritireducens were treated subsequently with Pseudomonas aureofaciens to reduce the NO3- in the absence of NO2-, providing isotopic analyses of NO2- and NO3- separately in the same aliquot. Sequential bacterial reduction methods like this one should be useful for a variety of isotopic studies aimed at understanding nitrogen cycling in aquatic environments. A test of these methods in an agricultural watershed in Indiana provides isotopic evidence for both nitrification and denitrification as sources of NO2- in a small stream.
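
The δ15N and δ18O values reported here are per-mil deviations of a sample's isotope ratio from a standard, δ = (R_sample/R_standard − 1) × 1000. A minimal worked example follows; the sample ratio is invented for illustration, while the AIR standard ratio is the conventionally accepted value.

```python
# Delta notation: delta = (R_sample / R_standard - 1) * 1000, in permil.
R_STD_N = 0.0036765              # 15N/14N of atmospheric N2 (AIR standard)

def delta_permil(r_sample, r_standard):
    """Per-mil deviation of a sample isotope ratio from a standard."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Illustrative sample ratio, slightly enriched in 15N relative to AIR.
d15n = delta_permil(0.0036802, R_STD_N)   # roughly +1 permil
```

The quoted reproducibilities (e.g. ±0.2–0.5‰ for δ15N) are uncertainties on exactly this quantity.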

  18. A Simple Automated Method for the Determination of Nitrate and Nitrite in Infant Formula and Milk Powder Using Sequential Injection Analysis

    PubMed Central

    Pistón, Mariela; Mollo, Alicia; Knochen, Moisés

    2011-01-01

    A fast and efficient automated method using a sequential injection analysis (SIA) system, based on the Griess reaction, was developed for the determination of nitrate and nitrite in infant formulas and milk powder. The system mixes a measured amount of sample (previously reconstituted in liquid form and deproteinized) with the chromogenic reagent to produce a colored substance whose absorbance is recorded. For nitrate determination, an on-line prereduction step was added by passing the sample through a Cd minicolumn. The system was controlled from a PC by means of a user-friendly program. Figures of merit include linearity (r2 > 0.999 for both analytes), limits of detection (0.32 mg kg−1 NO3-N and 0.05 mg kg−1 NO2-N), and precision (sr%) of 0.8–3.0. Results were statistically in good agreement with those obtained with the reference ISO-IDF method. The sampling frequency was 30 hour−1 (nitrate) and 80 hour−1 (nitrite) when the analytes were determined separately. PMID:21960750

  19. Split Flow Online Solid-Phase Extraction Coupled with Inductively Coupled Plasma Mass Spectrometry System for One-Shot Data Acquisition of Quantification and Recovery Efficiency.

    PubMed

    Furukawa, Makoto; Takagai, Yoshitaka

    2016-10-04

    Online solid-phase extraction (SPE) coupled with inductively coupled plasma mass spectrometry (ICPMS) is a useful tool for automatic sequential analysis. However, it cannot simultaneously quantify the analytical targets and their recovery percentages (R%) in one-shot samples. We propose a system that acquires both data in a single sample injection. The main flow line of the online solid-phase extraction is divided into main and split flows. The split flow line (i.e., a bypass line), which circumvents the SPE column, was placed on the main flow line. Under program-controlled switching of the automatic valve, the ICPMS sequentially measures the targets in a sample before and after column preconcentration and determines the target concentrations and the R% on the SPE column. This paper describes the system development and two demonstrations of its analytical significance: determination of ultratrace amounts of radioactive strontium (90Sr) using a commercial Sr-trap resin, and assessment of multielement adsorbability on the SPE column. This system is applicable to other flow analyses and detectors in online solid-phase extraction.

  20. Suppression of surface microstructure evolution in W and W-Ta alloys during simultaneous and sequential He and D ion irradiation in fusion relevant conditions

    NASA Astrophysics Data System (ADS)

    Gonderman, S.; Tripathi, J. K.; Sizyuk, T.; Hassanein, A.

    2017-08-01

    Tungsten (W) has been selected as the divertor material in ITER based on its promising thermal and mechanical properties. Despite these advantages, continued investigation has revealed that W undergoes extreme surface morphology evolution in response to relevant fusion operating conditions. These complications spur the need for further exploration of W and other innovative plasma facing components (PFCs) for future fusion devices. Recent literature has shown that alloying W with other refractory metals, such as tantalum (Ta), results in the enhancement of key PFC properties including, but not limited to, ductility, hydrogen isotope retention, and helium ion (He+) radiation tolerance. In the present study, pure W and W-Ta alloys are exposed to simultaneous and sequential low-energy He+ and deuterium (D+) ion beam irradiations at high (1223 K) and low (523 K) temperatures. The goal of this study is to cultivate a complete understanding of the synergistic effects induced by dual and sequential ion irradiation on W and W-Ta alloy surface morphology evolution. For the dual ion beam experiments, W and W-Ta samples were subjected to four different He+:D+ ion ratios (100% He+, 60% D+ + 40% He+, 90% D+ + 10% He+, and 100% D+) at a constant total He+ fluence of 6 × 10²⁴ ions m⁻². The W and W-Ta samples both exhibit the expected damaged surfaces under the 100% He+ irradiation, but as the ratio of D+/He+ ions increases there is a clear suppression of the surface morphology at high temperatures. This observation is supported by the sequential experiments, which show a similar suppression of surface morphology when W and W-Ta samples are first exposed to low-energy He+ irradiation and then to subsequent low-energy D+ irradiation at high temperatures.
Interestingly, this morphology suppression is not observed at low temperatures, implying that a temperature-dependent D-W interaction mechanism drives the suppression of microstructure evolution in both the pure W and the W-Ta alloys. A minor enhancement in the irradiation tolerance of the W-Ta samples is also observed.

  1. Arsenic mobility in soils impacted by tailings at Zimapán, México

    NASA Astrophysics Data System (ADS)

    Aurora Armienta, M.; Resendiz, Isabel; Múgica, Violeta; Cruz, Olivia; Aguayo, Alejandra; Ceniceros, Nora

    2014-05-01

    The Zimapán mining zone in Central México is one of the sites known worldwide for As contamination. For more than 20 years and until recently, As-rich groundwater, mainly due to mineralization in a limestone aquifer, was an important source of As exposure for the inhabitants. In addition, decades of ore processing have produced hazardous wastes (tailings), many of them deposited on the town outskirts. Although mineralogical and chemical differences exist among the various deposits, every one has high As contents (up to several thousands of mg/kg) and other toxic elements that may be released to the nearby soils. To assess As mobility in soils impacted by tailings, total and sequential fractionation determinations were performed on 120 superficial and 40 cm depth samples collected at various distances from three of the impoundments. Higher total As concentrations were measured in the dry season (up to 51,534 mg/kg) than in the rainy season (up to 23,570 mg/kg), indicating wash-off of As by rain. Although concentrations were lower in the deep samples than in the superficial ones at most sites, As contents reached several thousands of mg/kg at 40 cm depth, also indicating vertical transport that may reach the shallow aquifer. Sequential extractions showed differences between soils impacted by highly oxidized (red) tailings and by low-oxidized (gray) deposits. Most of the As occurs in the Fe-Mn oxides fraction (up to 92%), followed by the organic matter and sulfides fraction (up to 52%), in soils close to red tailings, while the organic matter and sulfides fraction contains most of the As (up to 95%) in soil samples close to low-oxidized deposits. The arsenic proportion in the residual fraction increased with distance from oxidized tailings. Low pH values (from 2.0 to 2.5) in superficial soils revealed the influence of acid mine drainage at distances up to 40 m from the red deposit.
In contrast, the lowest pH was 7.1 in soils impacted by low-oxidized deposits, reflecting the limestone environment. Airborne transport of As was evidenced by a total As concentration of 30,780 mg/kg in soils collected 120 m in front of the tailings, across a ravine. Although sequential extraction showed that most of the As is present in relatively low-mobility fractions, total As concentrations indicate that tailings impoundments constitute another source of environmental As exposure.

  2. Work–Family Conflict and Mental Health Among Female Employees: A Sequential Mediation Model via Negative Affect and Perceived Stress

    PubMed Central

    Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao

    2018-01-01

    After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies, however, have examined the mediating role of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress. PMID:29719522
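The bootstrap confidence-interval test for a sequential (serial) mediation effect of this kind can be sketched as follows. This is a minimal illustration on synthetic data with hypothetical path strengths, not the authors' analysis: the indirect effect is the product of the three path coefficients, and the 95% percentile bootstrap interval is checked against zero.

```python
import numpy as np

def ols_coef(y, X):
    """First regression coefficient of y on X (intercept added internally)."""
    Z = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta[1]

def serial_indirect(x, m1, m2, y):
    """Product-of-paths indirect effect X -> M1 -> M2 -> Y."""
    a = ols_coef(m1, x[:, None])                    # X -> M1
    b = ols_coef(m2, np.column_stack([m1, x]))      # M1 -> M2, controlling X
    c = ols_coef(y, np.column_stack([m2, m1, x]))   # M2 -> Y, controlling M1, X
    return a * b * c

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                    # stand-in for work-family conflict
m1 = 0.6 * x + rng.normal(size=n)         # stand-in for negative affect
m2 = 0.6 * m1 + rng.normal(size=n)        # stand-in for perceived stress
y = -0.6 * m2 + rng.normal(size=n)        # stand-in for mental health

point = serial_indirect(x, m1, m2, y)
boot = [serial_indirect(x[idx], m1[idx], m2[idx], y[idx])
        for idx in (rng.integers(0, n, n) for _ in range(1000))]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect {point:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```

Because the whole interval lies below zero here, the (synthetic) sequential indirect effect would be judged significant, mirroring the inference rule used in the study.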

  3. Simultaneous vs sequential bilateral cataract surgery for infants with congenital cataracts: Visual outcomes, adverse events, and economic costs.

    PubMed

    Dave, Hreem; Phoenix, Vidya; Becker, Edmund R; Lambert, Scott R

    2010-08-01

    To compare the incidence of adverse events, visual outcomes, and economic costs of sequential vs simultaneous bilateral cataract surgery for infants with congenital cataracts. Retrospective review of simultaneous vs sequential bilateral cataract surgery in infants with congenital cataracts who underwent surgery at age 6 months or younger at our institution. Records were available for 10 children who underwent sequential surgery at a mean age of 49 days for the first eye and 17 children who underwent simultaneous surgery at a mean age of 68 days (P = .25). We found a similar incidence of adverse events between the 2 treatment groups. Intraoperative or postoperative complications occurred in 14 eyes. The most common postoperative complication was glaucoma. No eyes developed endophthalmitis. The mean (SD) absolute interocular difference in logMAR visual acuities between the 2 treatment groups was 0.47 (0.76) for the sequential group and 0.44 (0.40) for the simultaneous group (P = .92). Payments for the hospital, drugs, supplies, and professional services were on average 21.9% lower per patient in the simultaneous group. Simultaneous bilateral cataract surgery for infants with congenital cataracts is associated with a 21.9% reduction in medical payments and no discernible difference in the incidence of adverse events or visual outcomes. However, our small sample size limits our ability to make meaningful comparisons of the relative risks and visual benefits of the 2 procedures.

  5. PC_Eyewitness and the sequential superiority effect: computer-based lineup administration.

    PubMed

    MacLin, Otto H; Zimmerman, Laura A; Malpass, Roy S

    2005-06-01

    Computer technology has become an increasingly important tool for conducting eyewitness identifications. In the area of lineup identifications, computerized administration offers several advantages for researchers and law enforcement. PC_Eyewitness is designed specifically to administer lineups. To assess this new lineup technology, two studies were conducted in order to replicate the results of previous studies comparing simultaneous and sequential lineups. One hundred twenty university students participated in each experiment. Experiment 1 used traditional paper-and-pencil lineup administration methods to compare simultaneous to sequential lineups. Experiment 2 used PC_Eyewitness to administer simultaneous and sequential lineups. The results of these studies were compared to the meta-analytic results reported by N. Steblay, J. Dysart, S. Fulero, and R. C. L. Lindsay (2001). No differences were found between paper-and-pencil and PC_Eyewitness lineup administration methods. The core findings of the N. Steblay et al. (2001) meta-analysis were replicated by both administration procedures. These results show that computerized lineup administration using PC_Eyewitness is an effective means for gathering eyewitness identification data.

  6. Sequential mediating effects of provided and received social support on trait emotional intelligence and subjective happiness: A longitudinal examination in Hong Kong Chinese university students.

    PubMed

    Ye, Jiawen; Yeung, Dannii Y; Liu, Elaine S C; Rochelle, Tina L

    2018-04-03

    Past research has often focused on the effects of emotional intelligence and received social support on subjective well-being yet paid limited attention to the effects of provided social support. This study adopted a longitudinal design to examine the sequential mediating effects of provided and received social support on the relationship between trait emotional intelligence and subjective happiness. A total of 214 Hong Kong Chinese undergraduates were asked to complete two assessments with a 6-month interval in between. The results of the sequential mediation analysis indicated that trait emotional intelligence measured at Time 1 indirectly influenced the level of subjective happiness at Time 2 through a sequential pathway of social support provided to others at Time 1 and social support received from others at Time 2. These findings highlight the importance of trait emotional intelligence and the reciprocal exchanges of social support in the subjective well-being of university students. © 2018 International Union of Psychological Science.

  7. Resistant Hypertension On Treatment (ResHypOT): sequential nephron blockade compared to dual blockade of the renin-angiotensin-aldosterone system plus bisoprolol in the treatment of resistant arterial hypertension - study protocol for a randomized controlled trial.

    PubMed

    Cestário, Elizabeth do Espirito Santo; Fernandes, Letícia Aparecida Barufi; Giollo-Júnior, Luiz Tadeu; Uyemura, Jéssica Rodrigues Roma; Matarucco, Camila Suemi Sato; Landim, Manoel Idelfonso Paz; Cosenso-Martin, Luciana Neves; Tácito, Lúcia Helena Bonalume; Moreno, Heitor; Vilela-Martin, José Fernando; Yugar-Toledo, Juan Carlos

    2018-02-12

    Resistant hypertension is characterized when the blood pressure (BP) remains above the recommended goal after taking three antihypertensive drugs with synergistic actions at their maximum recommended tolerated doses, preferably including a diuretic. Identifying the contribution of intravascular volume and serum renin in maintaining BP levels could help tailor more effective hypertension treatment, whether acting on the control of intravascular volume or sodium balance, or acting on the effects of the renin-angiotensin-aldosterone system (RAAS) on the kidney. This randomized, open-label clinical trial is designed to compare sequential nephron blockade and its contribution to the intravascular volume component with dual blockade of the RAAS plus bisoprolol and the importance of serum renin in maintaining BP levels. The trial has two arms: sequential nephron blockade versus dual blockade of the RAAS (with an angiotensin-converting enzyme (ACE) inhibitor plus a beta-blocker), both added on to a thiazide diuretic, a calcium-channel blocker, and an angiotensin receptor-1 blocker (ARB). Sequential nephron blockade consists of a progressive increase in sodium depletion using a thiazide diuretic, an aldosterone-receptor blocker, furosemide, and, finally, amiloride. The dual blockade of the RAAS, on the other hand, consists of the progressive addition of an ACE inhibitor up to the maximum dose and then the administration of a beta-blocker up to the maximum dose. The primary outcomes will be reductions in the systolic BP, diastolic BP, mean BP, and pulse pressure (PP) after 20 weeks of treatment. The secondary outcomes will evaluate treatment safety and tolerability, biochemical changes, renal function, and recognition of hypotension (ambulatory BP monitoring (ABPM)). The sample size was calculated assuming an alpha error of 5% to reject the null hypothesis with a statistical power of 80%, giving a total of 40 individuals per group. 
In recent years, the cost of resistant hypertension (RH) treatment has increased. Thus, identifying the contribution of intravascular volume and serum renin in maintaining BP levels could help tailor more effective hypertension treatment, whether by acting on the control of intravascular volume or sodium balance, or by acting on the effects of the RAAS on the kidney. Sequential Nephron Blockade vs. Dual Blockade Renin-angiotensin System + Bisoprolol in Resistant Arterial Hypertension (ResHypOT). ClinicalTrials.gov, ID: NCT02832973 . Registered on 14 July 2016. First received: 12 June 2016. Last updated: 18 July 2016.

  8. Interim results of quality-control sampling of surface water for the Upper Colorado River National Water-Quality Assessment Study Unit, water years 1995-96

    USGS Publications Warehouse

    Spahr, N.E.; Boulger, R.W.

    1997-01-01

    Quality-control samples provide part of the information needed to estimate the bias and variability that result from sample collection, processing, and analysis. Quality-control samples of surface water collected for the Upper Colorado River National Water-Quality Assessment study unit for water years 1995-96 are presented and analyzed in this report. The types of quality-control samples collected include pre-processing split replicates, concurrent replicates, sequential replicates, post-processing split replicates, and field blanks. Analysis of the pre-processing split replicates, concurrent replicates, sequential replicates, and post-processing split replicates is based on differences between analytical results of the environmental samples and analytical results of the quality-control samples. Results of these comparisons indicate that variability introduced by sample collection, processing, and handling is low and will not affect interpretation of the environmental data. The differences for most water-quality constituents are on the order of plus or minus 1 or 2 lowest rounding units. A lowest rounding unit is equivalent to the magnitude of the least significant figure reported for analytical results. The use of lowest rounding units avoids some of the difficulty in comparing differences between pairs of samples when concentrations span orders of magnitude and provides a measure of the practical significance of the effect of variability. Analysis of field-blank quality-control samples indicates that, with the exception of chloride and silica, no systematic contamination of samples is apparent. Chloride contamination probably was the result of incomplete rinsing of the dilute cleaning solution from the outlet ports of the decaport sample splitter. Silica contamination seems to have been introduced by the blank water. Sampling and processing procedures for water year 1997 have been modified as a result of these analyses.
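The lowest-rounding-unit comparison described above can be sketched in a few lines. The helper below is hypothetical (not USGS code); it takes reported values as strings so that the least significant reported digit, and hence the lowest rounding unit, is preserved.

```python
from decimal import Decimal

def diff_in_lrus(env: str, qc: str) -> float:
    """Difference between an environmental value and its QC replicate,
    expressed in lowest rounding units (LRUs). The LRU is the magnitude
    of the least significant digit as reported, so values are passed as
    strings rather than floats."""
    e, q = Decimal(env), Decimal(qc)
    # Use the finer of the two reported resolutions as the LRU.
    lru = Decimal(1).scaleb(min(e.as_tuple().exponent, q.as_tuple().exponent))
    return float((e - q) / lru)

# A pair agreeing within plus or minus 2 LRUs, as in the report's criterion:
print(diff_in_lrus("0.14", "0.12"))  # 2.0
```

Working in LRUs makes a 0.02 mg/L disagreement at a 0.01 mg/L reporting level directly comparable to a 2 mg/L disagreement at a 1 mg/L reporting level, which is the point of the unit.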

  9. Automation of a flocculation test for syphilis on Groupamatic equipment.

    PubMed Central

    Garretta, M; Paris-Hamelin, A; Gener, J; Muller, A; Matte, C; Vaisman, A

    1975-01-01

    A flocculation reaction employing a cardiolipid antigen was used for syphilis screening on Groupamatic equipment in parallel with conventional screening reactions: Kolmer CF, RPCF, Kahn, Kline, and RPR. The positive samples were confirmed by FTA-200, FTA-ABS, TPI, and in some cases by TPHA. There were 5,212 known samples which had already been tested by all methods and of which 1,648 were positive, and 58,636 screened samples including 65 positives. Half of the samples in the first series were taken without anticoagulant; the remainder were collected in potassium EDTA. The percentage of false positives with the Groupamatic was about 1.4 per cent. The percentage of false negatives among positive (greater than or equal to +) samples varied from 0.18 to 1.3 per cent; on the other hand, the sensitivity was lower for samples giving doubtful and/or dissociated reactions in conventional screening reactions. The specificity and sensitivity of this technique are acceptable for a blood transfusion centre. The reproducibility is excellent and the automatic reading of results accurate. Additional advantages are rapidity (340 samples processed per hour); simultaneous performance of eleven other immunohaematological reactions; no contamination between samples; automatic reading, interpretation, and print-out of results; and saving of time because samples are not filed sequentially and are automatically identified when the results are obtained. Although the importance of syphilis in blood transfusion seems small, estimates of the risk are difficult and further investigations are planned. PMID:1098731

  10. Collection methods and quality assessment for Escherichia coli, water quality, and microbial source tracking data within Tumacácori National Historical Park and the upper Santa Cruz River, Arizona, 2015-16

    USGS Publications Warehouse

    Paretti, Nicholas; Coes, Alissa L.; Kephart, Christopher M.; Mayo, Justine

    2018-03-05

    Tumacácori National Historical Park protects the culturally important Mission, San José de Tumacácori, while also managing a portion of the ecologically diverse riparian corridor of the Santa Cruz River. This report describes the methods and quality assurance procedures used in the collection of water samples for the analysis of Escherichia coli (E. coli), microbial source tracking markers, suspended sediment, water-quality parameters, and turbidity, and the data collection for discharge and stage; the process for data review and approval is also described. Finally, this report provides a quantitative assessment of the quality of the E. coli, microbial source tracking, and suspended sediment data. The data-quality assessment revealed that bias attributed to field and laboratory contamination was minimal, with E. coli detections in only 3 out of 33 field blank samples analyzed. Concentrations in the field blanks were several orders of magnitude lower than environmental concentrations. The microbial source tracking (MST) field blank was below the detection limit for all MST markers analyzed. Laboratory blanks for E. coli at the USGS Arizona Water Science Center and laboratory blanks for MST markers at the USGS Ohio Water Microbiology Laboratory were all below the detection limit. Replicate data for E. coli and suspended sediment indicated that bias was not introduced to the data by combining samples collected using discrete sampling methods with samples collected using automatic sampling methods. The split and sequential E. coli replicate data showed consistent analytical variability, and a single equation was developed to explain the variability of E. coli concentrations. An additional analysis of analytical variability for E. coli indicated about 18 percent relative standard deviation, and no trend was observed in the concentration during the processing and analysis of multiple split-replicates. 
Two replicate samples were collected for MST, and individual markers were compared for a base flow and a flood sample. For the markers found in common between the two types of samples, the relative standard deviation for the base flow sample was more than 3 times greater than for the flood sample. Sequential suspended sediment replicates had a relative standard deviation of about 1.3 percent, indicating that environmental and analytical variability was minimal. A holding time review and laboratory study analysis supported the extended holding times required for this investigation. Most concentrations for flood and base-flow samples were within the theoretical variability specified in the most probable number approach, suggesting that extended holding times did not overly influence the final concentrations reported.

  11. Proline catalyzed sequential α-aminooxylation or -amination/reductive cyclization of o-nitrohydrocinnamaldehydes: a high yield synthesis of chiral 3-substituted tetrahydroquinolines.

    PubMed

    Rawat, Varun; Kumar, B Senthil; Sudalai, Arumugam

    2013-06-14

    A new sequential organocatalytic method for the synthesis of chiral 3-substituted (X = OH, NH2) tetrahydroquinoline derivatives (THQs) [ee up to 99%, yield up to 87%] based on α-aminooxylation or -amination followed by reductive cyclization of o-nitrohydrocinnamaldehydes has been described. This methodology has been efficiently demonstrated in the synthesis of two important bioactive molecules namely (-)-sumanirole (96% ee) and 1-[(S)-3-(dimethylamino)-3,4-dihydro-6,7-dimethoxy-quinolin-1(2H)-yl]propanone (92% ee).

  12. Passive Baited Sequential Fly Trap

    USDA-ARS?s Scientific Manuscript database

    Sampling fly populations associated with human populations is needed to understand diel behavior and to monitor population densities before and after control operations. Population control measures are dependent on the results of monitoring efforts as they may provide insight into the fly behavior ...

  13. Sequential recovery of macromolecular components of the nucleolus.

    PubMed

    Bai, Baoyan; Laiho, Marikki

    2015-01-01

    The nucleolus is involved in a number of cellular processes of importance to cell physiology and pathology, including cell stress responses and malignancies. Studies of macromolecular composition of the nucleolus depend critically on the efficient extraction and accurate quantification of all macromolecular components (e.g., DNA, RNA, and protein). We have developed a TRIzol-based method that efficiently and simultaneously isolates these three macromolecular constituents from the same sample of purified nucleoli. The recovered and solubilized protein can be accurately quantified by the bicinchoninic acid assay and assessed by polyacrylamide gel electrophoresis or by mass spectrometry. We have successfully applied this approach to extract and quantify the responses of all three macromolecular components in nucleoli after drug treatments of HeLa cells, and conducted RNA-Seq analysis of the nucleolar RNA.

  14. Three methods to monitor utilization of healthcare services by the poor

    PubMed Central

    Bhuiya, Abbas; Hanifi, SMA; Urni, Farhana; Mahmood, Shehrin Shaila

    2009-01-01

    Background: Achieving equity by improving the condition of the economically poor or otherwise disadvantaged is among the core goals of the contemporary development paradigm. This places importance on monitoring outcome indicators among the poor. National surveys allow disaggregation of outcomes by socioeconomic status at the national level but lack the statistical adequacy to provide estimates for lower-level administrative units. This limits the utility of these data for programme managers who need to know how well particular services are reaching the poor at the lowest level. Managers are thus left without a tool for monitoring results for the poor at lower levels. This paper demonstrates that, with some extra effort, community- and facility-based data at the lower level can be used to monitor utilization of healthcare services by the poor. Methods: Data used in this paper came from two sources: the Chakaria Health and Demographic Surveillance System (HDSS) of ICDDR,B, and a special study conducted during 2006 among patients attending the public and private health facilities in Chakaria, Bangladesh. The outcome variables included use of skilled attendants for delivery and use of facilities. Rate ratio, rate difference, concentration index, benefit incidence ratio, sequential sampling, and Lot Quality Assurance Sampling (LQAS) were used to assess how pro-poor the use of skilled attendants for delivery and of healthcare facilities is. Findings: The poor use skilled attendants for delivery far less than the better off. Government health service facilities are used more than private facilities by the poor. Benefit incidence analysis and sequential sampling techniques could assess the situation realistically and can be used for monitoring utilization of services by the poor. The visual display of the findings makes both these methods attractive. LQAS, on the other hand, requires a small fixed sample and always enables decision making. 
Conclusion: With some extra effort, monitoring of the utilization of healthcare services by the poor at the facilities can be done reliably. If monitored, the findings can guide programme and facility managers to act in a timely fashion to improve the effectiveness of the programme in reaching the poor. PMID:19650938

  15. Sequential Voluntary Cough and Aspiration or Aspiration Risk in Parkinson’s Disease

    PubMed Central

    Hegland, Karen Wheeler; Okun, Michael S.; Troche, Michelle S.

    2015-01-01

    Background: Disordered swallowing, or dysphagia, is almost always present to some degree in people with Parkinson’s disease (PD), either causing aspiration or greatly increasing the risk for aspiration during swallowing. This likely contributes to aspiration pneumonia, a leading cause of death in this patient population. Effective airway protection is dependent upon multiple behaviors, including cough and swallowing. Single voluntary cough function is disordered in people with PD and dysphagia. However, the appropriate response to aspirate material is more than one cough, that is, sequential cough. The goal of this study was to examine voluntary sequential coughing in people with PD, with and without dysphagia. Methods: Forty adults diagnosed with idiopathic PD produced two trials of sequential voluntary cough. The cough airflows were obtained using a pneumotachograph and facemask and subsequently digitized and recorded. All participants received a modified barium swallow study as part of their clinical care, and the worst penetration–aspiration score observed was used to determine whether the patient had dysphagia. Results: There were significant differences in the compression phase duration, peak expiratory flow rates, and amount of air expired of the sequential coughs produced by participants with and without dysphagia. Conclusions: The presence of dysphagia in people with PD is associated with disordered cough function. Sequential cough, which is important in removing aspirate material from large- and smaller-diameter airways, is also impaired in people with PD and dysphagia compared with those without dysphagia. There may be common neuroanatomical substrates for cough and swallowing impairment in PD leading to the co-occurrence of these dysfunctions. PMID:24792231

  16. Research on parallel algorithm for sequential pattern mining

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences, ordered by time or another criterion, from a sequence database. Its initial motivation was to discover the laws of customer purchasing within a time window by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases, extending to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data for sequential pattern mining are characterized by massive volume and distributed storage, and most existing sequential pattern mining algorithms have not addressed these characteristics together. In view of these traits, and drawing on parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and uses a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequency concept and search-space partition theory, and the second is to build frequent sequences using depth-first search at each processor. The algorithm needs to access the database only twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on a random data generation procedure and different information structures, this paper simulated the SPP algorithm in a concrete parallel environment and implemented the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has excellent speedup and efficiency.
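As a minimal serial illustration of the frequent-sequence notion the paper builds on (this is not the SPP parallel algorithm, and the tiny database is made up), support counting over order-preserving subsequences can be sketched as:

```python
from itertools import combinations

def frequent_sequences(db, min_support, max_len=3):
    """Count, once per sequence, each order-preserving subsequence
    (gaps allowed) of up to max_len items; keep those meeting min_support."""
    counts = {}
    for seq in db:
        subs = set()                     # dedupe so support is per-sequence
        for k in range(1, max_len + 1):
            subs.update(combinations(seq, k))  # combinations preserve order
        for sub in subs:
            counts[sub] = counts.get(sub, 0) + 1
    return {pat: c for pat, c in counts.items() if c >= min_support}

db = [("a", "b", "c"), ("a", "c"), ("a", "b", "c", "d")]   # toy database
freq = frequent_sequences(db, min_support=2)
print(freq[("a", "c")])  # 3 -- ("a", "c") occurs, in order, in every sequence
```

A real miner such as SPP avoids exactly this exhaustive enumeration (here exponential in sequence length) by pattern reduction and search-space partitioning across processors.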

  17. [Sequential monitoring of renal transplant with aspiration cytology].

    PubMed

    Manfro, R C; Gonçalves, L F; de Moura, L A

    1998-01-01

    To evaluate the utility of kidney aspiration cytology in the sequential monitoring of acute rejection in renal transplant patients. Thirty patients underwent 376 aspirations. The clinical diagnoses were established independently. The representativity of the samples reached 82.7%. The total corrected increment index and the number of immunoactivated cells were higher during acute rejection than during normal allograft function, acute tubular necrosis, or cyclosporine nephrotoxicity. The parameters for the diagnosis of acute rejection were sensitivity: 71.8%, specificity: 87.3%, positive predictive value: 50.9%, negative predictive value: 94.9%, and accuracy: 84.9%. The false-positive results were mainly related to cytomegalovirus infection or to the administration of OKT3. In 10 out of 11 false-negative results, incipient immunoactivation was present, alerting to the possibility of acute rejection. Kidney aspiration cytology is a useful tool for the sequential monitoring of acute rejection in renal transplant patients. The best results are reached when the results of aspiration cytology are analyzed together with the clinical data.
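As a small consistency check (the Bayes'-rule relations and the prevalence back-calculated from the reported accuracy are our own working, not code or derivations from the paper), the reported predictive values follow closely from the reported sensitivity and specificity:

```python
# PPV = sens*p / (sens*p + (1-spec)*(1-p))
# NPV = spec*(1-p) / (spec*(1-p) + (1-sens)*p)
# where p is the rejection prevalence implied by the reported accuracy,
# since accuracy = sens*p + spec*(1-p).
sens, spec, acc = 0.718, 0.873, 0.849
p = (spec - acc) / (spec - sens)                      # implied prevalence
ppv = sens * p / (sens * p + (1 - spec) * (1 - p))
npv = spec * (1 - p) / (spec * (1 - p) + (1 - sens) * p)
print(f"p={p:.3f} PPV={ppv:.3f} NPV={npv:.3f}")  # p=0.155 PPV=0.509 NPV=0.944
```

The computed PPV matches the reported 50.9% and the NPV lands within rounding of the reported 94.9%, which is what one expects if all five indices came from the same 2x2 table.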

  18. Speckle pattern sequential extraction metric for estimating the focus spot size on a remote diffuse target.

    PubMed

    Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing

    2017-11-10

    The speckle pattern (line-by-line) sequential extraction (SPSE) metric is proposed based on one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and show that the aperture size of the observation system affects the metric's performance. The results of the analyses are verified by experiment. The method applies to the detection of relatively static targets (speckle jitter frequency lower than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance and, under some conditions, estimate the spot size itself. Monitoring and feedback of the far-field spot can therefore be implemented in laser focusing system applications and help the system optimize its focusing performance.
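A minimal sketch of the one-dimensional level-crossing count that underlies the SPSE idea (illustrative synthetic "speckle" lines, not the authors' simulation): a record with a shorter correlation length crosses a fixed intensity threshold more often, which is what ties crossing statistics to speckle, and hence spot, size.

```python
import numpy as np

def level_crossings(line, threshold):
    """Number of times a 1-D intensity record crosses `threshold`."""
    above = line > threshold
    return int(np.count_nonzero(above[1:] != above[:-1]))

rng = np.random.default_rng(1)
# Piecewise-constant stand-ins for speckle lines of equal length (256 samples):
coarse = np.repeat(rng.random(32), 8)   # coarse speckle: long correlation length
fine = np.repeat(rng.random(128), 2)    # fine speckle: short correlation length
print(level_crossings(coarse, 0.5) < level_crossings(fine, 0.5))  # True
```

Since speckle grain size at the receiver scales inversely with the illuminated spot size, a growing crossing count along extracted lines signals a growing focus spot, which is the quantity the SPSE metric tracks.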

  19. Combined techniques for characterising pasta structure reveals how the gluten network slows enzymic digestion rate.

    PubMed

    Zou, Wei; Sissons, Mike; Gidley, Michael J; Gilbert, Robert G; Warren, Frederick J

    2015-12-01

    The aim of the present study is to characterise the influence of gluten structure on the kinetics of starch hydrolysis in pasta. Spaghetti and powdered pasta were prepared from three different cultivars of durum semolina, and starch was also purified from each cultivar. Digestion kinetic parameters were obtained through logarithm-of-slope analysis, allowing identification of sequential digestion steps. Purified starch and semolina were digested following a single first-order rate constant, while pasta and powdered pasta followed two sequential first-order rate constants. Rate coefficients were altered by pepsin hydrolysis. Confocal microscopy revealed that, following cooking, starch granules were completely swollen for starch, semolina and pasta powder samples. In pasta, they were completely swollen in the external regions, partially swollen in the intermediate region and almost intact in the pasta strand centre. Gluten entrapment accounts for sequential kinetic steps in starch digestion of pasta; the compact microstructure of pasta also reduces digestion rates. Copyright © 2015 Elsevier Ltd. All rights reserved.
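Logarithm-of-slope analysis of the kind used above can be sketched for the single first-order case (synthetic, noise-free data; the rate constant and plateau below are assumed values, not the study's): since C(t) = C∞(1 − e^(−kt)), we have ln(dC/dt) = ln(C∞k) − kt, so a straight-line fit to the log of the numerical slope recovers k, and a kink in that line reveals two sequential first-order steps.

```python
import numpy as np

k_true, c_inf = 0.05, 80.0              # assumed per-min rate, plateau (% digested)
t = np.arange(0.0, 120.0, 5.0)          # sampling times (min)
c = c_inf * (1 - np.exp(-k_true * t))   # noise-free first-order digestion curve

slopes = np.gradient(c, t)[1:-1]        # central-difference dC/dt, interior points
fit = np.polyfit(t[1:-1], np.log(slopes), 1)   # line in (t, ln slope) space
k_est = -fit[0]                         # negated slope of the fit is k
print(f"recovered k = {k_est:.3f} (true {k_true})")  # recovered k = 0.050 (true 0.05)
```

For pasta, where the paper reports two sequential rate constants, the log-slope plot would show two linear segments, and each segment would be fitted separately.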

  20. Partitioning of radionuclides and trace elements in phosphogypsum and its source materials based on sequential extraction methods.

    PubMed

    Santos, A J G; Mazzilli, B P; Fávaro, D I T; Silva, P S C

    2006-01-01

    Phosphogypsum is a waste produced by the phosphate fertilizer industry. Although phosphogypsum is mainly calcium sulphate dihydrate, it contains elevated levels of impurities, which originate from the source phosphate rock used in the phosphoric acid production. Among these impurities, radionuclides from 238U and 232Th decay series are of most concern due to their radiotoxicity. Other elements, such as rare earth elements (REE) and Ba are also enriched in the phosphogypsum. The bioavailability of radionuclides (226Ra, 210Pb and 232Th), rare earth elements and Ba to the surrounding aquatic system was evaluated by the application of sequential leaching of the phosphogypsum samples from the Brazilian phosphoric acid producers. The sequential extraction results show that most of the radium and lead are located in the "iron oxide" (non-CaSO4) fraction, and that only 13-18% of these radionuclides are distributed in the most labile fraction. Th, REE and Ba were found predominantly in the residual phase, which corresponds to a small fraction of the phosphate rock or monazite that did not react and to insoluble compounds such as sulphates, phosphates and silicates. It can be concluded that although all these elements are enriched in the phosphogypsum samples they are not associated with CaSO4 itself and therefore do not represent a threat to the surrounding aquatic environment.

  1. The efficacy of two electrodes radiofrequency technique: comparison study using a cadaveric interspinous ligament and temperature measurement using egg white.

    PubMed

    Lee, Chang-Hyung; Derby, Richard; Choi, Hyun-Seok; Lee, Sang-Heon; Kim, Se Hoon; Kang, Yoon Kyu

    2010-01-01

    One technique in radiofrequency neurotomies uses 2 electrodes that are simultaneously placed to lie parallel to one another. Comparing lesions in cadaveric interspinous ligament tissue and measuring the temperature change in egg white allows the lesion area to be measured quantitatively. Fresh cadaver spinal tissue and egg white were used. A series of samples were prepared with the electrodes placed 1 to 7 mm apart. Using radiofrequency, the needle electrodes were heated sequentially or simultaneously, and the extent of the escaped lesion area and its temperature were measured. Samples of cadaver interspinous ligament showed that sequential heating limits placement of the needle electrodes to at most 2 mm apart, versus up to 4 mm apart when heated simultaneously. In egg white, the temperature at the escaped lesion area decreased with distance. There was a significant difference in temperature at the escaped lesion area up to 6 mm apart, and the temperature was above 50 degrees Celsius up to 5 mm in the simultaneous lesion and 3 mm in the sequential lesion. The limitations of this study include cadaveric experimentation and use of interspinous ligament rather than the medial branch of the dorsal ramus, which is difficult to identify. Heating the 2 electrodes simultaneously appears to coagulate a wider area and potentially produce better results in less time.

  2. Particle Filter-Based Recursive Data Fusion With Sensor Indexing for Large Core Neutron Flux Estimation

    NASA Astrophysics Data System (ADS)

    Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol

    2017-06-01

    We introduce a sequential importance sampling particle filter (PF)-based multisensor multivariate nonlinear estimator for estimating the in-core neutron flux distribution of a pressurized heavy water reactor core. Many critical applications such as reactor protection and control rely upon neutron flux information, and thus its reliability is of utmost importance. The point kinetic model based on neutron transport conveniently explains the dynamics of a nuclear reactor. The neutron flux in a large, loosely coupled core is sensed by multiple sensors measuring point fluxes at various locations inside the reactor core. The flux values are coupled to each other through the diffusion equation, and this coupling provides redundancy in the information. It is shown that multiple independent data about the localized flux can be fused together to greatly enhance estimation accuracy. We also propose a sensor anomaly handling feature in the multisensor PF to maintain the estimation process even when a sensor is faulty or generates data anomalies.
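The sequential importance sampling and resampling loop at the core of such estimators can be illustrated with a minimal bootstrap particle filter. The scalar random-walk model below is a generic stand-in, not the reactor's point-kinetics model or sensor configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_filter(observations, n_particles=500, q=0.1, r=0.5):
    """Minimal sequential importance sampling/resampling (bootstrap)
    particle filter for a scalar random-walk state in Gaussian noise."""
    particles = rng.normal(0.0, 1.0, n_particles)       # draw from the prior
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for y in observations:
        # Importance sampling step: propagate through the transition prior
        particles = particles + rng.normal(0.0, q, n_particles)
        # Reweight by the observation likelihood
        weights *= np.exp(-0.5 * ((y - particles) / r) ** 2)
        weights /= weights.sum()
        estimates.append(np.sum(weights * particles))   # posterior mean
        # Resampling step: combat weight degeneracy
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

true_state = np.cumsum(rng.normal(0.0, 0.1, 100))  # hidden random walk
obs = true_state + rng.normal(0.0, 0.5, 100)       # noisy point measurements
est = sir_filter(obs)
```

A multisensor version would multiply the likelihoods of all (non-anomalous) sensors into the weight update, which is where the fusion and sensor-indexing ideas of the abstract enter.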

  3. Fast transient digitizer

    DOEpatents

    Villa, Francesco

    1982-01-01

    Method and apparatus for sequentially scanning a plurality of target elements with an electron scanning beam modulated in accordance with variations in a high-frequency analog signal to provide discrete analog signal samples representative of successive portions of the analog signal; coupling the discrete analog signal samples from each of the target elements to a different one of a plurality of high speed storage devices; converting the discrete analog signal samples to equivalent digital signals; and storing the digital signals in a digital memory unit for subsequent measurement or display.

  4. Organometallic exposure dependence on organic–inorganic hybrid material formation in polyethylene terephthalate and polyamide 6 polymer fibers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akyildiz, Halil I.; Jur, Jesse S., E-mail: jsjur@ncsu.edu

    2015-03-15

    The effect of exposure conditions and surface area on hybrid material formation during sequential vapor infiltrations of trimethylaluminum (TMA) into polyamide 6 (PA6) and polyethylene terephthalate (PET) fibers is investigated. Mass gain of the fabric samples after infiltration was examined to elucidate the reaction extent with increasing number of sequential TMA single exposures, each defined as the times for a TMA dose and a hold period. An interdependent relationship between dosing time and holding time on the hybrid material formation is observed for TMA exposure of PET, exhibited as a linear trend between the mass gain and total exposure (dose time × hold time × number of sequential exposures). Deviation from this linear relationship is only observed under very long dose or hold times. In comparison, the amount of hybrid material formed during sequential exposures to PA6 fibers is found to be highly dependent on the amount of TMA dosed. Increasing the surface area of the fiber by altering its cross-sectional dimension is shown to have little effect on the reaction behavior but does allow for improved diffusion of TMA into the fiber. This work allows for the projection of exposure parameters necessary for future high-throughput hybrid modifications to polymer materials.

  5. Automated Discovery and Modeling of Sequential Patterns Preceding Events of Interest

    NASA Technical Reports Server (NTRS)

    Rohloff, Kurt

    2010-01-01

    The integration of emerging data manipulation technologies has enabled a paradigm shift in practitioners' abilities to understand and anticipate events of interest in complex systems. Example events of interest include outbreaks of socio-political violence in nation-states. Rather than relying on human-centric modeling efforts that are limited by the availability of subject-matter experts (SMEs), automated data processing technologies have enabled the development of innovative automated complex system modeling and predictive analysis technologies. We introduce one such emerging modeling technology - the sequential pattern methodology. We have applied the sequential pattern methodology to automatically identify patterns of observed behavior that precede outbreaks of socio-political violence such as riots, rebellions and coups in nation-states. The sequential pattern methodology is a groundbreaking approach to automated complex system model discovery because it generates easily interpretable patterns, based on direct observations of sampled factor data, that offer a deeper understanding of societal behaviors while remaining tolerant of observation noise and missing data. The discovered patterns are simple to interpret and mimic human identification of observed trends in temporal data. Discovered patterns also provide an automated forecasting ability: we discuss an example of using discovered patterns coupled with a rich data environment to forecast various types of socio-political violence in nation-states.

  6. Comparison of Sequential Drug Release in Vitro and in Vivo

    PubMed Central

    Sundararaj, Sharath C.; Al-Sabbagh, Mohanad; Rabek, Cheryl L.; Dziubla, Thomas D.; Thomas, Mark V.; Puleo, David A.

    2015-01-01

    Development of drug delivery devices typically involves characterizing in vitro release performance with the inherent assumption that this will closely approximate in vivo performance. Yet, as delivery devices become more complex, for instance with a sequential drug release pattern, it is important to confirm that in vivo properties correlate with the expected “programming” achieved in vitro. In this work, a systematic comparison between in vitro and in vivo biomaterial erosion and sequential release was performed for a multilayered association polymer system comprising cellulose acetate phthalate and Pluronic F-127. After assessing the materials during incubation in phosphate-buffered saline, devices were implanted supracalvarially in rats. Devices with two different doses and with different erosion rates were harvested at increasing times post-implantation, and the in vivo thickness loss, mass loss, and the drug release profiles were compared with their in vitro counterparts. The sequential release of four different drugs observed in vitro was successfully translated to in vivo conditions. Results suggest, however, that the total erosion time of the devices was longer and release rates of the four drugs were different, with drugs initially released more quickly and then more slowly in vivo. Whereas many comparative studies of in vitro and in vivo drug release from biodegradable polymers involved a single drug, the present research demonstrated that sequential release of four drugs can be maintained following implantation. PMID:26111338

  7. Spatial-simultaneous and spatial-sequential working memory in individuals with Down syndrome: the effect of configuration.

    PubMed

    Carretti, Barbara; Lanfranchi, Silvia; Mammarella, Irene C

    2013-01-01

    Earlier research showed that visuospatial working memory (VSWM) is better preserved in Down syndrome (DS) than verbal WM. Some differences emerged, however, when VSWM performance was broken down into its various components, and more recent studies revealed that the spatial-simultaneous component of VSWM is more impaired than the spatial-sequential one. The difficulty of managing more than one item at a time is also evident when the information to be recalled is structured. To further analyze this issue, we investigated the advantage of material being structured in spatial-simultaneous and spatial-sequential tasks by comparing the performance of a group of individuals with DS and a group of typically-developing children matched for mental age. Both groups were presented with VSWM tasks in which both the presentation format (simultaneous vs. sequential) and the type of configuration (pattern vs. random) were manipulated. Findings indicated that individuals with DS took less advantage of the pattern configuration in the spatial-simultaneous task than TD children; in contrast, the two groups' performance did not differ in the pattern configuration of the spatial-sequential task. Taken together, these results confirmed difficulties relating to the spatial-simultaneous component of VSWM in individuals with DS, supporting the importance of distinguishing between different components within this system. The findings are discussed in terms of factors influencing this specific deficit. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. 40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM-2.5

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...

  9. 40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM-2.5

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...

  10. 40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM−2.5.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...

  11. 40 CFR 53.1 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... substantial deviations from the design specifications of the sampler specified for reference methods in... general requirements as an ISO 9001-registered facility for the design and manufacture of designated... capable of automatically collecting a series of sequential samples. NO means nitrogen oxide. NO 2 means...

  12. Sleep to the beat: A nap favours consolidation of timing.

    PubMed

    Verweij, Ilse M; Onuki, Yoshiyuki; Van Someren, Eus J W; Van der Werf, Ysbrand D

    2016-06-01

    Growing evidence suggests that sleep is important for procedural learning, but few studies have investigated the effect of sleep on the temporal aspects of motor skill learning. We assessed the effect of a 90-min day-time nap on learning a motor timing task, using 2 adaptations of a serial interception sequence learning (SISL) task. Forty-two right-handed participants performed the task before and after a 90-min period of sleep or wake. Electroencephalography (EEG) was recorded throughout. The motor task consisted of a sequential spatial pattern and was performed according to 2 different timing conditions, that is, either following a sequential or a random temporal pattern. The increase in accuracy was compared between groups using a mixed linear regression model. Within the sleep group, performance improvement was modeled based on sleep characteristics, including spindle- and slow-wave density. The sleep group, but not the wake group, showed improvement in the random temporal condition, and significantly more strongly in the sequential temporal condition. None of the sleep characteristics predicted improvement in either timing condition. In conclusion, a daytime nap improves performance on a timing task, and performance on the sequentially timed version of the task benefits more from sleep than general motor timing does. More importantly, the temporal sequence did not benefit initial learning, because differences arose only after an offline period and specifically when this period contained sleep. Sleep appears to aid in the extraction of regularities for optimal subsequent performance. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  13. Electrochemical immunoassay for vitellogenin based on sequential injection using antigen-immobilized magnetic microbeads.

    PubMed

    Hirakawa, Koji; Katayama, Masaaki; Soh, Nobuaki; Nakano, Koji; Imato, Toshihiko

    2006-01-01

    A rapid and sensitive immunoassay for the determination of vitellogenin (Vg) is described. The method involves a sequential injection analysis (SIA) system equipped with an amperometric detector and a neodymium magnet. Magnetic beads, onto which an antigen (Vg) was immobilized, were used as a solid support in an immunoassay. The introduction, trapping and release of magnetic beads in an immunoreaction cell were controlled by means of the neodymium magnet and by adjusting the flow of the carrier solution. The immunoassay was based on an indirect competitive immunoreaction of an alkaline phosphatase (ALP) labeled anti-Vg monoclonal antibody between the fraction of Vg immobilized on the magnetic beads and Vg in the sample solution. The immobilization of Vg on the beads involved coupling an amino group moiety of Vg with the magnetic beads after activation of a carboxylate moiety on the surface of magnetic beads that had been coated with a polylactate film. The Vg-immobilized magnetic beads were introduced and trapped in the immunoreaction cell equipped with the neodymium magnet; a Vg sample solution containing an ALP labeled anti-Vg antibody at a constant concentration and a p-aminophenyl phosphate (PAPP) solution were sequentially introduced into the immunoreaction cell. The product of the enzyme reaction of PAPP with ALP on the antibody, p-aminophenol, was transported to an amperometric detector, the applied voltage of which was set at +0.2 V vs. an Ag/AgCl reference electrode. A sigmoid calibration curve was obtained when the logarithm of the concentration of Vg was plotted against the peak current of the amperometric detector using various concentrations of standard Vg sample solutions (0-500 ppb). The time required for the analysis is less than 15 min.

  14. Environmental Risk Implications of Metals in Sludges from Waste Water Treatment Plants: The Discovery of Vast Stores of Metal-Containing Nanoparticles.

    PubMed

    Tou, Feiyun; Yang, Yi; Feng, Jingnan; Niu, Zuoshun; Pan, Hui; Qin, Yukun; Guo, Xingpan; Meng, Xiangzhou; Liu, Min; Hochella, Michael F

    2017-05-02

    Nanoparticle (NP) assessment in sludge materials, although of growing importance in eco- and biotoxicity studies, is commonly overlooked and, at best, understudied. In the present study, sewage sludge samples from across the mega-city of Shanghai, China were investigated for the first time using a sequential extraction method coupled with single particle inductively coupled plasma mass spectrometry (SP-ICP-MS) to quantify the abundance of metal-containing NPs in the extraction fractions and transmission electron microscopy to specifically identify the nanophases present. In general, most sludges observed showed high concentrations of Cr, Cu, Cd, Ni, Zn, and Pb, exceeding the maximum permitted values in the national application standard of acid soil in China. NPs in these sludges contribute little to the volume and mass but account for about half of the total particle number. Based on electron microscopy techniques, various NPs were further identified, including Ti-, Fe-, Zn-, Sn-, and Pb-containing NPs. All NPs, ignored by traditional metal risk evaluation methods, were observed at a concentration of 10^7-10^11 particles/g within the bioavailable fraction of metals. These results indicate that the environmental risks of metals are underestimated or misestimated when evaluated by traditional sequential extraction methods alone. A new approach for the environmental risk assessment of metals, including NPs, is urgently needed.

  15. Distribution and mode of occurrence of radionuclides in phosphogypsum derived from Aqaba and Eshidiya Fertilizer Industry, South Jordan

    USGS Publications Warehouse

    Al-Hwaiti, M. S.; Zielinski, R.A.; Bundham, J.R.; Ranville, J.F.; Ross, P.E.

    2010-01-01

    Phosphogypsum (PG) is a by-product of the chemical reaction called the "wet process", whereby sulphuric acid reacts with phosphate rock (PR) to produce phosphoric acid, needed for fertilizer production. Through the wet process, some impurities naturally present in the PR become incorporated in the PG; among these, U decay-series radionuclides are the main concern, as they could affect the surrounding environment and prevent safe utilization of the by-product. In order to determine the distribution and bioavailability of radionuclides to the surrounding environment, we applied sequential leaching to PG samples from the Aqaba and Eshidiya fertilizer industry. The results showed that the percentages of 226Ra and 210Pb in PG exceed those in the corresponding phosphate rocks (PG/PR), with 85% of the 226Ra and 85% of the 210Pb fractionating to the PG. The sequential extraction results showed that most of the 226Ra and 210Pb is bound in the residual (non-CaSO4) fraction, 45%-65% and 55%-75% respectively, whereas only 10%-15% and 10%-20% respectively of these radionuclides are distributed in the most labile fraction. The results obtained from this study showed that the radionuclides are not incorporated in the gypsum itself and may not pose a threat to the surrounding environment. © 2010 Science Press, Institute of Geochemistry, CAS and Springer Berlin Heidelberg.

  16. The Role of E-Mail Communications in Determining Response Rates and Mode of Participation in a Mixed-Mode Design

    ERIC Educational Resources Information Center

    Cernat, Alexandru; Lynn, Peter

    2018-01-01

    This article is concerned with the extent to which the propensity to participate in a web face-to-face sequential mixed-mode survey is influenced by the ability to communicate with sample members by e-mail in addition to mail. Researchers may be able to collect e-mail addresses for sample members and to use them subsequently to send survey…

  17. Array-based photoacoustic spectroscopy

    DOEpatents

    Autrey, S. Thomas; Posakony, Gerald J.; Chen, Yu

    2005-03-22

    Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. A photoacoustic spectroscopy sample array including a body having at least three recesses or affinity masses connected thereto is used in conjunction with a photoacoustic spectroscopy system. At least one acoustic detector is positioned near the recesses or affinity masses for detection of acoustic waves emitted from species of interest within the recesses or affinity masses.

  18. Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies.

    PubMed

    Gagnebin, Yoric; Tonoli, David; Lescuyer, Pierre; Ponte, Belen; de Seigneux, Sophie; Martin, Pierre-Yves; Schappler, Julie; Boccard, Julien; Rudaz, Serge

    2017-02-22

    Among the various biological matrices used in metabolomics, urine is a biofluid of major interest because of its non-invasive collection and its availability in large quantities. However, significant sources of variability in urine metabolomics based on UHPLC-MS are related to the analytical drift and variation of the sample concentration, thus requiring normalization. A sequential normalization strategy was developed to remove these detrimental effects, including: (i) pre-acquisition sample normalization by individual dilution factors to narrow the concentration range and to standardize the analytical conditions, (ii) post-acquisition data normalization by quality control-based robust LOESS signal correction (QC-RLSC) to correct for potential analytical drift, and (iii) post-acquisition data normalization by MS total useful signal (MSTUS) or probabilistic quotient normalization (PQN) to prevent the impact of concentration variability. This generic strategy was performed with urine samples from healthy individuals and was further implemented in the context of a clinical study to detect alterations in urine metabolomic profiles due to kidney failure. In the case of kidney failure, the relation between creatinine/osmolality and the sample concentration is modified, and relying only on these measurements for normalization could be highly detrimental. The sequential normalization strategy was demonstrated to significantly improve patient stratification by decreasing the unwanted variability and thus enhancing data quality. Copyright © 2016 Elsevier B.V. All rights reserved.
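Of the post-acquisition steps listed above, probabilistic quotient normalization (PQN) is simple to sketch. This minimal version uses the median profile across samples as the reference (one common choice; QC-sample references are also used) and is shown on toy data, not the study's metabolomic profiles:

```python
import numpy as np

def pqn_normalize(X):
    """Probabilistic quotient normalization: rescale each sample (row)
    by the median ratio of its features to a reference profile."""
    reference = np.median(X, axis=0)          # reference spectrum
    quotients = X / reference                 # feature-wise ratios
    dilution = np.median(quotients, axis=1)   # most probable dilution factor
    return X / dilution[:, None]

# Two toy "urine profiles" with identical composition, 2x dilution apart
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [2.0, 4.0, 6.0, 8.0]])
Xn = pqn_normalize(X)                         # rows become identical
```

Using the median of the quotients, rather than a total-signal sum, makes the estimated dilution factor robust to a few genuinely altered metabolites, which is exactly the property needed when kidney failure distorts creatinine and osmolality.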

  19. Sequential liquid biopsies reveal dynamic alterations of EGFR driver mutations and indicate EGFR amplification as a new mechanism of resistance to osimertinib in NSCLC.

    PubMed

    Knebel, Franciele H; Bettoni, Fabiana; Shimada, Andrea K; Cruz, Manoel; Alessi, João Victor; Negrão, Marcelo V; Reis, Luiz Fernando L; Katz, Artur; Camargo, Anamaria A

    2017-06-01

    Osimertinib is an EGFR-T790M-specific TKI, which has demonstrated impressive response rates in NSCLC after failure of first-line anti-EGFR TKIs. However, acquired resistance (AR) to osimertinib is also observed, and the molecular mechanisms of resistance are not yet fully understood. Monitoring and managing NSCLC patients who progressed on osimertinib is, therefore, emerging as an important clinical challenge. Sequential liquid biopsies were used to monitor a patient with EGFR-exon19del positive NSCLC, who received erlotinib and progressed through the acquisition of the EGFR-T790M mutation. Erlotinib was discontinued and osimertinib was initiated. Blood samples were collected at erlotinib progression and during osimertinib treatment for the detection of the activating (EGFR-exon19del) and resistance mutations (EGFR-T790M, EGFR-C797S, BRAF-V600E, METamp and ERBB2amp) in the plasma DNA using digital droplet PCR. Plasma levels of the activating EGFR-exon19del accurately paralleled the clinical and radiological progression of disease and allowed early detection of AR to osimertinib. Resistance to osimertinib coincided with the emergence of a small tumor cell subpopulation carrying the known EGFR-C797S resistance mutation and an additional subpopulation carrying amplified copies of EGFR-exon19del. Given the existence of multiple AR mechanisms, quantification of the original EGFR activating mutation, instead of the resistance mutations, can be efficiently used to monitor response to osimertinib, allowing early detection of AR. Absolute quantification of both activating and resistance mutations can provide important information on tumor clonal evolution upon progression to osimertinib. Selective amplification of the EGFR-exon19del allele may represent a novel resistance mechanism to osimertinib. Copyright © 2017 Elsevier B.V. All rights reserved.
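Digital droplet PCR quantification of the kind used above rests on Poisson statistics over droplet counts: because a droplet can hold more than one template copy, the mean copies per droplet is recovered as -ln(fraction of negative droplets). The sketch below is generic; the droplet counts and the ~0.85 nL droplet volume are illustrative assumptions, not values from the study:

```python
import math

def ddpcr_concentration(n_negative, n_total, droplet_volume_nl=0.85):
    """Poisson-corrected target concentration (copies/uL) from droplet
    counts: mean copies per droplet = -ln(fraction of negative droplets)."""
    lam = -math.log(n_negative / n_total)     # mean copies per droplet
    return lam / (droplet_volume_nl * 1e-3)   # nL -> uL

# Illustrative run: 12,000 of 15,000 droplets negative for the target
conc = ddpcr_concentration(12000, 15000)
```

Applying the same calculation to both the activating and the resistance mutation assays is what permits the absolute, allele-level quantification of clonal evolution described in the abstract.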

  20. Sequential (step-by-step) detection, identification and quantitation of extra virgin olive oil adulteration by chemometric treatment of chromatographic profiles.

    PubMed

    Capote, F Priego; Jiménez, J Ruiz; de Castro, M D Luque

    2007-08-01

    An analytical method for the sequential detection, identification and quantitation of extra virgin olive oil adulteration with four edible vegetable oils--sunflower, corn, peanut and coconut oils--is proposed. The only data required for this method are the results obtained from an analysis of the lipid fraction by gas chromatography-mass spectrometry. A total number of 566 samples (pure oils and samples of adulterated olive oil) were used to develop the chemometric models, which were designed to accomplish, step-by-step, the three aims of the method: to detect whether an olive oil sample is adulterated, to identify the type of adulterant used in the fraud, and to determine how much adulterant is in the sample. Qualitative analysis was carried out via two chemometric approaches, soft independent modelling of class analogy (SIMCA) and K nearest neighbours (KNN); both approaches exhibited prediction abilities that were always higher than 91% for adulterant detection and 88% for identification of the adulterant type. Quantitative analysis was based on partial least squares regression (PLSR), which yielded R2 values of >0.90 for calibration and validation sets and thus made it possible to determine adulteration with excellent precision according to the Shenk criteria.
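The detect-then-quantify cascade described above can be sketched with simple stand-ins: a k-nearest-neighbour detector (one of the paper's classifiers) and ordinary least squares in place of PLSR, applied to synthetic four-channel profiles. All numbers are hypothetical, and the intermediate adulterant-identification step is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 4-channel "chromatographic profiles": adulteration at a given
# fraction shifts three of the channels (all numbers are hypothetical)
def profile(level):
    base = np.array([5.0, 3.0, 1.0, 0.5]) + rng.normal(0.0, 0.02, 4)
    return base + level * np.array([0.0, 2.0, 4.0, 1.0])

train_levels = np.concatenate([np.zeros(20), rng.uniform(0.05, 0.5, 20)])
X = np.array([profile(l) for l in train_levels])
y = (train_levels > 0).astype(int)            # 1 = adulterated

def knn_detect(x, k=3):
    """Step 1, detection: majority vote of the k nearest training profiles."""
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    return int(y[idx].sum() > k / 2)

# Step 3, quantitation: least-squares fit of adulterant level on the
# adulterated training profiles (stand-in for PLS regression)
A = np.c_[X[y == 1], np.ones(20)]
coef, *_ = np.linalg.lstsq(A, train_levels[y == 1], rcond=None)

unknown = profile(0.3)                        # sample adulterated at 30%
detected = knn_detect(unknown)                # step 1 flags it
estimated = float(np.r_[unknown, 1.0] @ coef) # step 3 estimates the level
```

The sequential design matters: the quantitation model is only trained on, and only applied to, samples the detector flags, mirroring the step-by-step structure of the proposed method.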

  1. Carry-over of thermophilic Campylobacter spp. between sequential and adjacent poultry flocks.

    PubMed

    Alter, Thomas; Weber, Rita Margarete; Hamedy, Ahmad; Glünder, Gerhard

    2011-01-10

    Nineteen flocks of four poultry species were monitored at a veterinary field station to investigate the distribution and spread of Campylobacter genotypes between sequential and adjacent flocks. Caecal and liver samples were obtained at frequent intervals from birds of all flocks and examined for Campylobacter. Amplified fragment length polymorphism (AFLP) analysis was performed to genotype Campylobacter isolates. Of the 1643 caecal and liver samples investigated, 452 (27.5%) caecal samples and 11 (0.7%) liver samples contained Campylobacter. Of the caecal isolates 76.3% were identified as Campylobacter jejuni and 23.7% were identified as Campylobacter coli. Poultry flocks were largely colonized by more than one AFLP type and an intense exchange of Campylobacter genotypes between different poultry flocks occurred. These findings indicate that multiple genotypes can constitute the Campylobacter population within single poultry flocks, hinting at different sources of exposure and/or genetic drift within the Campylobacter population. Nevertheless, in most flocks single Campylobacter genotypes predominated. Some strains superseded others, resulting in colonization by successive Campylobacter genotypes during the observation period. In conclusion, the data demonstrate that the large genetic diversity of Campylobacter must be considered in epidemiological evaluations and microbial risk assessments of Campylobacter in poultry. Copyright © 2010 Elsevier B.V. All rights reserved.

  2. Evaluation of Lead Release in a Simulated Lead-Free Premise Plumbing System Using a Sequential Sampling Approach

    PubMed Central

    Ng, Ding-Quan; Lin, Yi-Pin

    2016-01-01

    In this pilot study, a modified sampling protocol was evaluated for the detection of lead contamination and locating the source of lead release in a simulated premise plumbing system with one-, three- and seven-day stagnation for a total period of 475 days. Copper pipes, stainless steel taps and brass fittings were used to assemble the “lead-free” system. Sequential sampling using 100 mL was used to detect lead contamination while that using 50 mL was used to locate the lead source. Elevated lead levels, far exceeding the World Health Organization (WHO) guideline value of 10 µg·L−1, persisted for as long as five months in the system. “Lead-free” brass fittings were identified as the source of lead contamination. Physical disturbances, such as renovation works, could cause short-term spikes in lead release. Orthophosphate was able to suppress total lead levels below 10 µg·L−1, but caused “blue water” problems. When orthophosphate addition was ceased, total lead levels began to spike within one week, implying that a continuous supply of orthophosphate was required to control total lead levels. Occasional total lead spikes were observed in one-day stagnation samples throughout the course of the experiments. PMID:26927154

  3. Evaluation of Lead Release in a Simulated Lead-Free Premise Plumbing System Using a Sequential Sampling Approach.

    PubMed

    Ng, Ding-Quan; Lin, Yi-Pin

    2016-02-27

    In this pilot study, a modified sampling protocol was evaluated for the detection of lead contamination and locating the source of lead release in a simulated premise plumbing system with one-, three- and seven-day stagnation for a total period of 475 days. Copper pipes, stainless steel taps and brass fittings were used to assemble the "lead-free" system. Sequential sampling using 100 mL was used to detect lead contamination while that using 50 mL was used to locate the lead source. Elevated lead levels, far exceeding the World Health Organization (WHO) guideline value of 10 µg · L(-1), persisted for as long as five months in the system. "Lead-free" brass fittings were identified as the source of lead contamination. Physical disturbances, such as renovation works, could cause short-term spikes in lead release. Orthophosphate was able to suppress total lead levels below 10 µg · L(-1), but caused "blue water" problems. When orthophosphate addition was ceased, total lead levels began to spike within one week, implying that a continuous supply of orthophosphate was required to control total lead levels. Occasional total lead spikes were observed in one-day stagnation samples throughout the course of the experiments.

  4. Temporal texture of associative encoding modulates recall processes.

    PubMed

    Tibon, Roni; Levy, Daniel A

    2014-02-01

    Binding aspects of an experience that are distributed over time is an important element of episodic memory. In the current study, we examined how the temporal complexity of an experience may govern the processes required for its retrieval. We recorded event-related potentials during episodic cued recall following paired-associate learning of concurrently and sequentially presented object-picture pairs. Cued recall success effects over anterior and posterior areas were apparent in several time windows. In anterior locations, these recall success effects were similar for concurrently and sequentially encoded pairs. However, in posterior sites clustered over the parietal scalp, the effect was larger for the retrieval of sequentially encoded pairs. We suggest that the anterior aspects of the mid-latency recall success effects may reflect working-with-memory operations or direct-access recall processes, while the more posterior aspects reflect recollective processes that are required for retrieval of episodes of greater temporal complexity. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. PARTICLE FILTERING WITH SEQUENTIAL PARAMETER LEARNING FOR NONLINEAR BOLD fMRI SIGNALS.

    PubMed

    Xia, Jing; Wang, Michelle Yongmei

    Analyzing the blood oxygenation level dependent (BOLD) effect in functional magnetic resonance imaging (fMRI) typically relies on recent time series analysis techniques. This work represents a significant improvement over existing approaches to system identification using nonlinear hemodynamic models, for three reasons. First, instead of using linearized approximations of the dynamics, we present a nonlinear filter based on the sequential Monte Carlo method to capture the inherent nonlinearities in the physiological system. Second, we simultaneously estimate the hidden physiological states and the system parameters through particle filtering with sequential parameter learning, fully exploiting the dynamic information in the BOLD signals. Third, when learning the unknown static parameters, we employ low-dimensional sufficient statistics for efficiency and to avoid potential degeneracy of the parameter estimates. The performance of the proposed method is validated using both simulated data and real BOLD fMRI data.
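    The sequential Monte Carlo machinery described above can be illustrated with a generic bootstrap (SIR) particle filter. The toy state-space model below is a standard benchmark, not the hemodynamic model of the paper, and the static-parameter learning step is omitted for brevity:

```python
import numpy as np

def bootstrap_particle_filter(obs, n_particles=500, q=0.5, r=0.5, seed=0):
    """Bootstrap (SIR) particle filter for a toy nonlinear model:
       x_t = 0.5*x_{t-1} + 25*x_{t-1}/(1 + x_{t-1}^2) + v_t,  v_t ~ N(0, q)
       y_t = x_t^2 / 20 + w_t,                               w_t ~ N(0, r)
    Returns the posterior mean estimate of the hidden state at each step."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, n_particles)              # initial particle cloud
    means = []
    for y in obs:
        # prediction: propagate particles through the state transition
        x = 0.5 * x + 25 * x / (1 + x**2) + rng.normal(0, np.sqrt(q), n_particles)
        # correction: weight particles by the observation likelihood
        w = np.exp(-0.5 * (y - x**2 / 20) ** 2 / r) + 1e-300  # guard against underflow
        w /= w.sum()
        # SIR step: resample particles in proportion to their weights
        x = x[rng.choice(n_particles, n_particles, p=w)]
        means.append(x.mean())
    return np.array(means)
```

The resampling step is exactly the sampling importance resampling (SIR) correction; the full method of the paper would additionally update parameter sufficient statistics after each resampling.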

  6. Comparison of precursor infiltration into polymer thin films via atomic layer deposition and sequential vapor infiltration using in-situ quartz crystal microgravimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padbury, Richard P.; Jur, Jesse S., E-mail: jsjur@ncsu.edu

    Previous research exploring inorganic materials nucleation behavior on polymers via atomic layer deposition indicates that hybrid organic–inorganic materials form within the subsurface of the polymer. This has inspired adaptations to the process, such as sequential vapor infiltration, which enhances the diffusion of organometallic precursors into the subsurface of the polymer to promote the formation of a hybrid organic–inorganic coating. This work highlights the fundamental difference in mass uptake behavior between atomic layer deposition and sequential vapor infiltration using in-situ methods. In particular, in-situ quartz crystal microgravimetry is used to compare the mass uptake behavior of trimethyl aluminum in poly(butylene terephthalate) and polyamide-6 polymer thin films. The importance of trimethyl aluminum diffusion into the polymer subsurface and the subsequent chemical reactions with polymer functional groups are discussed.

  7. The effect of sequential exposure of color conditions on time and accuracy of graphic symbol location.

    PubMed

    Alant, Erna; Kolatsis, Anna; Lilienfeld, Margi

    2010-03-01

    An important aspect of AAC concerns the user's ability to locate an aided visual symbol on a communication display in order to facilitate meaningful interaction with partners. Recent studies have suggested that the use of different colored symbols may influence the visual search process and that this, in turn, will influence the speed and accuracy of symbol location. This study examined the role of color in the rate and accuracy of identifying symbols on an 8-location overlay through the use of three color conditions (same, mixed and unique). Sixty typically developing preschool children were exposed to two different sequential exposures (Set 1 and Set 2). Participants searched for a target stimulus (either meaningful symbols or arbitrary forms) in a stimulus array. Findings indicated that the sequential exposures (orderings) affected both time and accuracy for both types of symbols in specific instances.

  8. Mining sequential patterns for protein fold recognition.

    PubMed

    Exarchos, Themis P; Papaloukas, Costas; Lampros, Christos; Fotiadis, Dimitrios I

    2008-02-01

    Protein data contain discriminative patterns that can be used in many beneficial applications if they are identified correctly. In this work, sequential pattern mining (SPM) is utilized for sequence-based fold recognition. Protein classification in terms of fold recognition plays an important role in computational protein analysis, since it can contribute to determining the function of a protein whose structure is unknown. Specifically, one of the most efficient SPM algorithms, cSPADE, is employed for the analysis of protein sequences. A classifier uses the extracted sequential patterns to assign proteins to the appropriate fold category. For training and evaluating the proposed method, we used protein sequences from the Protein Data Bank and the annotation of the SCOP database. The method exhibited an overall accuracy of 25% in a classification problem with 36 candidate categories. The classification performance reaches up to 56% when the five most probable protein folds are considered.
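    The core operation in sequential pattern mining, counting how many sequences contain a candidate pattern as an ordered subsequence and growing frequent patterns level-wise, can be sketched as follows. This is a naive Apriori-style miner for illustration, not the cSPADE algorithm used in the paper:

```python
def is_subsequence(pattern, seq):
    """True if `pattern` occurs within `seq` in order, gaps allowed."""
    it = iter(seq)
    return all(ch in it for ch in pattern)   # `ch in it` consumes the iterator

def frequent_patterns(sequences, min_support, max_len=3):
    """Naive level-wise sequential pattern miner: keep patterns supported by
    at least `min_support` sequences, extending only frequent prefixes."""
    alphabet = sorted(set().union(*sequences))
    frequent, level = {}, [(a,) for a in alphabet]
    while level:
        next_level = []
        for pat in level:
            support = sum(is_subsequence(pat, s) for s in sequences)
            if support >= min_support:
                frequent[''.join(pat)] = support
                if len(pat) < max_len:
                    next_level.extend(pat + (a,) for a in alphabet)
        level = next_level
    return frequent
```

The mined pattern supports could then serve as feature values for a downstream fold classifier, as the abstract describes.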

  9. Anomaly Detection in Dynamic Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turcotte, Melissa

    2014-10-14

    Anomaly detection in dynamic communication networks has many important security applications. These networks can be extremely large and so detecting any changes in their structure can be computationally challenging; hence, computationally fast, parallelisable methods for monitoring the network are paramount. For this reason the methods presented here use independent node and edge based models to detect locally anomalous substructures within communication networks. As a first stage, the aim is to detect changes in the data streams arising from node or edge communications. Throughout the thesis simple, conjugate Bayesian models for counting processes are used to model these data streams. A second stage of analysis can then be performed on a much reduced subset of the network comprising nodes and edges which have been identified as potentially anomalous in the first stage. The first method assumes communications in a network arise from an inhomogeneous Poisson process with piecewise constant intensity. Anomaly detection is then treated as a changepoint problem on the intensities. The changepoint model is extended to incorporate seasonal behavior inherent in communication networks. This seasonal behavior is also viewed as a changepoint problem acting on a piecewise constant Poisson process. In a static time frame, inference is made on this extended model via a Gibbs sampling strategy. In a sequential time frame, where the data arrive as a stream, a novel, fast Sequential Monte Carlo (SMC) algorithm is introduced to sample from the sequence of posterior distributions of the change points over time. A second method is considered for monitoring communications in a large scale computer network. The usage patterns in these types of networks are very bursty in nature and do not fit a Poisson process model. For tractable inference, discrete time models are considered, where the data are aggregated into discrete time periods and probability models are fitted to the communication counts. In a sequential analysis, anomalous behavior is then identified from outlying behavior with respect to the fitted predictive probability models. Seasonality is again incorporated into the model and is treated as a changepoint model on the transition probabilities of a discrete time Markov process. Second stage analytics are then developed which combine anomalous edges to identify anomalous substructures in the network.
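    The first-stage machinery described above, a conjugate Bayesian model for counting processes, can be sketched with a Gamma-Poisson pair: the Gamma posterior on an edge's communication rate updates in closed form, and a small tail probability under the negative-binomial posterior predictive flags an anomalous surge. This is a generic illustration, not the thesis code:

```python
def posterior_update(alpha, beta, count, exposure):
    """Gamma(alpha, beta) prior on a Poisson rate; after observing `count`
    events over `exposure` time, the posterior is Gamma(alpha+count, beta+exposure)."""
    return alpha + count, beta + exposure

def predictive_tail(alpha, beta, k):
    """P(X >= k), k >= 1, for the next unit-time count under the
    negative-binomial posterior predictive; a small value flags a surge."""
    if k <= 0:
        return 1.0
    p = beta / (beta + 1.0)        # negative-binomial success probability
    pmf = p ** alpha               # pmf at x = 0 (alpha need not be an integer)
    cdf, x = pmf, 0
    while x < k - 1:
        pmf *= (alpha + x) / (x + 1) * (1 - p)   # NB pmf recurrence
        cdf += pmf
        x += 1
    return max(0.0, 1.0 - cdf)
```

Each edge keeps its own (alpha, beta) pair, so the monitoring is embarrassingly parallel across edges, matching the scalability argument in the abstract.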

  10. Heat accumulation during sequential cortical bone drilling.

    PubMed

    Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R

    2016-03-01

    Significant research exists regarding heat production during single-hole bone drilling, but no published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation during sequential drilling with both Kirschner wires (K-wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially in moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the centers of four adjacent holes, 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K-wire and 2.0 and 2.5 mm standard drills). K-wire drilling increased the temperature rise from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found for the standard drills, with smaller increments. The maximum temperature rise for both tools increased from <0.5 °C to nearly 13 °C. The difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with the size difference being insignificant, and K-wires produced more heat than their twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.

  11. Sequential ALK Inhibitors Can Select for Lorlatinib-Resistant Compound ALK Mutations in ALK-Positive Lung Cancer.

    PubMed

    Yoda, Satoshi; Lin, Jessica J; Lawrence, Michael S; Burke, Benjamin J; Friboulet, Luc; Langenbucher, Adam; Dardaei, Leila; Prutisto-Chang, Kylie; Dagogo-Jack, Ibiayi; Timofeevski, Sergei; Hubbeling, Harper; Gainor, Justin F; Ferris, Lorin A; Riley, Amanda K; Kattermann, Krystina E; Timonina, Daria; Heist, Rebecca S; Iafrate, A John; Benes, Cyril H; Lennerz, Jochen K; Mino-Kenudson, Mari; Engelman, Jeffrey A; Johnson, Ted W; Hata, Aaron N; Shaw, Alice T

    2018-06-01

    The cornerstone of treatment for advanced ALK-positive lung cancer is sequential therapy with increasingly potent and selective ALK inhibitors. The third-generation ALK inhibitor lorlatinib has demonstrated clinical activity in patients who failed previous ALK inhibitors. To define the spectrum of ALK mutations that confer lorlatinib resistance, we performed accelerated mutagenesis screening of Ba/F3 cells expressing EML4-ALK. Under comparable conditions, N-ethyl-N-nitrosourea (ENU) mutagenesis generated numerous crizotinib-resistant but no lorlatinib-resistant clones harboring single ALK mutations. In similar screens with EML4-ALK containing single ALK resistance mutations, numerous lorlatinib-resistant clones emerged harboring compound ALK mutations. To determine the clinical relevance of these mutations, we analyzed repeat biopsies from lorlatinib-resistant patients. Seven of 20 samples (35%) harbored compound ALK mutations, including two identified in the ENU screen. Whole-exome sequencing in three cases confirmed the stepwise accumulation of ALK mutations during sequential treatment. These results suggest that sequential ALK inhibitors can foster the emergence of compound ALK mutations, identification of which is critical to informing drug design and developing effective therapeutic strategies. Significance: Treatment with sequential first-, second-, and third-generation ALK inhibitors can select for compound ALK mutations that confer high-level resistance to ALK-targeted therapies. A more efficacious long-term strategy may be up-front treatment with a third-generation ALK inhibitor to prevent the emergence of on-target resistance. Cancer Discov; 8(6); 714-29. ©2018 AACR. This article is highlighted in the In This Issue feature, p. 663. ©2018 American Association for Cancer Research.

  12. Trial Sequential Analysis in systematic reviews with meta-analysis.

    PubMed

    Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian

    2017-03-06

    Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size, accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentist approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis, as well as the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of a meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance.
Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that Trial Sequential Analysis provides better control of type I and type II errors than traditional naïve meta-analysis. Trial Sequential Analysis represents an analysis of meta-analytic data with transparent assumptions and better control of type I and type II errors than traditional meta-analysis using naïve unadjusted confidence intervals.
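    The central quantity above can be sketched numerically: the required information size for a binary-outcome meta-analysis, inflated by a diversity-based adjustment factor of 1/(1 − D²). Both the standard two-group sample-size formula and the 1/(1 − D²) inflation are assumptions of this sketch of the TSA approach; real Trial Sequential Analysis software additionally computes Lan-DeMets monitoring boundaries, which are beyond this example:

```python
from statistics import NormalDist

def diversity_adjusted_is(p_control, rrr, alpha=0.05, beta=0.10, d2=0.0):
    """Diversity-adjusted required information size for a binary outcome:
    the two-group sample size for two-sided alpha and power 1 - beta,
    inflated by 1 / (1 - D^2) to account for between-trial heterogeneity.
    p_control: control-group event proportion; rrr: relative risk reduction."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(1 - beta)
    p_exp = p_control * (1 - rrr)          # anticipated experimental proportion
    p_bar = (p_control + p_exp) / 2
    delta = p_control - p_exp              # absolute risk difference
    ris = 4 * (z_a + z_b) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return ris / (1 - d2)
```

For example, with a 10% control event rate, a 20% relative risk reduction and no diversity, the required information size is roughly 8,600 randomised participants; D² = 0.5 doubles it.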

  13. NMR Studies of Dynamic Biomolecular Conformational Ensembles

    PubMed Central

    Torchia, Dennis A.

    2015-01-01

    Multidimensional heteronuclear NMR approaches can provide nearly complete sequential signal assignments of isotopically enriched biomolecules. The availability of assignments together with measurements of spin relaxation rates, residual spin interactions, J-couplings and chemical shifts provides information at atomic resolution about internal dynamics on timescales ranging from ps to ms, both in solution and in the solid state. However, due to the complexity of biomolecules, it is not possible to extract a unique atomic-resolution description of biomolecular motions even from extensive NMR data when many conformations are sampled on multiple timescales. For this reason, powerful computational approaches are increasingly applied to large NMR data sets to elucidate conformational ensembles sampled by biomolecules. In the past decade, considerable attention has been directed at an important class of biomolecules that function by binding to a wide variety of target molecules. Questions of current interest are: “Does the free biomolecule sample a conformational ensemble that encompasses the conformations found when it binds to various targets; and if so, on what time scale is the ensemble sampled?” This article reviews recent efforts to answer these questions, with a focus on comparing ensembles obtained for the same biomolecules by different investigators. A detailed comparison of results obtained is provided for three biomolecules: ubiquitin, calmodulin and the HIV-1 trans-activation response RNA. PMID:25669739

  14. Use of X-Ray Absorption Spectroscopy (XAS) to Speciate Manganese in Airborne Particulate Matter from 5 Counties Across the US

    PubMed Central

    Datta, Saugata; Rule, Ana M; Mihalic, Jana N; Chillrud, Steve N; Bostick, Benjamin C.; Ramos-Bonilla, Juan P; Han, Inkyu; Polyak, Lisa M; Geyh, Alison S; Breysse, Patrick N

    2012-01-01

    The purpose of this study is to characterize manganese oxidation states and speciation in airborne particulate matter (PM), and to describe how these potentially important determinants of PM toxicity vary by location. Ambient PM samples were collected from five counties across the US using a high-volume sequential cyclone system that collects PM in dry bulk form, segregated into "coarse" and "fine" size fractions. The fine fraction was analyzed for this study. Analyses included total Mn by ICP-MS, and characterization of oxidation states and speciation by X-ray Absorption Spectroscopy (XAS). XAS spectra of all samples and of ten standard Mn compounds were obtained at the National Synchrotron Light Source. The XAS data were analyzed using Linear Combination Fitting (LCF). Results of the LCF analysis describe differences in composition between samples. Mn(II) acetate and Mn(II) oxide are present in all samples, while Mn(II) carbonate and Mn(IV) oxide are absent. To the best of our knowledge, this is the first paper to characterize the Mn composition of ambient PM and examine differences between urban sites in the US. Differences in oxidation state and composition indicate regional variations in sources and atmospheric chemistry that may help explain differences in health effects identified in epidemiological studies. PMID:22309075
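    Linear Combination Fitting, as used above, expresses a measured spectrum as a weighted sum of standard spectra. A simplified, unconstrained least-squares version is sketched below; real LCF software enforces non-negativity and sum-to-one constraints during the fit, whereas here the weights are merely clipped and renormalized afterwards:

```python
import numpy as np

def linear_combination_fit(standards, sample):
    """Fit a sample spectrum as a weighted sum of standard spectra (rows of
    `standards`) by ordinary least squares, then clip negative weights and
    renormalize to fractions. A simplification of constrained LCF."""
    A = np.asarray(standards, dtype=float).T   # columns = standard spectra
    w, *_ = np.linalg.lstsq(A, np.asarray(sample, dtype=float), rcond=None)
    w = np.clip(w, 0.0, None)
    return w / w.sum()
```

The returned fractions are the kind of quantity reported in an LCF analysis, e.g. the relative contributions of Mn(II) acetate and Mn(II) oxide to a sample spectrum.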

  15. Flow analysis techniques for phosphorus: an overview.

    PubMed

    Estela, José Manuel; Cerdà, Víctor

    2005-04-15

    A bibliographical review of the implementation and the results obtained with different flow analytical techniques for the determination of phosphorus is carried out. The sources, occurrence and importance of phosphorus, together with several aspects regarding the analysis and terminology used in the determination of this element, are briefly described. A classification as well as a brief description of the basis, advantages and disadvantages of the different existing flow techniques, namely segmented flow analysis (SFA), flow injection analysis (FIA), sequential injection analysis (SIA), all injection analysis (AIA), batch injection analysis (BIA), multicommutated FIA (MCFIA), multisyringe FIA (MSFIA) and multipumped FIA (MPFIA), is also provided. The most relevant manuscripts on the analysis of phosphorus by flow techniques are classified according to the instrumental detection technique used, with the aim of facilitating their study and providing an overall scope. Finally, the analytical characteristics of numerous flow methods reported in the literature are provided in the form of a table, and their applicability to samples with different matrices, namely water samples (marine, river, estuarine, waste, industrial, drinking, etc.), soil leachates, plant leaves, toothpaste, detergents, foodstuffs (wine, orange juice, milk), biological samples, sugars, fertilizers, hydroponic solutions, soil extracts and cyanobacterial biofilms, is tabulated.

  16. Medium rare or well done - how would you like your snail? Influence of cooking practices on the isotopic composition of land snails' shells

    NASA Astrophysics Data System (ADS)

    Kwiecien, O.; Breitenbach, S. F. M.

    2017-12-01

    Since the seminal work of Goodfriend (1992, EPSL 11), several studies have confirmed a relation between the isotopic composition (δ18O, δ13C) of land snail shell carbonate and environmental parameters such as precipitation amount, moisture source, temperature and vegetation. This relation, however, is not straightforward and, importantly, is site-dependent. The choice of sampling strategy (discrete or bulk sampling), the cleaning procedure, and/or the pre-depositional history further complicate shell analysis. The advantage of using snail shells as an environmental archive lies in their limited mobility, and thus an intrinsic aptitude for recording local, site-specific conditions. However, snail shells found at archaeological sites, even if of local origin, often represent a dietary component, and boiling or roasting could potentially alter the isotopic signature of the aragonite material. While thermal processing affects the clumped isotope composition of carbonates, its influence on traditional isotopes is still debated (Ritter et al. 2017, Sedimentology; Müller et al. 2017, Scientific Reports). Consequently, a proper sampling strategy is of great importance and should be chosen according to the scientific question. Horizontal high-resolution shell sampling (drill holes along the growth axis, across growth lines) provides insights into the amplitude of seasonal variability, while vertical high-resolution sampling (multiple drill holes along the same growth line) produces reproducible results. We took advantage of this reproducibility and, on a yet unprecedented scale, experimentally and sequentially tested the influence of boiling on the δ18O and δ13C signatures of shells of modern Helix pomatia. Our results challenge recent reports of alteration due to boiling (Müller et al., 2017, Scientific Reports) and support the uncompromised application of snail shells from archaeological sites for paleoenvironmental reconstructions.

  17. The use of group sequential, information-based sample size re-estimation in the design of the PRIMO study of chronic kidney disease.

    PubMed

    Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi

    2011-04-01

    Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsules benefits in Renal failure Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild-moderate chronic kidney disease are randomized to paricalcitol or placebo after left ventricular hypertrophy is confirmed by cardiac echocardiography. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline and at 24 and 48 weeks; this is the primary efficacy endpoint of the study. Because of limited prior data for estimating sample size, a maximum-information group sequential design with sample size re-estimation is implemented, allowing sample size adjustment based on the nuisance parameter estimated from the interim data. An interim efficacy analysis is planned at a pre-specified time point conditioned on the status of enrollment. The decision to increase the sample size depends on the observed treatment effect. A repeated measures analysis model using available data at Weeks 24 and 48, with a backup ANCOVA model analyzing change from baseline to the final nonmissing observation, is pre-specified to evaluate the treatment effect. A gamma-family spending function is employed to control the family-wise Type I error rate, since stopping for success is planned at the interim efficacy analysis.
If enrollment is slower than anticipated, the smaller sample size available at the interim efficacy analysis and the greater percentage of missing Week 48 data might decrease the accuracy of parameter estimation, for either the nuisance parameter or the treatment effect, which might in turn affect the interim decision-making. Combining a group sequential design with sample size re-estimation in clinical trial design has the potential to improve efficiency and to increase the probability of trial success while ensuring the integrity of the study.
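    The gamma family of Type I error spending functions referenced above is the Hwang-Shih-DeCani family. A minimal sketch of the cumulative alpha spent at information fraction t (this also illustrates the convex-versus-concave distinction raised in the safety-surveillance abstract earlier on this page):

```python
import math

def gamma_spending(t, alpha=0.05, gamma=-4.0):
    """Hwang-Shih-DeCani (gamma-family) alpha-spending function: cumulative
    Type I error spent at information fraction t in [0, 1].
    gamma < 0 gives a convex, O'Brien-Fleming-like shape (little alpha
    spent early); gamma > 0 gives a concave, Pocock-like shape (much of
    the alpha spent early)."""
    if gamma == 0:
        return alpha * t
    return alpha * (1 - math.exp(-gamma * t)) / (1 - math.exp(-gamma))
```

All members spend 0 at t = 0 and the full alpha at t = 1; the gamma parameter only shapes how the budget is released in between.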

  18. Phylogenetic analysis accounting for age-dependent death and sampling with applications to epidemics.

    PubMed

    Lambert, Amaury; Alexander, Helen K; Stadler, Tanja

    2014-07-07

    The reconstruction of phylogenetic trees based on viral genetic sequence data sequentially sampled from an epidemic provides estimates of the past transmission dynamics, by fitting epidemiological models to these trees. To our knowledge, none of the epidemiological models currently used in phylogenetics can account for recovery rates and sampling rates dependent on the time elapsed since transmission, i.e. age of infection. Here we introduce an epidemiological model where infectives leave the epidemic, by either recovery or sampling, after some random time which may follow an arbitrary distribution. We derive an expression for the likelihood of the phylogenetic tree of sampled infectives under our general epidemiological model. The analytic concept developed in this paper will facilitate inference of past epidemiological dynamics and provide an analytical framework for performing very efficient simulations of phylogenetic trees under our model. The main idea of our analytic study is that the non-Markovian epidemiological model giving rise to phylogenetic trees growing vertically as time goes by can be represented by a Markovian "coalescent point process" growing horizontally by the sequential addition of pairs of coalescence and sampling times. As examples, we discuss two special cases of our general model, described in terms of influenza and HIV epidemics. Though phrased in epidemiological terms, our framework can also be used for instance to fit macroevolutionary models to phylogenies of extant and extinct species, accounting for general species lifetime distributions. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. A common mechanism underlies changes of mind about decisions and confidence

    PubMed Central

    van den Berg, Ronald; Anandalingam, Kavitha; Zylberberg, Ariel; Kiani, Roozbeh; Shadlen, Michael N; Wolpert, Daniel M

    2016-01-01

    Decisions are accompanied by a degree of confidence that a selected option is correct. A sequential sampling framework explains the speed and accuracy of decisions and extends naturally to the confidence that the decision rendered is likely to be correct. However, discrepancies between confidence and accuracy suggest that confidence might be supported by mechanisms dissociated from the decision process. Here we show that this discrepancy can arise naturally because of simple processing delays. When participants were asked to report choice and confidence simultaneously, their confidence, reaction time and a perceptual decision about motion were explained by bounded evidence accumulation. However, we also observed revisions of the initial choice and/or confidence. These changes of mind were explained by a continuation of the mechanism that led to the initial choice. Our findings extend the sequential sampling framework to vacillation about confidence and invite caution in interpreting dissociations between confidence and accuracy. DOI: http://dx.doi.org/10.7554/eLife.12192.001 PMID:26829590
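    The bounded evidence accumulation (sequential sampling) process invoked above can be sketched as a discretized drift-diffusion trial. Changes of mind correspond to letting accumulation continue after the bound is reached; this minimal version stops at the bound:

```python
import random

def diffusion_trial(drift, bound=1.0, dt=0.001, sigma=1.0, seed=None):
    """One sequential-sampling (drift-diffusion) trial: accumulate noisy
    evidence until it crosses +bound (choice 1) or -bound (choice 0).
    Returns (choice, decision_time_in_seconds)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < bound:
        # Euler step: deterministic drift plus Gaussian diffusion noise
        x += drift * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else 0, t)
```

With positive drift, most trials terminate at the upper bound, and harder conditions (smaller drift) yield both longer decision times and lower accuracy, the joint pattern this framework is known for.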

  20. Heavy metal accumulation in surface sediments at the port of Cagliari (Sardinia, western Mediterranean): Environmental assessment using sequential extractions and benthic foraminifera.

    PubMed

    Schintu, Marco; Marrucci, Alessandro; Marras, Barbara; Galgani, Francois; Buosi, Carla; Ibba, Angelo; Cherchi, Antonietta

    2016-10-15

    Surface sediments were collected at the port of Cagliari (Sardinia, Italy), which includes the oil terminal of one of the largest oil refineries in the Mediterranean. Significant trace metal concentrations were found throughout the port area. Sequential extraction of metals from the different sediment fractions (BCR method) showed a higher risk of remobilisation for Cd, which is mostly bound to the exchangeable fraction. Foraminiferal density and species richness were variable across the study area. The living assemblages were characterized by low diversity in samples collected close to the port areas. Ammonia tepida and bolivinids, which were positively correlated with heavy metal concentrations and organic matter content, appeared to tolerate the environmental disturbance. The sampling sites characterized by the highest values of biotic indices were located far from the port areas and presented an epiphytic and epifaunal biocoenosis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.

    PubMed

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-06-01

    Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease and sleep disorders. This paper presents a new method that extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, a simple random sampling (SRS) technique is used to extract features from the time domain of the EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS_SVM) classifier, which classifies the EEG signals based on the features extracted by SRS and selected by SFS. The experimental results show that the method achieves 99.90%, 99.80% and 100% for classification accuracy, sensitivity and specificity, respectively.
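    The sequential feature selection step can be sketched generically: greedy forward selection adds, at each step, the feature whose inclusion most improves a user-supplied score (e.g., cross-validated classifier accuracy). This illustrates SFS in general, not the paper's implementation:

```python
def sequential_forward_selection(candidates, score, k):
    """Greedy sequential forward selection: starting from the empty set,
    repeatedly add the candidate feature whose inclusion maximizes
    `score(subset)` until k features have been chosen."""
    selected, remaining = [], list(candidates)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

In the pipeline above, `candidates` would index SRS-extracted time-domain features and `score` would wrap the LS_SVM's validation accuracy; here any callable works.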

  2. Free energy computations by minimization of Kullback-Leibler divergence: An efficient adaptive biasing potential method for sparse representations

    NASA Astrophysics Data System (ADS)

    Bilionis, I.; Koutsourelakis, P. S.

    2012-05-01

    The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history-dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function, which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes based on adaptive Sequential Monte Carlo that can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.
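    The objective minimized above is a Kullback-Leibler divergence between densities. As a self-contained illustration of that quantity (not the adaptive biasing scheme itself), here is the KL divergence between two Gaussians computed in closed form and by the Monte Carlo estimator E_p[log p(X) − log q(X)], the kind of sample-based estimate a simulation-driven scheme must rely on:

```python
import math, random

def kl_gaussian(mu1, s1, mu2, s2):
    """Closed-form KL( N(mu1, s1^2) || N(mu2, s2^2) )."""
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

def kl_monte_carlo(mu1, s1, mu2, s2, n=100000, seed=1):
    """Sample-based estimate E_p[log p(X) - log q(X)] with X ~ p = N(mu1, s1^2)."""
    rng = random.Random(seed)
    def logpdf(x, mu, s):
        return -0.5 * ((x - mu) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu1, s1)
        total += logpdf(x, mu1, s1) - logpdf(x, mu2, s2)
    return total / n
```

Driving such a divergence toward zero is what aligns the biased sampling density with the target in methods of this family.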

  3. An exploratory sequential design to validate measures of moral emotions.

    PubMed

    Márquez, Margarita G; Delgado, Ana R

    2017-05-01

    This paper presents an exploratory and sequential mixed methods approach in validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase, when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.

  4. Sequential Injection Chromatography with an Ultra-short Monolithic Column for the Low-Pressure Separation of α-Tocopherol and γ-Oryzanol in Vegetable Oils and Nutrition Supplements.

    PubMed

    Thaithet, Sujitra; Kradtap Hartwell, Supaporn; Lapanantnoppakhun, Somchai

    2017-01-01

    A low-pressure separation procedure for α-tocopherol and γ-oryzanol was developed based on a sequential injection chromatography (SIC) system coupled with an ultra-short (5 mm) C-18 monolithic column, as a lower-cost and more compact alternative to the HPLC system. A green sample preparation, dilution with a small amount of hexane followed by liquid-liquid extraction with 80% ethanol, was proposed. Very good separation resolution (Rs = 3.26), a satisfactory separation time (10 min) and a total run time including column equilibration (16 min) were achieved. The linear working range was found to be 0.4-40 μg with R2 greater than 0.99. The detection limits of both analytes were 0.28 μg with the repeatability within 5% RSD (n = 7). Quantitative analyses of the two analytes in vegetable oil and nutrition supplement samples, using the proposed SIC method, agree well with the results from HPLC.

  5. Sequential Extraction Results and Mineralogy of Mine Waste and Stream Sediments Associated With Metal Mines in Vermont, Maine, and New Zealand

    USGS Publications Warehouse

    Piatak, N.M.; Seal, R.R.; Sanzolone, R.F.; Lamothe, P.J.; Brown, Z.A.; Adams, M.

    2007-01-01

    We report results from sequential extraction experiments and the quantitative mineralogy for samples of stream sediments and mine wastes collected from metal mines. Samples were from the Elizabeth, Ely Copper, and Pike Hill Copper mines in Vermont, the Callahan Mine in Maine, and the Martha Mine in New Zealand. The extraction technique targeted the following operationally defined fractions and solid-phase forms: (1) soluble, adsorbed, and exchangeable fractions; (2) carbonates; (3) organic material; (4) amorphous iron- and aluminum-hydroxides and crystalline manganese-oxides; (5) crystalline iron-oxides; (6) sulfides and selenides; and (7) residual material. For most elements, the sum of an element over all extraction steps correlated well with the original unleached concentration. Also, the quantitative mineralogy of the original material compared to that of the residues from two extraction steps gave insight into the effectiveness of reagents at dissolving targeted phases. The data are presented here with minimal interpretation or discussion; further analyses and interpretation will be presented elsewhere.

  6. Value for money in changing clinical practice: should decisions about guidelines and implementation strategies be made sequentially or simultaneously?

    PubMed

    Hoomans, Ties; Severens, Johan L; Evers, Silvia M A A; Ament, Andre J H A

    2009-01-01

    Decisions about clinical practice change, that is, which guidelines to adopt and how to implement them, can be made sequentially or simultaneously. Decision makers adopting a sequential approach first compare the costs and effects of alternative guidelines to select the best set of guideline recommendations for patient management and subsequently examine the implementation costs and effects to choose the best strategy to implement the selected guideline. In an integral approach, decision makers simultaneously decide about the guideline and the implementation strategy on the basis of the overall value for money in changing clinical practice. This article demonstrates that the decision to use a sequential v. an integral approach affects the need for detailed information and the complexity of the decision analytic process. More importantly, it may lead to different choices of guidelines and implementation strategies for clinical practice change. The differences in decision making and decision analysis between the alternative approaches are comprehensively illustrated using 2 hypothetical examples. We argue that, in most cases, an integral approach to deciding about change in clinical practice is preferred, as this provides more efficient use of scarce health-care resources.

  7. PC_Eyewitness: evaluating the New Jersey method.

    PubMed

    MacLin, Otto H; Phelan, Colin M

    2007-05-01

    One important variable in eyewitness identification research is lineup administration procedure. Lineups administered sequentially (one at a time) have been shown to reduce the number of false identifications in comparison with those administered simultaneously (all at once). As a result, some policymakers have adopted sequential administration. However, they have made slight changes to the method used in psychology laboratories. Eyewitnesses in the field are allowed to take multiple passes through a lineup, whereas participants in the laboratory are allowed only one pass. PC_Eyewitness (PCE) is a computerized system used to construct and administer simultaneous or sequential lineups in both the laboratory and the field. It is currently being used in laboratories investigating eyewitness identification in the United States, Canada, and abroad. A modified version of PCE is also being developed for a local police department. We developed a new module for PCE, the New Jersey module, to examine the effects of a second pass. We found that the sequential advantage was eliminated when the participants were allowed to view the lineup a second time. The New Jersey module, and steps we are taking to improve on the module, are presented here and are being made available to the research and law enforcement communities.

  8. Sequential Versus Concurrent Trastuzumab in Adjuvant Chemotherapy for Breast Cancer

    PubMed Central

    Perez, Edith A.; Suman, Vera J.; Davidson, Nancy E.; Gralow, Julie R.; Kaufman, Peter A.; Visscher, Daniel W.; Chen, Beiyun; Ingle, James N.; Dakhil, Shaker R.; Zujewski, JoAnne; Moreno-Aspitia, Alvaro; Pisansky, Thomas M.; Jenkins, Robert B.

    2011-01-01

    Purpose NCCTG (North Central Cancer Treatment Group) N9831 is the only randomized phase III trial evaluating trastuzumab added sequentially or used concurrently with chemotherapy in resected stages I to III invasive human epidermal growth factor receptor 2–positive breast cancer. Patients and Methods Patients received doxorubicin and cyclophosphamide every 3 weeks for four cycles, followed by paclitaxel weekly for 12 weeks (arm A), paclitaxel plus sequential trastuzumab weekly for 52 weeks (arm B), or paclitaxel plus concurrent trastuzumab for 12 weeks followed by trastuzumab for 40 weeks (arm C). The primary end point was disease-free survival (DFS). Results Comparison of arm A (n = 1,087) and arm B (n = 1,097), with 6-year median follow-up and 390 events, revealed 5-year DFS rates of 71.8% and 80.1%, respectively. DFS was significantly increased with trastuzumab added sequentially to paclitaxel (log-rank P < .001; arm B/arm A hazard ratio [HR], 0.69; 95% CI, 0.57 to 0.85). Comparison of arm B (n = 954) and arm C (n = 949), with 6-year median follow-up and 313 events, revealed 5-year DFS rates of 80.1% and 84.4%, respectively. There was an increase in DFS with concurrent trastuzumab and paclitaxel relative to sequential administration (arm C/arm B HR, 0.77; 99.9% CI, 0.53 to 1.11), but the P value (.02) did not cross the prespecified O'Brien-Fleming boundary (.00116) for the interim analysis. Conclusion DFS was significantly improved with 52 weeks of trastuzumab added to adjuvant chemotherapy. On the basis of a positive risk-benefit ratio, we recommend that trastuzumab be incorporated into a concurrent regimen with taxane chemotherapy as an important standard-of-care treatment alternative to a sequential regimen. PMID:22042958

  9. Spatial Distribution and Sampling Plans With Fixed Level of Precision for Citrus Aphids (Hom., Aphididae) on Two Orange Species.

    PubMed

    Kafeshani, Farzaneh Alizadeh; Rajabpour, Ali; Aghajanzadeh, Sirous; Gholamian, Esmaeil; Farkhari, Mohammad

    2018-04-02

    Aphis spiraecola Patch, Aphis gossypii Glover, and Toxoptera aurantii Boyer de Fonscolombe are three important aphid pests of citrus orchards. In this study, spatial distributions of the aphids on two orange species, Satsuma mandarin and Thomson navel, were evaluated using Taylor's power law and Iwao's patchiness regression. In addition, a fixed-precision sequential sampling plan was developed for each species on each host plant with Green's model at precision levels of 0.25 and 0.1. The results revealed that the spatial distribution parameters, and therefore the sampling plan, differed significantly with aphid and host plant species. Taylor's power law provided a better fit for the data than Iwao's patchiness regression. Except for T. aurantii on Thomson navel orange, which showed a regular dispersion pattern, the spatial distribution patterns of the aphids were aggregative on both citrus species. The optimum sample size varied from 30-2061 and 1-1622 shoots on Satsuma mandarin and Thomson navel orange, respectively, depending on aphid species and desired precision level. The calculated stop lines on Satsuma mandarin and Thomson navel orange ranged from 0.48 to 19 and from 0.19 to 80.4 aphids per 24 shoots, respectively. The performance of the sampling plan was validated by resampling analysis using the resampling for validation of sampling plans (RVSP) software. This sampling program is useful for IPM programs targeting these aphids in citrus orchards.
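
    Green's model referenced above turns fitted Taylor's power law parameters into a cumulative-count stop line. The sketch below uses illustrative values for a, b, the precision D and the per-shoot counts, not the fitted values from this study (D = 0.5 is deliberately coarse so the short series stops early):

    ```python
    def green_stop_line(n, a, b, D):
        """Green's fixed-precision stop line: sampling can stop once the
        cumulative count after n units reaches
        T_n = (D**2 / a) ** (1 / (b - 2)) * n ** ((b - 1) / (b - 2)),
        where s**2 = a * m**b is Taylor's power law and D = SE/mean."""
        return (D ** 2 / a) ** (1.0 / (b - 2.0)) * n ** ((b - 1.0) / (b - 2.0))

    def sample_until_precise(counts, a, b, D, min_units=5):
        """Walk through per-shoot counts, stopping at the stop line."""
        total = 0
        for n, c in enumerate(counts, start=1):
            total += c
            if n >= min_units and total >= green_stop_line(n, a, b, D):
                return n, total  # units examined, cumulative count
        return len(counts), total  # series exhausted before stopping

    # Hypothetical aggregated aphid counts per shoot:
    counts = [0, 3, 12, 0, 7, 25, 1, 0, 9, 14, 2, 6, 0, 11, 3]
    print(sample_until_precise(counts, a=2.5, b=1.6, D=0.5))
    # -> (6, 47): the desired precision is reached after six shoots
    ```

    Tightening D to the study's 0.25 or 0.1 levels pushes the stop line up, which is why the reported optimum sample sizes grow so large at high precision.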

  10. Sequential allosteric mechanism of ATP hydrolysis by the CCT/TRiC chaperone is revealed through Arrhenius analysis

    PubMed Central

    Gruber, Ranit; Levitt, Michael; Horovitz, Amnon

    2017-01-01

    Knowing the mechanism of allosteric switching is important for understanding how molecular machines work. The CCT/TRiC chaperonin nanomachine undergoes ATP-driven conformational changes that are crucial for its folding function. Here, we demonstrate that insight into its allosteric mechanism of ATP hydrolysis can be achieved by Arrhenius analysis. Our results show that ATP hydrolysis triggers sequential "conformational waves." They also suggest that these waves start from subunits CCT6 and CCT8 (or CCT3 and CCT6) and proceed clockwise and counterclockwise, respectively. PMID:28461478

  11. Sequential allosteric mechanism of ATP hydrolysis by the CCT/TRiC chaperone is revealed through Arrhenius analysis.

    PubMed

    Gruber, Ranit; Levitt, Michael; Horovitz, Amnon

    2017-05-16

    Knowing the mechanism of allosteric switching is important for understanding how molecular machines work. The CCT/TRiC chaperonin nanomachine undergoes ATP-driven conformational changes that are crucial for its folding function. Here, we demonstrate that insight into its allosteric mechanism of ATP hydrolysis can be achieved by Arrhenius analysis. Our results show that ATP hydrolysis triggers sequential "conformational waves." They also suggest that these waves start from subunits CCT6 and CCT8 (or CCT3 and CCT6) and proceed clockwise and counterclockwise, respectively.

  12. The sequential megafaunal collapse hypothesis: Testing with existing data

    NASA Astrophysics Data System (ADS)

    DeMaster, Douglas P.; Trites, Andrew W.; Clapham, Phillip; Mizroch, Sally; Wade, Paul; Small, Robert J.; Hoef, Jay Ver

    2006-02-01

    Springer et al. [Springer, A.M., Estes, J.A., van Vliet, G.B., Williams, T.M., Doak, D.F., Danner, E.M., Forney, K.A., Pfister, B., 2003. Sequential megafaunal collapse in the North Pacific Ocean: an ongoing legacy of industrial whaling? Proceedings of the National Academy of Sciences 100 (21), 12,223-12,228] hypothesized that great whales were an important prey resource for killer whales, and that the removal of fin and sperm whales by commercial whaling in the region of the Bering Sea/Aleutian Islands (BSAI) in the late 1960s and 1970s led to cascading trophic interactions that caused the sequential decline of populations of harbor seal, northern fur seal, Steller sea lion and northern sea otter. This hypothesis, referred to as the Sequential Megafaunal Collapse (SMC), has stirred considerable interest because of its implication for ecosystem-based management. The SMC has the following assumptions: (1) fin whales and sperm whales were important as prey species in the Bering Sea; (2) the biomass of all large whale species (i.e., North Pacific right, fin, humpback, gray, sperm, minke and bowhead whales) was in decline in the Bering Sea in the 1960s and early 1970s; and (3) pinniped declines in the 1970s and 1980s were sequential. We concluded that the available data are not consistent with the first two assumptions of the SMC. Statistical tests of the timing of the declines do not support the assumption that pinniped declines were sequential. We propose two alternative hypotheses for the declines that are more consistent with the available data. 
While it is plausible, from energetic arguments, for predation by killer whales to have been an important factor in the declines of one or more of the three populations of pinnipeds and the sea otter population in the BSAI region over the last 30 years, we hypothesize that the declines in pinniped populations in the BSAI can best be understood by invoking a multiple factor hypothesis that includes both bottom-up forcing (as indicated by evidence of nutritional stress in the western Steller sea lion population) and top-down forcing (e.g., predation by killer whales, mortality incidental to commercial fishing, directed harvests). Our second hypothesis is a modification of the top-down forcing mechanism (i.e., killer whale predation on one or more of the pinniped populations and the sea otter population is mediated via the recovery of the eastern North Pacific population of the gray whale). We remain skeptical about the proposed link between commercial whaling on fin and sperm whales, which ended in the mid-1960s, and the observed decline of populations of northern fur seal, harbor seal, and Steller sea lion some 15 years later.

  13. Determination of essential elements in beverages, herbal infusions and dietary supplements using a new straightforward sequential approach based on flame atomic absorption spectrometry.

    PubMed

    Gómez-Nieto, Beatriz; Gismera, Mª Jesús; Sevilla, Mª Teresa; Procopio, Jesús R

    2017-03-15

    A simple method based on FAAS was developed for the sequential multi-element determination of Cu, Zn, Mn, Mg and Si in beverages and food supplements with successful results. The main absorption lines for Cu, Zn and Si and secondary lines for Mn and Mg were selected to carry out the measurements. The sample introduction was performed using a flow injection system. By measuring on the absorption line wings, the upper limit of the linear range was increased up to 110 mg L-1 for Mg, 200 mg L-1 for Si and 13 mg L-1 for Zn. The determination of the five elements was carried out, in triplicate, without the need for additional sample dilutions and/or re-measurements, using less than 3.5 mL of sample to perform the complete analysis. The LODs were 0.008 mg L-1 for Cu, 0.017 mg L-1 for Zn, 0.011 mg L-1 for Mn, 0.16 mg L-1 for Si and 0.11 mg L-1 for Mg. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Involving young people in decision making about sequential cochlear implantation.

    PubMed

    Ion, Rebecca; Cropper, Jenny; Walters, Hazel

    2013-11-01

    The National Institute for Health and Clinical Excellence guidelines recommended young people who currently have one cochlear implant be offered assessment for a second, sequential implant, due to the reported improvements in sound localization and speech perception in noise. The possibility and benefits of group information and counselling assessments were considered. Previous research has shown advantages of group sessions involving young people and their families and such groups which also allow young people opportunity to discuss their concerns separately to their parents/guardians are found to be 'hugely important'. Such research highlights the importance of involving children in decision-making processes. Families considering a sequential cochlear implant were invited to a group information/counselling session, which included time for parents and children to meet separately. Fourteen groups were held with approximately four to five families in each session, totalling 62 patients. The sessions were facilitated by the multi-disciplinary team, with a particular psychological focus in the young people's session. Feedback from families has demonstrated positive support for this format. Questionnaire feedback, to which nine families responded, indicated that seven preferred the group session to an individual session and all approved of separate groups for the child and parents/guardians. Overall the group format and psychological focus were well received in this typically surgical setting and emphasized the importance of involving the young person in the decision-making process. This positive feedback also opens up the opportunity to use a group format in other assessment processes.

  15. Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar

    2016-01-01

    This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.

  16. Comparison and Field Validation of Binomial Sampling Plans for Oligonychus perseae (Acari: Tetranychidae) on Hass Avocado in Southern California.

    PubMed

    Lara, Jesus R; Hoddle, Mark S

    2015-08-01

    Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making of O. perseae in California. An initial set of sequential binomial sampling models were developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with two-leaf infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a leaf sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
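
    A fixed-size binomial plan like the one validated above classifies mite density from the proportion of leaves whose count exceeds a tally threshold. The sketch below computes the resulting operating characteristic exactly from the binomial distribution; the mean-proportion model coefficients, sample size and critical count are hypothetical, not the fitted avocado values:

    ```python
    import math

    def prop_infested(m, alpha=-2.25, beta=0.53):
        """Empirical mean-proportion model ln(-ln(1 - p)) = alpha + beta*ln(m):
        probability that a leaf exceeds the tally threshold at mean density m.
        (alpha and beta are hypothetical coefficients.)"""
        return 1.0 - math.exp(-math.exp(alpha + beta * math.log(m)))

    def treat_probability(m, n=30, c=12):
        """Operating characteristic: probability that at least c of n sampled
        leaves are over the tally, i.e., the block is classified as above
        the action threshold when the true mean density is m."""
        p = prop_infested(m)
        return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
                   for k in range(c, n + 1))

    for m in (10, 50, 100):
        print(f"mean density {m}: P(classify above threshold) = {treat_probability(m):.3f}")
    ```

    The study layers average-sample-number analysis and field validation on top of operating characteristic curves of this kind.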

  17. Multidrug resistance among new tuberculosis cases: detecting local variation through lot quality-assurance sampling.

    PubMed

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-03-01

    Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored 3 classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at 2 sets of thresholds: low MDR TB = 2% and high MDR TB = 10%; and low MDR TB = 5% and high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
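
    A two-way static LQAS rule of the kind explored above classifies an area as high-burden when the number of resistant cases in a sample of n exceeds a decision value d; the design question is choosing (n, d) so that both misclassification risks are acceptable at the chosen thresholds. A minimal sketch at the 2%/10% thresholds (the n and d values are illustrative, not the study's designs):

    ```python
    import math

    def binom_cdf(k, n, p):
        """P(X <= k) for X ~ Binomial(n, p)."""
        return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
                   for i in range(k + 1))

    def lqas_errors(n, d, p_low=0.02, p_high=0.10):
        """Two-way static LQAS rule: classify an area as 'high MDR TB' when
        more than d resistant cases are observed among n sampled cases.
        Returns (alpha, beta):
          alpha = P(classify high | true prevalence = p_low)
          beta  = P(classify low  | true prevalence = p_high)."""
        alpha = 1.0 - binom_cdf(d, n, p_low)
        beta = binom_cdf(d, n, p_high)
        return alpha, beta

    # Illustrative design: 50 sampled cases, decision value 2.
    alpha, beta = lqas_errors(n=50, d=2)
    print(f"alpha = {alpha:.3f}, beta = {beta:.3f}")
    ```

    Scanning candidate (n, d) pairs for the smallest n with both error probabilities below target tolerances reproduces the usual LQAS design procedure.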

  18. Ovalbumin-derived precursor peptides are transferred sequentially from gp96 and calreticulin to MHC I in the endoplasmic reticulum

    PubMed Central

    Kropp, Laura E.; Garg, Manish; Binder, Robert J.

    2010-01-01

    Cellular peptides generated by proteasomal degradation of proteins in the cytosol and destined for presentation by MHC I are associated with several chaperones. Hsp70, hsp90 and the TCP1-ring complex have been implicated as important cytosolic players for chaperoning these peptides. In this study we report that gp96 and calreticulin are essential for chaperoning peptides in the endoplasmic reticulum. Importantly we demonstrate that cellular peptides are transferred sequentially from gp96 to calreticulin and then to MHC I forming a relay line. Disruption of this relay line by removal of gp96 or calreticulin prevents the binding of peptides by MHC I and hence presentation of the MHC I-peptide complex on the cell surface. Our results are important for understanding how peptides are processed and trafficked within the endoplasmic reticulum before exiting in association with MHC I heavy chains and β2-microglobulin as a trimolecular complex. PMID:20410492

  19. Student Academic Performance in Undergraduate Managerial-Accounting Courses

    ERIC Educational Resources Information Center

    Al-Twaijry, Abdulrahman Ali

    2010-01-01

    The author's purpose was to identify potential factors possibly affecting student performance in three sequential management-accounting courses: Managerial Accounting (MA), Cost Accounting (CA), and Advanced Managerial Accounting (AMA) within the Saudi Arabian context. The sample, which was used to test the developed hypotheses, included 312…

  20. ASSESSING SPECIATION AND RELEASE OF HEAVY METALS FROM COAL COMBUSTION PRODUCTS

    EPA Science Inventory

    In this study, the speciation of heavy metals such as arsenic, selenium, lead, zinc and mercury in coal combustion products (CCPs) was evaluated using sequential extraction procedures. Coal fly ash, bottom ash and flue gas desulphurization (FGD) sludge samples were used in the ex...

  1. Tractable Experiment Design via Mathematical Surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    This presentation summarizes the development and implementation of quantitative design criteria motivated by targeted inference objectives for identifying new, potentially expensive computational or physical experiments. The first application is concerned with estimating features of quantities of interest arising from complex computational models, such as quantiles or failure probabilities. A sequential strategy is proposed for iterative refinement of the importance distributions used to efficiently sample the uncertain inputs to the computational model. In the second application, effective use of mathematical surrogates is investigated to help alleviate the analytical and numerical intractability often associated with Bayesian experiment design. This approach allows for the incorporation of prior information into the design process without the need for gross simplification of the design criterion. Illustrative examples of both design problems will be presented as an argument for the relevance of these research problems.
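
    Iterative refinement of an importance distribution for failure-probability estimation, as described above, can be sketched on a one-dimensional toy problem. The Gaussian model, the cross-entropy-style mean update and all numerical values below are illustrative assumptions, not the presentation's actual criteria:

    ```python
    import math
    import random

    def normal_pdf(x, mu=0.0):
        """Unit-variance normal density N(mu, 1)."""
        return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

    def tail_estimate(threshold, shift, n, rng):
        """Importance-sampling estimate of P(X > threshold) for X ~ N(0, 1),
        drawing from the shifted proposal N(shift, 1) and reweighting."""
        acc = 0.0
        for _ in range(n):
            x = rng.gauss(shift, 1.0)
            if x > threshold:
                acc += normal_pdf(x) / normal_pdf(x, shift)
        return acc / n

    def adaptive_tail_estimate(threshold, rounds=3, n=20000, seed=1):
        """Sequentially move the proposal mean toward the failure region
        (cross-entropy-style update: the mean of observed failure samples),
        then estimate with the refined proposal."""
        rng = random.Random(seed)
        shift = 0.0
        for _ in range(rounds):
            fails = [x for x in (rng.gauss(shift, 1.0) for _ in range(n))
                     if x > threshold]
            shift = sum(fails) / len(fails) if fails else shift + 1.0
        return tail_estimate(threshold, shift, n, rng)

    print(adaptive_tail_estimate(3.0))  # exact tail: 1 - Phi(3) ~ 1.35e-3
    ```

    The refined proposal concentrates samples in the failure region, so the weighted estimate reaches a given accuracy with far fewer samples than naive Monte Carlo would need for a rare event.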

  2. Determining the speciation of Zn in soils around the sediment ponds of chemical plants by XRD and XAFS spectroscopy and sequential extraction.

    PubMed

    Minkina, Tatiana; Nevidomskaya, Dina; Bauer, Tatiana; Shuvaeva, Victoria; Soldatov, Alexander; Mandzhieva, Saglara; Zubavichus, Yan; Trigub, Alexander

    2018-09-01

    For a correct assessment of the risk posed by polluted soil, it is crucial to establish the speciation and mobility of the contaminants. The aim of this study was to investigate the speciation and transformation of Zn in strongly technogenically transformed, contaminated Spolic Technosols that have long occupied the territory of sludge collectors, by combining conventional analytical and synchrotron techniques. Sequential fractionation of Zn compounds in the studied soils revealed increasing metal mobility. Phyllosilicates and Fe and Mn hydroxides were the main stabilizers of Zn mobility. A high degree of transformation was identified for the composition of the mineral phase in the Spolic Technosols by X-ray powder diffraction. Technogenic phases (Zn-containing authigenic minerals) were revealed in Spolic Technosols samples through analysis of their Zn K-edge EXAFS and XANES spectra. In one of the samples the Zn local environment was formed predominantly by oxygen atoms, whereas in the other mixed ZnS and ZnO bonding was found. Zn speciation in the studied technogenically transformed soils was governed, first, by the composition of the pollutants that have long contaminated the floodplain landscapes and, second, by the physicochemical properties controlling the buffering capacity of the investigated soils. X-ray spectroscopic and X-ray powder diffraction analyses combined with sequential extraction assays are an effective tool for checking the affinity of soil components for heavy metal cations. Copyright © 2018 Elsevier B.V. All rights reserved.

  3. Co-immobilization of glucoamylase and glucose oxidase for electrochemical sequential enzyme electrode for starch biosensor and biofuel cell.

    PubMed

    Lang, Qiaolin; Yin, Long; Shi, Jianguo; Li, Liang; Xia, Lin; Liu, Aihua

    2014-01-15

    A novel electrochemical sequential biosensor was constructed by co-immobilizing glucoamylase (GA) and glucose oxidase (GOD) on a multi-walled carbon nanotubes (MWNTs)-modified glassy carbon electrode (GCE) by a chemical crosslinking method, with glutaraldehyde and bovine serum albumin used as the crosslinking and blocking agents, respectively. The proposed biosensor (GA/GOD/MWNTs/GCE) is capable of determining starch without using extra sensors such as a Clark-type oxygen sensor or an H2O2 sensor. The current decreased linearly with increasing concentration of starch over the range 0.005% to 0.7% (w/w), with a limit of detection of 0.003% (w/w) starch. The as-fabricated sequential biosensor was applied to the detection of starch content in real samples, with results in good accordance with traditional Fehling's titration. Finally, a stable starch/O2 biofuel cell was assembled using the GA/GOD/MWNTs/GCE as bioanode and laccase/MWNTs/GCE as biocathode, which exhibited an open circuit voltage of ca. 0.53 V and a maximum power density of 8.15 μW cm(-2) at 0.31 V, comparable with other glucose/O2-based biofuel cells reported recently. The proposed biosensor thus offers good stability in weak acidic buffer, good operational stability and a wide linear range, determines starch in real samples, and serves as an effective bioanode for the biofuel cell. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.

    2018-06-01

    Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing a high intensity ultrasonic beam is formed in the specimen at the focal point. In sequential focusing, by contrast, only low intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.

  5. Towards simultaneous Talbot bands based optical coherence tomography and scanning laser ophthalmoscopy imaging.

    PubMed

    Marques, Manuel J; Bradu, Adrian; Podoleanu, Adrian Gh

    2014-05-01

    We report a Talbot bands-based optical coherence tomography (OCT) system capable of producing longitudinal B-scan OCT images and en-face scanning laser ophthalmoscopy (SLO) images of the human retina in-vivo. The OCT channel employs a broadband optical source and a spectrometer. A gap is created between the sample and reference beams while on their way towards the spectrometer's dispersive element to create Talbot bands. The spatial separation of the two beams facilitates collection by an SLO channel of optical power originating exclusively from the retina, deprived from any contribution from the reference beam. Three different modes of operation are presented, constrained by the minimum integration time of the camera used in the spectrometer and by the galvo-scanners' scanning rate: (i) a simultaneous acquisition mode over the two channels, useful for small size imaging, that conserves the pixel-to-pixel correspondence between them; (ii) a hybrid sequential mode, where the system switches itself between the two regimes and (iii) a sequential "on-demand" mode, where the system can be used in either OCT or SLO regimes for as long as required. The two sequential modes present varying degrees of trade-off between pixel-to-pixel correspondence and independent full control of parameters within each channel. Images of the optic nerve and fovea regions obtained in the simultaneous (i) and in the hybrid sequential mode (ii) are presented.

  6. Towards simultaneous Talbot bands based optical coherence tomography and scanning laser ophthalmoscopy imaging

    PubMed Central

    Marques, Manuel J.; Bradu, Adrian; Podoleanu, Adrian Gh.

    2014-01-01

    We report a Talbot bands-based optical coherence tomography (OCT) system capable of producing longitudinal B-scan OCT images and en-face scanning laser ophthalmoscopy (SLO) images of the human retina in-vivo. The OCT channel employs a broadband optical source and a spectrometer. A gap is created between the sample and reference beams while on their way towards the spectrometer’s dispersive element to create Talbot bands. The spatial separation of the two beams facilitates collection by an SLO channel of optical power originating exclusively from the retina, deprived from any contribution from the reference beam. Three different modes of operation are presented, constrained by the minimum integration time of the camera used in the spectrometer and by the galvo-scanners’ scanning rate: (i) a simultaneous acquisition mode over the two channels, useful for small size imaging, that conserves the pixel-to-pixel correspondence between them; (ii) a hybrid sequential mode, where the system switches itself between the two regimes and (iii) a sequential “on-demand” mode, where the system can be used in either OCT or SLO regimes for as long as required. The two sequential modes present varying degrees of trade-off between pixel-to-pixel correspondence and independent full control of parameters within each channel. Images of the optic nerve and fovea regions obtained in the simultaneous (i) and in the hybrid sequential mode (ii) are presented. PMID:24877006

  7. Immortal time bias: a frequently unrecognized threat to validity in the evaluation of postoperative radiotherapy.

    PubMed

    Park, Henry S; Gross, Cary P; Makarov, Danil V; Yu, James B

    2012-08-01

    To evaluate the influence of immortal time bias on observational cohort studies of postoperative radiotherapy (PORT) and the effectiveness of sequential landmark analysis to account for this bias. First, we reviewed previous studies of the Surveillance, Epidemiology, and End Results (SEER) database to determine how frequently this bias was considered. Second, we used SEER to select three tumor types (glioblastoma multiforme, Stage IA-IVM0 gastric adenocarcinoma, and Stage II-III rectal carcinoma) for which prospective trials demonstrated an improvement in survival associated with PORT. For each tumor type, we calculated conditional survivals and adjusted hazard ratios of PORT vs. postoperative observation cohorts while restricting the sample at sequential monthly landmarks. Sixty-two percent of previous SEER publications evaluating PORT failed to use a landmark analysis. As expected, delivery of PORT for all three tumor types was associated with improved survival, with the largest associated benefit favoring PORT when all patients were included regardless of survival. Preselecting a cohort with a longer minimum survival sequentially diminished the apparent benefit of PORT. Although the majority of previous SEER articles do not correct for it, immortal time bias leads to altered estimates of PORT effectiveness, which are very sensitive to landmark selection. We suggest the routine use of sequential landmark analysis to account for this bias. Copyright © 2012 Elsevier Inc. All rights reserved.
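
    The mechanism of the bias and the landmark fix can be illustrated with a small simulation (synthetic data and a hypothetical 2-month treatment delay, not SEER): PORT is only recorded for patients who live long enough to receive it, so a naive comparison credits PORT with that guaranteed "immortal" survival, while conditioning on being alive at a later landmark removes the artefact.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
survival = rng.exponential(12.0, n)          # months; PORT has NO true effect here
intends_port = rng.random(n) < 0.5
port = intends_port & (survival > 2.0)       # must survive to treatment to get it

def survival_diff(landmark):
    """Mean survival beyond the landmark, PORT minus observation,
    among patients still alive at the landmark."""
    alive = survival >= landmark
    resid = survival[alive] - landmark
    p = port[alive]
    return resid[p].mean() - resid[~p].mean()

naive = survival_diff(0.0)       # inflated apparent benefit from immortal time
landmarked = survival_diff(6.0)  # the apparent benefit largely disappears
```

    Sweeping the landmark over sequential months, as the paper does, shows the apparent benefit shrinking as the preselected minimum survival grows.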

  8. Immortal Time Bias: A Frequently Unrecognized Threat to Validity in the Evaluation of Postoperative Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Henry S.; Gross, Cary P.; Makarov, Danil V.

    2012-08-01

    Purpose: To evaluate the influence of immortal time bias on observational cohort studies of postoperative radiotherapy (PORT) and the effectiveness of sequential landmark analysis to account for this bias. Methods and Materials: First, we reviewed previous studies of the Surveillance, Epidemiology, and End Results (SEER) database to determine how frequently this bias was considered. Second, we used SEER to select three tumor types (glioblastoma multiforme, Stage IA-IVM0 gastric adenocarcinoma, and Stage II-III rectal carcinoma) for which prospective trials demonstrated an improvement in survival associated with PORT. For each tumor type, we calculated conditional survivals and adjusted hazard ratios of PORT vs. postoperative observation cohorts while restricting the sample at sequential monthly landmarks. Results: Sixty-two percent of previous SEER publications evaluating PORT failed to use a landmark analysis. As expected, delivery of PORT for all three tumor types was associated with improved survival, with the largest associated benefit favoring PORT when all patients were included regardless of survival. Preselecting a cohort with a longer minimum survival sequentially diminished the apparent benefit of PORT. Conclusions: Although the majority of previous SEER articles do not correct for it, immortal time bias leads to altered estimates of PORT effectiveness, which are very sensitive to landmark selection. We suggest the routine use of sequential landmark analysis to account for this bias.

  9. Automatic instrument for chemical processing to detect microorganism in biological samples by measuring light reactions

    NASA Technical Reports Server (NTRS)

    Kelbaugh, B. N.; Picciolo, G. L.; Chappelle, E. W.; Colburn, M. E. (Inventor)

    1973-01-01

    An automated apparatus is reported for sequentially assaying urine samples for the presence of bacterial adenosine triphosphate (ATP) that comprises a rotary table which carries a plurality of sample containing vials and automatically dispenses fluid reagents into the vials preparatory to injecting a light producing luciferase-luciferin mixture into the samples. The device automatically measures the light produced in each urine sample by a bioluminescence reaction of the free bacterial adenosine triphosphate with the luciferase-luciferin mixture. The light measured is proportional to the concentration of bacterial adenosine triphosphate which, in turn, is proportional to the number of bacteria present in the respective urine sample.

  10. A Sequential Chemical Extraction and Spectroscopic Assessment of the Potential Bioavailability of Mercury Released From the Inoperative New Idria Mercury Mine, San Benito Co., CA

    NASA Astrophysics Data System (ADS)

    Jew, A. D.; Luong, P. N.; Rytuba, J. J.; Brown, G. E.

    2012-12-01

    The inoperative New Idria mercury mine in San Benito Co., CA, is a potential point source of Hg to the Central Valley of California. To determine the phases and the potential bioavailability of Hg present in stream bed deposits downstream of the mine, sequential chemical extractions (SCEs) targeting Hg-bearing phases and synchrotron-based spectroscopic and imaging techniques were used on sediment samples taken from the acid mine drainage (AMD) system, Hg sorbed in the laboratory to ferrihydrite (synthetic 2-line and natural), and Hg associated with diatom-rich samples. In all field samples examined, both the wet and dry seasons, removal of > 97% of the Hg required 1M KOH or harsher chemical treatments. X-ray absorption spectroscopy (XAS) showed that HgS was the dominant inorganic Hg phase present, with no detectable Hg associated with the ferrihydrite. Uptake and subsequent SCE analysis of Hg to both synthetic and natural ferrihydrite showed that 1M MgCl2 removed ≥ 90% of the total Hg, suggesting that Hg does not sorb strongly to ferrihydrite. This finding is surprising, because in most settings ferrihydrite is considered to be a strong adsorbent of heavy metals. Due to the lack of Hg sorption to ferrihydrite in field samples, another pool for the non-HgS/HgSe fraction in sediments is needed. SEM analysis of the downstream samples showed that regardless of pH, freshwater diatoms were present. To determine if diatoms were the sink for dissolved Hg in this system, SCE analysis on commercially available and diatom-rich field samples from the New Idria site and Harley Gulch (Lake County, CA) were completed. The vast majority of Hg in diatom-rich samples was removed by 1M KOH, which corresponds to the non-HgS/HgSe fraction of the New Idria field samples. Analysis for carbon and nitrogen in the diatom-rich samples showed no detectable nitrogen, indicating little to no organic material was left in the samples. 
We therefore infer that Hg in the diatoms is contained in the silica tests. The incorporation of Hg in the silica tests of diatoms is surprising, but one of the few known Hg silicates (edgarbaileyite) was first discovered at New Idria, where it formed under ambient conditions; thus, a Hg-silicate species is plausible. The stability of Hg contained in diatoms is important because it represents a previously unknown sink for dissolved Hg in an impacted system. Freshwater diatoms present in the New Idria drainage system were found to contain significant quantities (30-60%) of Hg in the non-HgS SCE fractions, similar to the New Idria sediments, and are thought to be the major association of Hg in this system. SCE analyses of Hg(II) sorbed to synthetic 2-line and natural ferrihydrite in the laboratory showed that Hg(II) does not bind strongly to either material. The adsorption/incorporation of Hg(II) with the silica tests of diatoms is an important discovery and has major implications for passive remediation strategies for Hg in natural systems. Because the vast majority of Hg contained in sediments downstream of the New Idria site requires 1M KOH or harsher chemical treatment for removal, the Hg released from New Idria can be considered environmentally stable.

  11. The Emergence of Explicit Knowledge in a Serial Reaction Time Task: The Role of Experienced Fluency and Strength of Representation.

    PubMed

    Esser, Sarah; Haider, Hilde

    2017-01-01

    The Serial Reaction Time Task (SRTT) is an important paradigm to study the properties of unconscious learning processes. One specifically interesting and still controversially discussed topic concerns the conditions under which unconsciously acquired knowledge becomes conscious knowledge. The different assumptions about the underlying mechanisms can contrastively be separated into two accounts: single system views, in which the strengthening of associative weights throughout training gradually turns implicit knowledge into explicit knowledge, and dual system views, in which implicit knowledge itself does not become conscious. Rather, it requires a second process which detects changes in performance and is able to acquire conscious knowledge. In a series of three experiments, we manipulated the arrangement of sequential and deviant trials. In an SRTT training, participants either received mini-blocks of sequential trials followed by mini-blocks of deviant trials (22 trials each) or they received sequential and deviant trials mixed randomly. Importantly, the number of correct and deviant transitions was the same for both conditions. Experiment 1 showed that both conditions acquired a comparable amount of implicit knowledge, expressed in different test tasks. Experiment 2 further demonstrated that both conditions differed in their subjectively experienced fluency of the task, with more fluency experienced when trained with mini-blocks. Lastly, Experiment 3 revealed that the participants trained with longer mini-blocks of sequential and deviant material developed more explicit knowledge. Results are discussed regarding their compatibility with different assumptions about the emergence of explicit knowledge in an implicit learning situation, especially with respect to the role of metacognitive judgements and more specifically the Unexpected-Event Hypothesis.

  12. Sequential slip transfer of mixed-character dislocations across Σ3 coherent twin boundary in FCC metals: a concurrent atomistic-continuum study

    DOE PAGES

    Xu, Shuozhi; Xiong, Liming; Chen, Youping; ...

    2016-01-29

    Sequential slip transfer across grain boundaries (GB) has an important role in size-dependent propagation of plastic deformation in polycrystalline metals. For example, the Hall–Petch effect, which states that a smaller average grain size results in a higher yield stress, can be rationalised in terms of dislocation pile-ups against GBs. In spite of extensive studies in modelling individual phases and grains using atomistic simulations, well-accepted criteria of slip transfer across GBs are still lacking, as well as models of predicting irreversible GB structure evolution. Slip transfer is inherently multiscale since both the atomic structure of the boundary and the long-range fields of the dislocation pile-up come into play. In this work, concurrent atomistic-continuum simulations are performed to study sequential slip transfer of a series of curved dislocations from a given pile-up on Σ3 coherent twin boundary (CTB) in Cu and Al, with dominant leading screw character at the site of interaction. A Frank-Read source is employed to nucleate dislocations continuously. It is found that subject to a shear stress of 1.2 GPa, screw dislocations transfer into the twinned grain in Cu, but glide on the twin boundary plane in Al. Moreover, four dislocation/CTB interaction modes are identified in Al, which are affected by (1) applied shear stress, (2) dislocation line length, and (3) dislocation line curvature. Our results elucidate the discrepancies between atomistic simulations and experimental observations of dislocation-GB reactions and highlight the importance of directly modeling sequential dislocation slip transfer reactions using fully 3D models.

  13. Sequential slip transfer of mixed-character dislocations across Σ3 coherent twin boundary in FCC metals: a concurrent atomistic-continuum study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Shuozhi; Xiong, Liming; Chen, Youping

    Sequential slip transfer across grain boundaries (GB) has an important role in size-dependent propagation of plastic deformation in polycrystalline metals. For example, the Hall–Petch effect, which states that a smaller average grain size results in a higher yield stress, can be rationalised in terms of dislocation pile-ups against GBs. In spite of extensive studies in modelling individual phases and grains using atomistic simulations, well-accepted criteria of slip transfer across GBs are still lacking, as well as models of predicting irreversible GB structure evolution. Slip transfer is inherently multiscale since both the atomic structure of the boundary and the long-range fields of the dislocation pile-up come into play. In this work, concurrent atomistic-continuum simulations are performed to study sequential slip transfer of a series of curved dislocations from a given pile-up on Σ3 coherent twin boundary (CTB) in Cu and Al, with dominant leading screw character at the site of interaction. A Frank-Read source is employed to nucleate dislocations continuously. It is found that subject to a shear stress of 1.2 GPa, screw dislocations transfer into the twinned grain in Cu, but glide on the twin boundary plane in Al. Moreover, four dislocation/CTB interaction modes are identified in Al, which are affected by (1) applied shear stress, (2) dislocation line length, and (3) dislocation line curvature. Our results elucidate the discrepancies between atomistic simulations and experimental observations of dislocation-GB reactions and highlight the importance of directly modeling sequential dislocation slip transfer reactions using fully 3D models.

  14. The Emergence of Explicit Knowledge in a Serial Reaction Time Task: The Role of Experienced Fluency and Strength of Representation

    PubMed Central

    Esser, Sarah; Haider, Hilde

    2017-01-01

    The Serial Reaction Time Task (SRTT) is an important paradigm to study the properties of unconscious learning processes. One specifically interesting and still controversially discussed topic concerns the conditions under which unconsciously acquired knowledge becomes conscious knowledge. The different assumptions about the underlying mechanisms can contrastively be separated into two accounts: single system views, in which the strengthening of associative weights throughout training gradually turns implicit knowledge into explicit knowledge, and dual system views, in which implicit knowledge itself does not become conscious. Rather, it requires a second process which detects changes in performance and is able to acquire conscious knowledge. In a series of three experiments, we manipulated the arrangement of sequential and deviant trials. In an SRTT training, participants either received mini-blocks of sequential trials followed by mini-blocks of deviant trials (22 trials each) or they received sequential and deviant trials mixed randomly. Importantly, the number of correct and deviant transitions was the same for both conditions. Experiment 1 showed that both conditions acquired a comparable amount of implicit knowledge, expressed in different test tasks. Experiment 2 further demonstrated that both conditions differed in their subjectively experienced fluency of the task, with more fluency experienced when trained with mini-blocks. Lastly, Experiment 3 revealed that the participants trained with longer mini-blocks of sequential and deviant material developed more explicit knowledge. Results are discussed regarding their compatibility with different assumptions about the emergence of explicit knowledge in an implicit learning situation, especially with respect to the role of metacognitive judgements and more specifically the Unexpected-Event Hypothesis. PMID:28421018

  15. Physical activity in England: who is meeting the recommended level of participation through sports and exercise?

    PubMed

    Anokye, Nana Kwame; Pokhrel, Subhash; Buxton, Martin; Fox-Rushby, Julia

    2013-06-01

    Little is known about the correlates of meeting recommended levels of participation in physical activity (PA) and how this understanding informs public health policies on behaviour change. We analysed who meets the recommended level of participation in PA, separately for males and females, by applying 'process' modelling frameworks (single vs. sequential 2-step process). Using the Health Survey for England 2006 (n = 14 142; ≥ 16 years), gender-specific regression models were estimated using bivariate probit with selectivity correction and single probit models. A 'sequential, 2-step process' modelled participation and meeting the recommended level separately, whereas the 'single process' considered both participation and level together. In females, meeting the recommended level was associated with degree holders [Marginal effect (ME) = 0.013] and age (ME = -0.001), whereas in males, age was a significant correlate (ME = -0.003 to -0.004). The order of importance of correlates was similar across genders, with ethnicity being the most important correlate in both males (ME = -0.060) and females (ME = -0.133). In females, the 'sequential, 2-step process' performed better (ρ = -0.364, P < 0.001) than in males (ρ = 0.154). The degree to which people undertake the recommended level of PA through vigorous activity varies between males and females, and the process that best predicts such decisions, i.e. whether it is a sequential, 2-step process or a single-step choice, is also different for males and females. Understanding this should help to identify subgroups that are less likely to meet the recommended level of PA (and hence more likely to benefit from any PA promotion intervention).

  16. Flexible sequential designs for multi-arm clinical trials.

    PubMed

    Magirr, D; Stallard, N; Jaki, T

    2014-08-30

    Adaptive designs that are based on group-sequential approaches have the benefit of being efficient, as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs', on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.
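
    The error-spending idea underlying such group-sequential stopping boundaries can be sketched with a power-family spending function; the rho values and the four equally spaced analyses below are illustrative choices, not the designs studied in the paper.

```python
# Power-family Type I error spending: alpha_spent(t) = alpha * t**rho,
# where t is the information fraction. Convex shapes (rho > 1) spend
# little error early; concave shapes (rho < 1) front-load the budget.
ALPHA = 0.05
LOOKS = [0.25, 0.5, 0.75, 1.0]          # information fractions at each analysis

def cumulative_spend(t, rho):
    return ALPHA * t ** rho

def per_look_increments(rho):
    """Error available to spend at each interim analysis."""
    cum = [cumulative_spend(t, rho) for t in LOOKS]
    return [cum[0]] + [b - a for a, b in zip(cum, cum[1:])]

convex = per_look_increments(3.0)    # conservative early, like O'Brien-Fleming
concave = per_look_increments(0.5)   # aggressive early stopping
```

    Either way the increments sum to the overall alpha; the shape only redistributes when the budget may be spent, which drives the operating characteristics discussed above.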

  17. Dynamic sample size detection in learning command line sequence for continuous authentication.

    PubMed

    Traore, Issa; Woungang, Isaac; Nakkabi, Youssef; Obaidat, Mohammad S; Ahmed, Ahmed Awad E; Khalilian, Bijan

    2012-10-01

    Continuous authentication (CA) consists of authenticating the user repetitively throughout a session with the goal of detecting and protecting against session hijacking attacks. While the accuracy of the detector is central to the success of CA, the detection delay or length of an individual authentication period is important as well since it is a measure of the window of vulnerability of the system. However, high accuracy and small detection delay are conflicting requirements that need to be balanced for optimum detection. In this paper, we propose the use of sequential sampling technique to achieve optimum detection by trading off adequately between detection delay and accuracy in the CA process. We illustrate our approach through CA based on user command line sequence and naïve Bayes classification scheme. Experimental evaluation using the Greenberg data set yields encouraging results consisting of a false acceptance rate (FAR) of 11.78% and a false rejection rate (FRR) of 1.33%, with an average command sequence length (i.e., detection delay) of 37 commands. When using the Schonlau (SEA) data set, we obtain FAR = 4.28% and FRR = 12%.
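
    The trade-off between detection delay and accuracy that sequential sampling manages is captured by Wald's sequential probability ratio test. The sketch below uses a toy two-command alphabet with made-up probabilities, not the paper's trained naive Bayes model or the Greenberg/SEA data sets.

```python
import math

# Per-command probabilities under the legitimate user (H0) and an
# intruder (H1); purely illustrative values.
P_USER = {"ls": 0.7, "rm": 0.3}
P_INTRUDER = {"ls": 0.3, "rm": 0.7}

def sprt(commands, alpha=0.01, beta=0.01):
    """Return ('user' | 'intruder' | 'undecided', commands consumed).

    alpha/beta are the target false-acceptance and false-rejection
    rates; the number of commands consumed is the detection delay."""
    upper = math.log((1 - beta) / alpha)   # cross upward -> intruder
    lower = math.log(beta / (1 - alpha))   # cross downward -> user
    llr = 0.0
    for i, c in enumerate(commands, start=1):
        llr += math.log(P_INTRUDER[c] / P_USER[c])
        if llr >= upper:
            return "intruder", i
        if llr <= lower:
            return "user", i
    return "undecided", len(commands)

# An intruder-like burst crosses the upper boundary at the 8th command.
verdict, delay = sprt(["rm", "ls", "rm", "rm", "rm", "rm", "rm", "rm"])
```

    Tightening alpha and beta widens the boundaries and lengthens the average command sequence needed for a decision, which is exactly the delay/accuracy trade-off the paper optimizes.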

  18. Multidrug Resistance among New Tuberculosis Cases: Detecting Local Variation through Lot Quality-Assurance Sampling

    PubMed Central

    Lynn Hedt, Bethany; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Viet Nhung, Nguyen; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-01-01

    Background Current methodology for multidrug-resistant TB (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. Methods We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems—two-way static, three-way static, and three-way truncated sequential sampling—at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. Results The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Conclusions Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired. PMID:22249242
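
    A two-way static LQAS rule of the kind simulated here reduces to a binomial threshold test. The sketch below evaluates one hypothetical rule at the paper's first threshold pair (low MDR TB = 2%, high MDR TB = 10%); the lot size n and decision value d are illustrative, not the surveys' actual designs.

```python
from math import comb

def binom_tail(n, d, p):
    """P(X > d) for X ~ Binomial(n, p): the probability that the rule
    classifies a lot as 'high MDR' at true prevalence p."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(d + 1, n + 1))

# Hypothetical rule: call an area "high MDR" if more than 2 of 50
# sampled new TB cases are multidrug resistant.
n, d = 50, 2
misclassify_low = binom_tail(n, d, 0.02)   # low-prevalence area called high
detect_high = binom_tail(n, d, 0.10)       # high-prevalence area called high
```

    With these illustrative numbers the rule flags a truly high-burden (10%) area most of the time while rarely flagging a low-burden (2%) one; in practice n and d are chosen so both error probabilities meet pre-specified targets, and three-way or truncated sequential variants refine the same idea.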

  19. Seeking Positive Experiences Can Produce Illusory Correlations

    ERIC Educational Resources Information Center

    Denrell, Jerker; Le Mens, Gael

    2011-01-01

    Individuals tend to select again alternatives about which they have positive impressions and to avoid alternatives about which they have negative impressions. Here we show how this sequential sampling feature of the information acquisition process leads to the emergence of an illusory correlation between estimates of the attributes of…
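
    The mechanism can be reproduced in a short simulation (my own toy parameterization, not the authors' model): two attributes of each alternative are independent coins, but sampling stops after a negative overall impression, so jointly poor impressions are never corrected and the final estimates become positively correlated.

```python
import random

random.seed(7)

def final_estimates(n_alternatives=2000, max_samples=20):
    """Running-average impressions of two independent attributes,
    with sampling abandoned once the overall impression turns negative."""
    ests = []
    for _ in range(n_alternatives):
        e1 = e2 = None
        n = 0
        while n < max_samples:
            x1, x2 = random.random() < 0.5, random.random() < 0.5
            n += 1
            e1 = x1 if e1 is None else e1 + (x1 - e1) / n
            e2 = x2 if e2 is None else e2 + (x2 - e2) / n
            if e1 + e2 < 1.0:    # negative overall impression...
                break            # ...so the alternative is avoided
        ests.append((e1, e2))
    return ests

def corr(pairs):
    """Pearson correlation of a list of (a, b) pairs."""
    n = len(pairs)
    m1 = sum(a for a, _ in pairs) / n
    m2 = sum(b for _, b in pairs) / n
    cov = sum((a - m1) * (b - m2) for a, b in pairs) / n
    v1 = sum((a - m1) ** 2 for a, _ in pairs) / n
    v2 = sum((b - m2) ** 2 for _, b in pairs) / n
    return cov / (v1 * v2) ** 0.5

r = corr(final_estimates())   # positive despite independent attributes
```

    Abandoned alternatives keep jointly low estimates while retained ones drift toward jointly high estimates, producing the illusory correlation.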

  20. The Relationship between the Emotional Intelligence of Secondary Public School Principals and School Performance

    ERIC Educational Resources Information Center

    Ashworth, Stephanie R.

    2013-01-01

    The study examined the relationship between secondary public school principals' emotional intelligence and school performance. The correlational study employed an explanatory sequential mixed methods model. The non-probability sample consisted of 105 secondary public school principals in Texas. The emotional intelligence characteristics of the…

  1. The Condition of Education 2010 in Brief. NCES 2010-029

    ERIC Educational Resources Information Center

    Aud, Susan, Ed.; Hannes, Gretchen, Ed.

    2010-01-01

    This publication contains a sample of the indicators in "The Condition of Education 2010." The indicators in this publication are numbered sequentially, rather than according to their numbers in the complete edition. Since 1870, the federal government has gathered data about students, teachers, schools, and education funding. As mandated…

  2. Parenting and Trajectories of Children's Maladaptive Behaviors: A 12-Year Prospective Community Study

    ERIC Educational Resources Information Center

    Luyckx, Koen; Tildesley, Elizabeth A.; Soenens, Bart; Andrews, Judy A.; Hampson, Sarah E.; Peterson, Missy; Duriez, Bart

    2011-01-01

    This study investigated how parenting accounted for interindividual differences in developmental trajectories of different child behaviors across childhood and adolescence. In a cohort sequential community sample of 1,049 children, latent class growth analysis was applied to three parent-reported dimensions (monitoring, positive parenting,…

  3. 9 CFR 113.309 - Bovine Parainfluenza3 Vaccine.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ...-challenge for serum antibody studies. (6) Satisfactory Test Criteria: (i) All virus isolations attempts... develop antibody titers of 1:32 or greater by day 6 ±2 days post-challenge. (8) A sequential test... parainfluenza, susceptible calves shall be used as test animals (20 vaccinates and five controls). Blood samples...

  4. 9 CFR 113.309 - Bovine Parainfluenza3 Vaccine.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...-challenge for serum antibody studies. (6) Satisfactory Test Criteria: (i) All virus isolations attempts... develop antibody titers of 1:32 or greater by day 6 ±2 days post-challenge. (8) A sequential test... parainfluenza, susceptible calves shall be used as test animals (20 vaccinates and five controls). Blood samples...

  5. 9 CFR 113.309 - Bovine Parainfluenza3 Vaccine.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...-challenge for serum antibody studies. (6) Satisfactory Test Criteria: (i) All virus isolations attempts... develop antibody titers of 1:32 or greater by day 6 ±2 days post-challenge. (8) A sequential test... parainfluenza, susceptible calves shall be used as test animals (20 vaccinates and five controls). Blood samples...

  6. Teacher Perceptions of Principals' Leadership Qualities: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Hauserman, Cal P.; Ivankova, Nataliya V.; Stick, Sheldon L.

    2013-01-01

    This mixed methods sequential explanatory study utilized the Multi-factor Leadership Questionnaire, responses to open-ended questions, and in-depth interviews to identify transformational leadership qualities that were present among principals in Alberta, Canada. The first quantitative phase consisted of a random sample of 135 schools (with…

  7. Understanding Single-Stranded Telomere End Binding by an Essential Protein

    DTIC Science & Technology

    2000-08-01

    BioPharma Inc., 1885 33rd Street, Boulder, CO 80301 Traditional sequential medicinal chemistry methods have been augmented by combinatorial synthesis...on the same wells that were being analyzed in parallel by RP-HPLC/UV for purity. The sampling protocol for purity determination at Array BioPharma is

  8. K-ABC Mental Processing Profiles for Gifted Referrals.

    ERIC Educational Resources Information Center

    Harrison, Patti L.; And Others

    This study sought to extend previous research by investigating the performance of intellectually gifted children on the Mental Processing Composite of the Kaufman Assessment Battery for Children (K-ABC). A sample of 54 children (aged 6-12) referred for possible gifted placement were administered the Sequential and Simultaneous scales. Average scores…

  9. 49 CFR 563.8 - Data format.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the first acceleration data point; (3) The number of the last point (NLP), which is an integer that...; and (4) NLP—NFP + 1 acceleration values sequentially beginning with the acceleration at time NFP * TS and continue sampling the acceleration at TS increments in time until the time NLP * TS is reached...

  10. Improved coverage of cDNA-AFLP by sequential digestion of immobilized cDNA.

    PubMed

    Weiberg, Arne; Pöhler, Dirk; Morgenstern, Burkhard; Karlovsky, Petr

    2008-10-13

    cDNA-AFLP is a transcriptomics technique which does not require prior sequence information and can therefore be used as a gene discovery tool. The method is based on selective amplification of cDNA fragments generated by restriction endonucleases, electrophoretic separation of the products and comparison of the band patterns between treated samples and controls. Unequal distribution of restriction sites used to generate cDNA fragments negatively affects the performance of cDNA-AFLP. Some transcripts are represented by more than one fragment while others escape detection, causing redundancy and reducing the coverage of the analysis, respectively. With the goal of improving the coverage of cDNA-AFLP without increasing its redundancy, we designed a modified cDNA-AFLP protocol. Immobilized cDNA is sequentially digested with several restriction endonucleases and the released DNA fragments are collected in mutually exclusive pools. To investigate the performance of the protocol, the software tool MECS (Multiple Enzyme cDNA-AFLP Simulation) was written in Perl. cDNA-AFLP protocols described in the literature and the new sequential digestion protocol were simulated on sets of cDNA sequences from mouse, human and Arabidopsis thaliana. The redundancy and coverage, the total number of PCR reactions, and the average fragment length were calculated for each protocol and cDNA set. Simulation revealed that sequential digestion of immobilized cDNA followed by the partitioning of released fragments into mutually exclusive pools outperformed other cDNA-AFLP protocols in terms of coverage, redundancy, fragment length, and the total number of PCRs. Primers generating 30 to 70 amplicons per PCR provided the highest fraction of electrophoretically distinguishable fragments suitable for normalization. For the A. thaliana, human and mouse transcriptomes, the use of two marking enzymes and three sequentially applied releasing enzymes for each of the marking enzymes is recommended.

  11. The biogeochemical iron cycle and astrobiology

    NASA Astrophysics Data System (ADS)

    Schröder, Christian; Köhler, Inga; Muller, Francois L. L.; Chumakov, Aleksandr I.; Kupenko, Ilya; Rüffer, Rudolf; Kappler, Andreas

    2016-12-01

    Biogeochemistry investigates chemical cycles which influence or are influenced by biological activity. Astrobiology studies the origin, evolution and distribution of life in the universe. The biogeochemical Fe cycle has controlled major nutrient cycles such as the C cycle throughout geological time. Iron sulfide minerals may have provided energy and surfaces for the first pioneer organisms on Earth. Banded iron formations document the evolution of oxygenic photosynthesis. To assess the potential habitability of planets other than Earth one looks for water, an energy source and a C source. On Mars, for example, Fe minerals have provided evidence for the past presence of liquid water on its surface and would provide a viable energy source. Here we present Mössbauer spectroscopy investigations of Fe and C cycle interactions in both ancient and modern environments. Experiments to simulate the diagenesis of banded iron formations indicate that the formation of ferrous minerals depends on the amount of biomass buried with ferric precursors rather than on the atmospheric composition at the time of deposition. Mössbauer spectra further reveal the mutual stabilisation of Fe-organic matter complexes against mineral transformation and decay of organic matter into CO2. This corresponds to observations of a `rusty carbon sink' in modern sediments. The stabilisation of Fe-organic matter complexes may also aid transport of particulate Fe in the water column while having an adverse effect on the bioavailability of Fe. In the modern oxic ocean, Fe is insoluble and particulate Fe represents an important source. Collecting that particulate Fe yields small sample sizes that would pose a challenge for conventional Mössbauer experiments. We demonstrate that the unique properties of the beam used in synchrotron-based Mössbauer applications can be utilized for studying such samples effectively. 
Reactive Fe species often occur in amorphous or nanoparticulate form in the environment and are therefore difficult to study with standard mineralogical tools. Sequential extraction techniques are commonly used as proxies. We provide an example where Mössbauer spectroscopy can replace sequential extraction techniques where mineralogical information is sought. Where mineral separation is needed, for example in the investigation of Fe or S isotope fractionation, Mössbauer spectroscopy can help to optimize sequential extraction procedures. This can be employed in a large number of investigations of soils and sediments, potentially even for mineral separation to study Fe and S isotope fractionation in samples returned from Mars, which might reveal signatures of biological activity. When looking for the possibility of life outside Earth, Jupiter's icy moon Europa is one of the most exciting places. It may be just in reach for a Mössbauer spectrometer deployed by a future lander to study the red streak mineral deposits on its surface to look for clues about the composition of the ocean hidden under the moon's icy surface.

  12. Flash Desorption/Mass Spectrometry for the Analysis of Less- and Nonvolatile Samples Using a Linearly Driven Heated Metal Filament

    NASA Astrophysics Data System (ADS)

    Usmanov, Dilshadbek T.; Ninomiya, Satoshi; Hiraoka, Kenzo

    2013-11-01

    In this paper, the important issue of the desorption of less- and nonvolatile compounds with minimal sample decomposition in ambient mass spectrometry is approached using ambient flash desorption mass spectrometry. The preheated stainless steel filament was driven down and up along the vertical axis in 0.3 s. At the lowest position, it touched the surface of the sample with an invasion depth of 0.1 mm in 50 ms (flash heating) and was removed from the surface (fast cooling). The heating rate corresponds to ~10^4 °C/s at the filament temperature of 500 °C. The desorbed gaseous molecules were ionized by using a dielectric barrier discharge ion source, and the produced ions were detected by a time-of-flight (TOF) mass spectrometer. Less-volatile samples, such as pharmaceutical tablets, narcotics, explosives, and C60, gave molecular and protonated molecule ions as major ions with minimal thermal decomposition. For synthetic polymers (PMMA, PLA, and PS), the mass spectra reflected their backbone structures because of the suppression of the sequential thermal decompositions of the primary products. The present technique appears to be suitable for high-throughput qualitative analyses of many types of solid samples in the range from a few ng to 10 μg with minimal sample consumption. Some contribution from tribodesorption in addition to thermal desorption was suggested for the desorption processes.

  13. How Do Substitute Teachers Substitute? An Empirical Study of Substitute-Teacher Labor Supply

    ERIC Educational Resources Information Center

    Gershenson, Seth

    2012-01-01

    This paper examines the daily labor supply of a potentially important, but often overlooked, source of instruction in U.S. public schools: substitute teachers. I estimate a sequential binary-choice model of substitute teachers' job-offer acceptance decisions using data on job offers made by a randomized automated calling system. Importantly, this…

  14. Combined Parameter and State Estimation Problem in a Complex Domain: RF Hyperthermia Treatment Using Nanoparticles

    NASA Astrophysics Data System (ADS)

    Bermeo Varon, L. A.; Orlande, H. R. B.; Eliçabe, G. E.

    2016-09-01

    Particle filter methods have been widely used to solve inverse problems via sequential Bayesian inference in dynamic models, simultaneously estimating sequential state variables and fixed model parameters. These methods approximate sequences of probability distributions of interest using a large set of random samples, in the presence of uncertainties in the model, measurements and parameters. In this paper the main focus is the solution of the combined parameter and state estimation problem in radiofrequency hyperthermia with nanoparticles in a complex domain. This domain contains different tissues, such as muscle, pancreas, lungs and small intestine, as well as a tumor loaded with iron oxide nanoparticles. The results indicate that excellent agreement between estimated and exact values is obtained.
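
    The sampling importance resampling (SIR) particle filter underlying records like this one can be sketched compactly. The following is a minimal illustration, not the paper's implementation: a scalar linear-Gaussian state-space model whose state is augmented with an unknown fixed parameter, with all values (the true parameter, noise variances, particle count, jitter scale) chosen arbitrarily for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar linear-Gaussian model (not from the paper):
#   x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
#   y_t = x_t + v_t,          v_t ~ N(0, r)
a_true, q, r, T = 0.9, 0.5, 0.5, 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), T)

# SIR particle filter, state augmented with the unknown fixed parameter a.
N = 2000
px = rng.normal(0.0, 1.0, N)      # state particles
pa = rng.uniform(0.5, 1.0, N)     # parameter particles (prior guess)
for t in range(1, T):
    # Predictor: propagate particles through the transition density.
    px = pa * px + rng.normal(0.0, np.sqrt(q), N)
    # Corrector: weight by the likelihood of the new observation.
    w = np.exp(-0.5 * (y[t] - px) ** 2 / r)
    w /= w.sum()
    # Resample to combat weight degeneracy; jitter the parameter
    # particles slightly to avoid sample impoverishment.
    idx = rng.choice(N, size=N, p=w)
    px, pa = px[idx], pa[idx] + rng.normal(0.0, 0.005, N)

a_hat = float(pa.mean())
print(round(a_hat, 2))
```

    With enough observations the parameter particles concentrate near the value that best explains the data; the jitter step is one common (though not unique) remedy for the fixed-parameter degeneracy the combined estimation problem introduces.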

  15. Sequentially Executed Model Evaluation Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-10-20

    Provides a message passing framework between generic input, model and output drivers, and specifies an API for developing such drivers. Also provides batch and real-time controllers which step the model and I/O through the time domain (or other discrete domain), and sample I/O drivers. This is a library framework, and does not, itself, solve any problems or execute any modeling. The SeMe framework aids in development of models which operate on sequential information, such as time-series, where evaluation is based on prior results combined with new data for this iteration. Has applications in quality monitoring, and was developed as part of the CANARY-EDS software, where real-time water quality data is being analyzed for anomalies.

  16. Online sequential Monte Carlo smoother for partially observed diffusion processes

    NASA Astrophysics Data System (ADS)

    Gloaguen, Pierre; Étienne, Marie-Pierre; Le Corff, Sylvain

    2018-12-01

    This paper introduces a new algorithm to approximate smoothed additive functionals of partially observed diffusion processes. This method relies on a new sequential Monte Carlo method which allows such approximations to be computed online, i.e., as the observations are received, with a computational complexity growing linearly with the number of Monte Carlo samples. The original algorithm cannot be used in the case of partially observed stochastic differential equations since the transition density of the latent data is usually unknown. We prove that it may be extended to partially observed continuous processes by replacing this unknown quantity by an unbiased estimator obtained for instance using general Poisson estimators. This estimator is proved to be consistent and its performance is illustrated using data from two models.

  17. Adaptive web sampling.

    PubMed

    Thompson, Steven K

    2006-12-01

    A flexible class of adaptive sampling designs is introduced for sampling in network and spatial settings. In the designs, selections are made sequentially with a mixture distribution based on an active set that changes as the sampling progresses, using network or spatial relationships as well as sample values. The new designs have certain advantages compared with previously existing adaptive and link-tracing designs, including control over sample sizes and of the proportion of effort allocated to adaptive selections. Efficient inference involves averaging over sample paths consistent with the minimal sufficient statistic. A Markov chain resampling method makes the inference computationally feasible. The designs are evaluated in network and spatial settings using two empirical populations: a hidden human population at high risk for HIV/AIDS and an unevenly distributed bird population.
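
    The sequential, mixture-based selection described above can be illustrated with a toy sketch. This is a hypothetical simplification of adaptive web sampling, not Thompson's exact design: at each step a unit is drawn, with probability d, from the links out of the current sample (the active set), and otherwise uniformly from the unsampled units; the network and parameters are invented for illustration.

```python
import random

random.seed(3)

def adaptive_web_sample(neighbors, n_units, sample_size, d=0.8):
    """Toy adaptive web sampling: selections are made sequentially
    with a mixture distribution. With probability d, follow a random
    link out of the active set (the units sampled so far); otherwise
    draw uniformly from the units not yet in the sample."""
    sample, in_sample = [], set()
    while len(sample) < sample_size:
        links = [j for i in sample for j in neighbors.get(i, ())
                 if j not in in_sample]
        if links and random.random() < d:
            unit = random.choice(links)        # adaptive, link-traced step
        else:
            pool = [u for u in range(n_units) if u not in in_sample]
            unit = random.choice(pool)         # conventional random step
        sample.append(unit)
        in_sample.add(unit)
    return sample

# Hypothetical network: units 0-3 form a linked cluster, 4-19 are isolated.
net = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
s = adaptive_web_sample(net, n_units=20, sample_size=6)
print(s)
```

    The mixture weight d controls the proportion of effort allocated to adaptive selections, which is one of the design levers the abstract highlights.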

  18. Sequential determination of multi-nutrient elements in natural water samples with a reverse flow injection system.

    PubMed

    Lin, Kunning; Ma, Jian; Yuan, Dongxing; Feng, Sichao; Su, Haitao; Huang, Yongming; Shangguan, Qipei

    2017-05-15

    An integrated system was developed for automatic and sequential determination of NO2-, NO3-, PO43-, Fe2+, Fe3+ and Mn2+ in natural waters based on reverse flow injection analysis combined with spectrophotometric detection. The system operation was controlled by a single-chip microcomputer and laboratory-programmed software written in LabVIEW. The experimental parameters for each nutrient element analysis were optimized based on a univariate experimental design, and interferences from common ions were evaluated. The upper limits of the linear range (with detection limits in parentheses, µmol L-1) were 20 (0.03), 200 (0.7), 12 (0.3), 5 (0.03), 5 (0.03) and 9 (0.2) µmol L-1 for NO2-, NO3-, PO43-, Fe2+, Fe3+ and Mn2+, respectively. The relative standard deviations were below 5% (n=9-13) and the recoveries varied from 88.0±1.0% to 104.5±1.0% for spiked water samples. The sample throughput was about 20 h-1. This system has been successfully applied to the determination of multi-nutrient elements in different kinds of water samples and showed good agreement with reference methods (slope 1.0260±0.0043, R2=0.9991, n=50). Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Rheo-optical near-infrared (NIR) spectroscopy study of partially miscible polymer blend of polymethyl methacrylate (PMMA) and polyethylene glycol (PEG)

    NASA Astrophysics Data System (ADS)

    Shinzawa, Hideyuki; Mizukado, Junji

    2018-03-01

    Tensile deformation of a partially miscible blend of polymethyl methacrylate (PMMA) and polyethylene glycol (PEG) is studied by a rheo-optical near-infrared (NIR) characterization technique to probe deformation behavior during tensile deformation. Sets of NIR spectra of the polymer samples were collected by using an acousto-optic tunable filter (AOTF) NIR spectrometer coupled with a tensile testing machine as an excitation device. While deformations of the samples were readily captured as strain-dependent NIR spectra, the entire feature of the spectra was overwhelmed by the baseline fluctuation induced by the decrease in the sample thickness and subsequent change in the light scattering. Several pretreatment techniques, including multiplicative scatter correction (MSC) and null-space projection, were applied to the NIR spectra prior to the determination of the sequential order of the spectral intensity changes by two-dimensional (2D) correlation analysis. The comparison of the MSC and null-space projection provided an interesting insight into the system, especially deformation-induced variation of light scattering observed during the tensile testing of the polymer sample. In addition, the sequential order determined with the 2D correlation spectra revealed that orientation of a specific part of the PMMA chain occurs before that of the others because of the interaction between the C=O group of PMMA and the terminal -OH group of PEG.

  20. Short-term memory for spatial, sequential and duration information.

    PubMed

    Manohar, Sanjay G; Pertzov, Yoni; Husain, Masud

    2017-10-01

    Space and time appear to play key roles in the way that information is organized in short-term memory (STM). Some argue that they are crucial contexts within which other stored features are embedded, allowing binding of information that belongs together within STM. Here we review recent behavioral, neurophysiological and imaging studies that have sought to investigate the nature of spatial, sequential and duration representations in STM, and how these might break down in disease. Findings from these studies point to an important role of the hippocampus and other medial temporal lobe structures in aspects of STM, challenging conventional accounts of involvement of these regions in only long-term memory.

  1. A sequential method for spline approximation with variable knots. [recursive piecewise polynomial signal processing

    NASA Technical Reports Server (NTRS)

    Mier Muth, A. M.; Willsky, A. S.

    1978-01-01

    In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.

  2. Cell wall invertase as a regulator in determining sequential development of endosperm and embryo through glucose signaling early in seed development.

    PubMed

    Wang, Lu; Liao, Shengjin; Ruan, Yong-Ling

    2013-01-01

    Seed development depends on coordination among embryo, endosperm and seed coat. Endosperm undergoes nuclear division soon after fertilization, whereas embryo remains quiescent for a while. Such a developmental sequence is of great importance for proper seed development. However, the underlying mechanism remains unclear. Recent results on the cellular domain- and stage-specific expression of invertase genes in cotton and Arabidopsis revealed that cell wall invertase may positively and specifically regulate nuclear division of endosperm after fertilization, thereby playing a role in determining the sequential development of endosperm and embryo, probably through glucose signaling.

  3. Comparative study of lesions created by high-intensity focused ultrasound using sequential discrete and continuous scanning strategies.

    PubMed

    Fan, Tingbo; Liu, Zhenbo; Zhang, Dong; Tang, Mengxing

    2013-03-01

    Lesion formation and temperature distribution induced by high-intensity focused ultrasound (HIFU) were investigated both numerically and experimentally via two energy-delivering strategies, i.e., sequential discrete and continuous scanning modes. Simulations were presented based on the combination of Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation and bioheat equation. Measurements were performed on tissue-mimicking phantoms sonicated by a 1.12-MHz single-element focused transducer working at an acoustic power of 75 W. Both the simulated and experimental results show that, in the sequential discrete mode, obvious saw-tooth-like contours could be observed for the peak temperature distribution and the lesion boundaries, with the increasing interval space between two adjacent exposure points. In the continuous scanning mode, more uniform peak temperature distributions and lesion boundaries would be produced, and the peak temperature values would decrease significantly with the increasing scanning speed. In addition, compared to the sequential discrete mode, the continuous scanning mode could achieve higher treatment efficiency (lesion area generated per second) with a lower peak temperature. The present studies suggest that the peak temperature and tissue lesion resulting from the HIFU exposure could be controlled by adjusting the transducer scanning speed, which is important for improving the HIFU treatment efficiency.

  4. Aging effects in sequential modulations of poorer-strategy effects during execution of memory strategies.

    PubMed

    Hinault, Thomas; Lemaire, Patrick; Touron, Dayna

    2017-02-01

    In this study, we asked young adults and older adults to encode pairs of words. For each item, they were told which strategy to use, interactive imagery or rote repetition. Data revealed poorer-strategy effects in both young adults and older adults: Participants obtained better performance when executing better strategies (i.e., interactive-imagery strategy to encode pairs of concrete words; rote-repetition strategy on pairs of abstract words) than with poorer strategies (i.e., interactive-imagery strategy on pairs of abstract words; rote-repetition strategy on pairs of concrete words). Crucially, we showed that sequential modulations of poorer-strategy effects (i.e., poorer-strategy effects being larger when previous items were encoded with better relative to poorer strategies), previously demonstrated in arithmetic, generalise to memory strategies. We also found reduced sequential modulations of poorer-strategy effects in older adults relative to young adults. Finally, sequential modulations of poorer-strategy effects correlated with measures of cognitive control processes, suggesting that these processes underlie efficient trial-to-trial modulations during strategy execution. Differences in correlations with cognitive control processes were also found between older adults and young adults. These findings have important implications regarding mechanisms underlying memory strategy execution and age differences in memory performance.

  5. Time scale of random sequential adsorption.

    PubMed

    Erban, Radek; Chapman, S Jonathan

    2007-04-01

    A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. The process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
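
    The RSA rule in (ii) — one random attempt per simulation time step, accepted only when the geometrical constraint allows — can be sketched on a one-dimensional lattice. This toy version uses nearest-neighbour exclusion in place of the paper's polymer-on-virus geometry; the lattice size and attempt count are arbitrary.

```python
import random

random.seed(1)

def rsa_1d(n_sites, attempts):
    """Random sequential adsorption on a 1-D lattice with
    nearest-neighbour exclusion: one deposition attempt per
    simulation time step, accepted only when the chosen site
    and both of its neighbours are empty."""
    occupied = [False] * n_sites
    count = 0
    coverage = []
    for _ in range(attempts):
        i = random.randrange(n_sites)
        left = occupied[i - 1] if i > 0 else False
        right = occupied[i + 1] if i < n_sites - 1 else False
        if not (occupied[i] or left or right):
            occupied[i] = True
            count += 1
        coverage.append(count / n_sites)
    return coverage

cov = rsa_1d(1000, 20000)
# Coverage saturates near the jamming limit, (1 - e^-2)/2 ~ 0.43
# for this particular exclusion rule.
print(round(cov[-1], 3))
```

    As in the paper, relating each attempt to a physical time increment (via the diffusive flux to the surface) is what turns the attempt counter into real time; here the time axis is simply the attempt index.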

  6. Crystal Growth and Dissolution of Methylammonium Lead Iodide Perovskite in Sequential Deposition: Correlation between Morphology Evolution and Photovoltaic Performance.

    PubMed

    Hsieh, Tsung-Yu; Huang, Chi-Kai; Su, Tzu-Sen; Hong, Cheng-You; Wei, Tzu-Chien

    2017-03-15

    Crystal morphology and structure are important for improving the organic-inorganic lead halide perovskite semiconductor property in optoelectronic, electronic, and photovoltaic devices. In particular, crystal growth and dissolution are two major phenomena in determining the morphology of methylammonium lead iodide perovskite in the sequential deposition method for fabricating a perovskite solar cell. In this report, the effect of immersion time in the second step, i.e., methylammonium iodide immersion, on the morphological, structural, optical, and photovoltaic evolution is extensively investigated. Supported by experimental evidence, a five-staged, time-dependent evolution of the morphology of methylammonium lead iodide perovskite crystals is established and is well connected to the photovoltaic performance. This result is beneficial for engineering the optimal time for methylammonium iodide immersion and converging the solar cell performance in the sequential deposition route. Meanwhile, our result suggests that large, well-faceted methylammonium lead iodide perovskite single crystals may be incubated by a solution process. This offers a low-cost route for synthesizing perovskite single crystals.

  7. Identifying protein complexes in PPI network using non-cooperative sequential game.

    PubMed

    Maulik, Ujjwal; Basu, Srinka; Ray, Sumanta

    2017-08-21

    Identifying protein complexes from protein-protein interaction (PPI) networks is an important and challenging task in computational biology as it helps in better understanding of cellular mechanisms in various organisms. In this paper we propose a non-cooperative sequential game-based model for protein complex detection from PPI networks. The key hypothesis is that protein complex formation is driven by a mechanism that eventually optimizes the number of interactions within the complex, leading to a dense subgraph. The hypothesis is drawn from the observed network property named small world. The proposed multi-player game model translates the hypothesis into the game strategies. The Nash equilibrium of the game corresponds to a network partition where each protein either belongs to a complex or forms a singleton cluster. We further propose an algorithm to find the Nash equilibrium of the sequential game. Exhaustive experiments on synthetic benchmarks and real-life yeast networks evaluate the structural as well as biological significance of the network partitions.

  8. Metal distribution and mobility in lateritic soils affected by Cu-Co smelting in the Copperbelt district, Zambia

    NASA Astrophysics Data System (ADS)

    Ettler, Vojtech; Mihaljevic, Martin; Majer, Vladimir; Kribek, Bohdan; Sebek, Ondrej

    2010-05-01

    The copper smelting activities in the Copperbelt mining district (Zambia) left extensive pollution related to the disposal sites of smelting waste (slags) and to the continuous deposition of smelter stack particulates in the soil systems. We sampled 196 surface and subsurface soils in the vicinity of the Nkana copper smelter at Kitwe and a 110 cm deep lateritic soil profile in order to assess the regional distribution of metallic contaminants and their vertical mobility. The contaminant contents of the soil samples were measured by ICP techniques and the lead isotopic compositions (206Pb/207Pb and 208Pb/206Pb ratios) were determined by ICP-MS. The spatial distribution of the major contaminants (Cu, Co, Pb, Zn) indicated the highest contamination NW of the smelter stack, corresponding to the direction of prevailing winds in the area. The highest metal concentrations in soils were: 27410 ppm Cu, 606 ppm Co, 480 ppm Pb, 450 ppm Zn. Lead isotopes helped to discriminate the extent of metallic pollution related to the smelter emissions, which have a similar 206Pb/207Pb ratio of 1.17-1.20, in contrast to the regional background value of 1.32. The investigation of the lateritic soil profile sampled in the near vicinity of the Nkana smelter indicated that contamination is mostly located in the uppermost soil horizons enriched in organic matter (< 10 cm). The sequential extraction procedure indicated that up to 33% of Cu and <10% of Co, Pb and Zn was mobile in the profile, being bound in the exchangeable fraction. However, in the deeper parts of the soil profile, metals were mostly bound in the reducible fraction, presumably to hydrous ferric oxides.
The combination of sequential extraction and lead isotopic determination indicated that the "mobile" fractions of Pb in the soil profile corresponded to the signatures of smelter particulate emissions (206Pb/207Pb = 1.17-1.20), which means that the anthropogenic emissions are the important source of mobile (and potentially bioavailable) metals.

  9. A novel procedure for Rubidium separation and its isotope measurements on geological samples by MC-ICP-MS

    NASA Astrophysics Data System (ADS)

    Ma, J.; Zhang, Z.; Wei, G.; Zhang, L.

    2017-12-01

    A method including a novel column Rb separation procedure and high-precision Rb isotope measurement of geological materials by multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) in standard-sample-bracketing (SSB) mode has been developed. Sr-Spec resin, in which the distribution coefficients for Rb, K, Ba and Sr in nitric acid differ, was employed to sequentially separate them from the matrix. The dissolved samples were loaded on the column in 3 M HNO3; the main matrix elements such as Al, Ca, Fe, Mg, Mn and Na were removed by rinsing with 4.5 mL HNO3, and Rb and K were then sequentially eluted by 3 M HNO3 in different volumes. After that, Ba was eluted by 8 M HNO3, and Sr was finally eluted by Milli-Q water. This enables us to collect pure Rb, K, Ba and Sr one by one, with recoveries close to 100%, for isotopic composition measurement on MC-ICP-MS. Here we focus on Rb isotope measurement. The measurement using MC-ICP-MS yielded an internal precision for δ87Rb of < ± 0.03‰ (2SE), and the external precision was generally better than ± 0.06‰ (2SD) based on long-term results for the Rb standard solution NIST SRM 984. A series of geological rock standards were analyzed using this method, and the results indicate significant Rb isotope differences among different geologic materials. This will provide a powerful tool to investigate Rb isotope fractionation during geological processes. Based on this method, Rb isotope compositions from a basaltic weathering profile were measured. The data show that the lighter Rb isotope (85Rb) is preferentially leached from the weathering profile during the incipient weathering stage, leaving the heavier Rb isotope (87Rb) in the weathered residues. From the moderate to advanced weathering stages, significant variations in Rb isotopes were observed, and multiple factors, such as leaching, adsorption, desorption, and precipitation, likely play important roles in fractionating Rb isotopes.

  10. Sequential Infection in Ferrets with Antigenically Distinct Seasonal H1N1 Influenza Viruses Boosts Hemagglutinin Stalk-Specific Antibodies

    PubMed Central

    Kirchenbaum, Greg A.; Carter, Donald M.

    2015-01-01

    ABSTRACT Broadly reactive antibodies targeting the conserved hemagglutinin (HA) stalk region are elicited following sequential infection or vaccination with influenza viruses belonging to divergent subtypes and/or expressing antigenically distinct HA globular head domains. Here, we demonstrate, through the use of novel chimeric HA proteins and competitive binding assays, that sequential infection of ferrets with antigenically distinct seasonal H1N1 (sH1N1) influenza virus isolates induced an HA stalk-specific antibody response. Additionally, stalk-specific antibody titers were boosted following sequential infection with antigenically distinct sH1N1 isolates in spite of preexisting, cross-reactive, HA-specific antibody titers. Despite a decline in stalk-specific serum antibody titers, sequential sH1N1 influenza virus-infected ferrets were protected from challenge with a novel H1N1 influenza virus (A/California/07/2009), and these ferrets poorly transmitted the virus to naive contacts. Collectively, these findings indicate that HA stalk-specific antibodies are commonly elicited in ferrets following sequential infection with antigenically distinct sH1N1 influenza virus isolates lacking HA receptor-binding site cross-reactivity and can protect ferrets against a pathogenic novel H1N1 virus. IMPORTANCE The influenza virus hemagglutinin (HA) is a major target of the humoral immune response following infection and/or seasonal vaccination. While antibodies targeting the receptor-binding pocket of HA possess strong neutralization capacities, these antibodies are largely strain specific and do not confer protection against antigenic drift variant or novel HA subtype-expressing viruses. In contrast, antibodies targeting the conserved stalk region of HA exhibit broader reactivity among viruses within and among influenza virus subtypes. 
Here, we show that sequential infection of ferrets with antigenically distinct seasonal H1N1 influenza viruses boosts the antibody responses directed at the HA stalk region. Moreover, ferrets possessing HA stalk-specific antibody were protected against novel H1N1 virus infection and did not transmit the virus to naive contacts. PMID:26559834

  11. Does the process map influence the outcome of quality improvement work? A comparison of a sequential flow diagram and a hierarchical task analysis diagram.

    PubMed

    Colligan, Lacey; Anderson, Janet E; Potts, Henry W W; Berman, Jonathan

    2010-01-07

    Many quality and safety improvement methods in healthcare rely on a complete and accurate map of the process. Process mapping in healthcare is often achieved using a sequential flow diagram, but there is little guidance available in the literature about the most effective type of process map to use. Moreover there is evidence that the organisation of information in an external representation affects reasoning and decision making. This exploratory study examined whether the type of process map - sequential or hierarchical - affects healthcare practitioners' judgments. A sequential and a hierarchical process map of a community-based anticoagulation clinic were produced based on data obtained from interviews, talk-throughs, attendance at a training session and examination of protocols and policies. Clinic practitioners were asked to specify the parts of the process that they judged to contain quality and safety concerns. The process maps were then shown to them in counter-balanced order and they were asked to circle on the diagrams the parts of the process where they had the greatest quality and safety concerns. A structured interview was then conducted, in which they were asked about various aspects of the diagrams. Quality and safety concerns cited by practitioners differed depending on whether they were or were not looking at a process map, and whether they were looking at a sequential diagram or a hierarchical diagram. More concerns were identified using the hierarchical diagram compared with the sequential diagram and more concerns were identified in relation to clinical work than administrative work. Participants' preference for the sequential or hierarchical diagram depended on the context in which they would be using it. The difficulties of determining the boundaries for the analysis and the granularity required were highlighted. The results indicated that the layout of a process map does influence perceptions of quality and safety problems in a process. 
In quality improvement work it is important to carefully consider the type of process map to be used and to consider using more than one map to ensure that different aspects of the process are captured.

  12. Enumerative and binomial sequential sampling plans for the multicolored Asian lady beetle (Coleoptera: Coccinellidae) in wine grapes.

    PubMed

    Galvan, T L; Burkness, E C; Hutchison, W D

    2007-06-01

    To develop a practical integrated pest management (IPM) system for the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in wine grapes, we assessed the spatial distribution of H. axyridis and developed eight sampling plans to estimate adult density or infestation level in grape clusters. We used 49 data sets collected from commercial vineyards in 2004 and 2005, in Minnesota and Wisconsin. Enumerative plans were developed using two precision levels (0.10 and 0.25); the six binomial plans reflected six unique action thresholds (3, 7, 12, 18, 22, and 31% of cluster samples infested with at least one H. axyridis). The spatial distribution of H. axyridis in wine grapes was aggregated, independent of cultivar and year, but it was more randomly distributed as mean density declined. The average sample number (ASN) for each sampling plan was determined using resampling software. For research purposes, an enumerative plan with a precision level of 0.10 (SE/X) resulted in a mean ASN of 546 clusters. For IPM applications, the enumerative plan with a precision level of 0.25 resulted in a mean ASN of 180 clusters. In contrast, the binomial plans resulted in much lower ASNs and provided high probabilities of arriving at correct "treat or no-treat" decisions, making these plans more efficient for IPM applications. For a tally threshold of one adult per cluster, the operating characteristic curves for the six action thresholds provided binomial sequential sampling plans with mean ASNs of only 19-26 clusters, and probabilities of making correct decisions between 83 and 96%. The benefits of the binomial sampling plans are discussed within the context of improving IPM programs for wine grapes.
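    Binomial sequential sampling plans of the kind described above are commonly built on Wald's sequential probability ratio test: clusters are inspected one at a time and sampling stops as soon as the evidence clearly favours a "treat" or "no-treat" decision. A minimal sketch follows; the `p0`/`p1` bounds and error rates are illustrative stand-ins, not the paper's calibrated action thresholds.

    ```python
    import math

    def sprt_binomial(observations, p0=0.07, p1=0.22, alpha=0.1, beta=0.1):
        """Wald's SPRT for a binomial proportion.

        observations: iterable of 0/1 flags (cluster infested or not).
        p0, p1: proportions bracketing a hypothetical action threshold.
        alpha, beta: tolerated Type I / Type II error rates.
        Returns (decision, number_of_clusters_inspected).
        """
        upper = math.log((1 - beta) / alpha)   # crossing -> "treat"
        lower = math.log(beta / (1 - alpha))   # crossing -> "no treat"
        llr = 0.0                              # running log-likelihood ratio
        for n, infested in enumerate(observations, start=1):
            if infested:
                llr += math.log(p1 / p0)
            else:
                llr += math.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "treat", n
            if llr <= lower:
                return "no treat", n
        return "undecided", len(observations)
    ```

    Because each observation moves the running ratio by a fixed step, runs of uniformly clean (or uniformly infested) clusters terminate quickly, which is the source of the low average sample numbers the abstract reports for the binomial plans.
    
    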

  13. Geochemical phase and particle size relationships of metals in urban road dust.

    PubMed

    Jayarathne, Ayomi; Egodawatta, Prasanna; Ayoko, Godwin A; Goonetilleke, Ashantha

    2017-11-01

    Detailed knowledge of the processes that metals undergo during dry weather periods whilst deposited on urban surfaces, and of their environmental significance, is essential to predict the potential influence of metals on stormwater quality in order to develop appropriate stormwater pollution mitigation measures. However, very limited research has been undertaken in this area. Accordingly, this study investigated the geochemical phase and particle size relationships of seven metals commonly associated with urban road dust, using sequential extraction to assess their mobility characteristics. Metals in the sequentially extracted fractions of exchangeable, reducible, oxidisable and residual were found to follow a similar trend for different land uses even though they had variable accumulation loads. The high affinity of Cd and Zn for exchangeable reactions in both bulk and size-fractionated solid samples confirmed their high mobility, while the significant enrichment of Ni and Cr in the stable residual fraction indicated a low risk of mobility. The study results also confirmed the availability of Cu, Pb and Mn in both stable and mobile fractions. The fine fraction of solids (<150 μm) and antecedent dry days can be highlighted as important parameters when determining the fate of metals associated with urban road dust. The outcomes from this study are expected to contribute to the development of effective stormwater pollution mitigation strategies by taking into consideration the metal-particulate relationships. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Adaptive compressive learning for prediction of protein-protein interactions from primary sequence.

    PubMed

    Zhang, Ya-Nan; Pan, Xiao-Yong; Huang, Yan; Shen, Hong-Bin

    2011-08-21

    Protein-protein interactions (PPIs) play an important role in biological processes. Although much effort has been devoted to the identification of novel PPIs by integrating experimental biological knowledge, many difficulties remain because sufficient protein structural and functional information is lacking. It is therefore highly desirable to develop methods for predicting PPIs based only on amino acid sequences. However, sequence-based predictors often struggle with high dimensionality, which causes over-fitting and high computational complexity, as well as with the redundancy of sequential feature vectors. In this paper, a novel computational approach based on compressed sensing theory is proposed to predict yeast Saccharomyces cerevisiae PPIs from primary sequence, and it has achieved promising results. The key advantage of the proposed compressed sensing algorithm is that it can compress the original high-dimensional protein sequential feature vector into a much lower-dimensional but more condensed space by exploiting the sparsity of the original signal. What makes compressed sensing especially attractive in protein sequence analysis is that the compressed signal can be reconstructed from far fewer measurements than traditional Nyquist sampling theory would consider necessary. Experimental results demonstrate that the proposed compressed sensing method is powerful for analyzing noisy biological data and reducing redundancy in feature vectors. The proposed method represents a new strategy for dealing with high-dimensional discrete protein models and has great potential to be extended to many other complicated biological systems. Copyright © 2011 Elsevier Ltd. All rights reserved.
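    The core compressed sensing idea the abstract relies on can be illustrated in a few lines: a sparse high-dimensional feature vector is compressed by a random measurement matrix, and the original can be recovered from far fewer measurements via a greedy solver such as orthogonal matching pursuit. The dimensions below are illustrative, not the paper's actual feature sizes, and this is a generic sketch of the technique rather than the authors' implementation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative dimensions: a 1000-dim sparse "feature vector"
    # compressed into 100 measurements; sparsity k = 5.
    d, m, k = 1000, 100, 5

    x = np.zeros(d)
    support = rng.choice(d, size=k, replace=False)
    x[support] = rng.normal(size=k)              # k-sparse signal

    phi = rng.normal(size=(m, d)) / np.sqrt(m)   # Gaussian measurement matrix
    y = phi @ x                                  # compressed representation (length m << d)

    def omp(phi, y, k):
        """Orthogonal matching pursuit: greedily recover a k-sparse signal."""
        residual, idx = y.copy(), []
        for _ in range(k):
            # Pick the column most correlated with the current residual.
            idx.append(int(np.argmax(np.abs(phi.T @ residual))))
            # Re-fit coefficients on all selected columns via least squares.
            coef, *_ = np.linalg.lstsq(phi[:, idx], y, rcond=None)
            residual = y - phi[:, idx] @ coef
        x_hat = np.zeros(phi.shape[1])
        x_hat[idx] = coef
        return x_hat

    x_hat = omp(phi, y, k)
    ```

    With m well above the k·log(d) regime, as here, the greedy recovery is typically exact up to numerical precision, which is what makes the compressed representation usable as a drop-in, lower-dimensional feature space.
    
    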

  15. Assessing of distribution, mobility and bioavailability of exogenous Pb in agricultural soils using isotopic labeling method coupled with BCR approach.

    PubMed

    Huang, Zhi-Yong; Xie, Hong; Cao, Ying-Lan; Cai, Chao; Zhang, Zhi

    2014-02-15

    The contamination of Pb in agricultural soils is an important ecological problem that potentially poses serious risks to human health through the food chain. Hence, the fate of exogenous Pb in agricultural soils needs to be explored in depth. By spiking soils with the stable enriched isotope (206)Pb, the contamination of exogenous Pb(2+) ions in three agricultural soils sampled from the estuary areas of the Jiulong River, China, was simulated in the present study, and the distribution, mobility and bioavailability of exogenous Pb in the soils were investigated using the isotopic labeling method coupled with a four-stage BCR (European Community Bureau of Reference) sequential extraction procedure. Results showed that about 60-85% of exogenous Pb was distributed in the reducible fraction, while less than 1.0% of exogenous Pb was found in the acid-extractable fraction. After planting, the amounts of exogenous Pb present in the acid-extractable, reducible and oxidizable fractions of rhizospheric soils decreased by 60-66%; part of the exogenous Pb was assimilated by plants, while most of the metal may have migrated downward owing to daily watering and fertilizer application. The results show that the isotopic labeling technique coupled with sequential extraction procedures enables exploration of the distribution, mobility and bioavailability of exogenous Pb in contaminated soils, which may be useful for further soil remediation. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. When good is stickier than bad: Understanding gain/loss asymmetries in sequential framing effects.

    PubMed

    Sparks, Jehan; Ledgerwood, Alison

    2017-08-01

    Considerable research has demonstrated the power of the current positive or negative frame to shape people's current judgments. But humans must often learn about positive and negative information as they encounter that information sequentially over time. It is therefore crucial to consider the potential importance of sequencing when developing an understanding of how humans think about valenced information. Indeed, recent work looking at sequentially encountered frames suggests that some frames can linger outside the context in which they are first encountered, sticking in the mind so that subsequent frames have a muted effect. The present research builds a comprehensive account of sequential framing effects in both the loss and the gain domains. After seeing information about a potential gain or loss framed in positive terms or negative terms, participants saw the same issue reframed in the opposing way. Across 5 studies and 1566 participants, we find accumulating evidence for the notion that in the gain domain, positive frames are stickier than negative frames for novel but not familiar scenarios, whereas in the loss domain, negative frames are always stickier than positive frames. Integrating regulatory focus theory with the literatures on negativity dominance and positivity offset, we develop a new and comprehensive account of sequential framing effects that emphasizes the adaptive value of positivity and negativity biases in specific contexts. Our findings highlight the fact that research conducted solely in the loss domain risks painting an incomplete and oversimplified picture of human bias and suggest new directions for future research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Solid-phase partitioning of mercury in artisanal gold mine tailings from selected key areas in Mindanao, Philippines, and its implications for mercury detoxification.

    PubMed

    Opiso, Einstine M; Aseneiro, John Paul J; Banda, Marybeth Hope T; Tabelin, Carlito B

    2018-03-01

    The solid-phase partitioning of mercury could provide necessary data for identifying remediation techniques for contaminated artisanal gold mine tailings. This study was conducted to determine the total mercury content of mine wastes and to identify its solid-phase partitioning through selective sequential extraction coupled with cold vapour atomic absorption spectroscopy. Samples from mine tailings and the carbon-in-pulp (CIP) process were obtained from selected key areas in Mindanao, Philippines. The results showed that mercury use is still prevalent among small-scale gold miners in the Philippines. Tailings after ball mill-gravity concentration (W-BM and Li-BM samples) from Mt Diwata and Libona contained high levels of mercury amounting to 25.024 and 6.5 mg kg⁻¹, respectively. The most prevalent form of mercury in the mine tailings was elemental/amalgamated mercury, followed by water-soluble, exchangeable, organic and strongly bound phases, respectively. In contrast, the mercury content of carbon-in-pulp residues was significantly lower at only 0.3 and 0.06 mg kg⁻¹ for P-CIP (Del Pilar) and W-CIP (Mt Diwata), respectively. The bulk of mercury in P-CIP samples was partitioned in the residual fraction, while in W-CIP samples water-soluble mercury predominated. Overall, this study has several important implications with regard to mercury detoxification of contaminated mine tailings from Mindanao, Philippines.

  18. 9 CFR 113.309 - Bovine Parainfluenza3 Vaccine.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... develop antibody titers of 1:32 or greater by day 6 ±2 days post-challenge. (8) A sequential test... parainfluenza, susceptible calves shall be used as test animals (20 vaccinates and five controls). Blood samples... negative at a 1:2 final serum dilution in a varying serum constant virus neutralization test with less than...

  19. 9 CFR 113.309 - Bovine Parainfluenza3 Vaccine.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... develop antibody titers of 1:32 or greater by day 6 ±2 days post-challenge. (8) A sequential test... parainfluenza, susceptible calves shall be used as test animals (20 vaccinates and five controls). Blood samples... negative at a 1:2 final serum dilution in a varying serum constant virus neutralization test with less than...

  20. Effects of an Elementary Strategy on Operations of Exclusion.

    ERIC Educational Resources Information Center

    Lawton, Joseph T.

    Effects of an advance organizer lesson (containing high-order science concepts relating to the law of capillary attraction, and an elementary problem-solving strategy for determining causal relations) were evaluated for a sample of 80 urban 6- and 10-year-old children. Significant sequential transfer effects were established from the lesson.…
