Costa, Marilia G; Barbosa, José C; Yamamoto, Pedro T
2007-01-01
Sequential sampling uses samples of variable size and has the advantage of reducing sampling time and cost compared with fixed-size sampling. To support adequate management of orthezia, sequential sampling plans were developed for orchards under low and high infestation. Data were collected in Matão, SP, in commercial stands of the orange variety 'Pêra Rio' at five, nine, and 15 years of age. Twenty samplings were performed across the whole area of each stand by recording the presence or absence of scales on plants, with each plot comprising ten plants. After observing that in all three stands the scale population was distributed according to a contagious model, fitting the negative binomial distribution in most samplings, two sequential sampling plans were constructed according to the sequential likelihood ratio test (SLRT). To construct these plans, an economic threshold of 2% was adopted and the type I and II error probabilities were fixed at alpha = beta = 0.10. Results showed that the maximum numbers of samples expected to determine the need for control were 172 and 76 for stands with low and high infestation, respectively.
Serra, Gerardo V.; Porta, Norma C. La; Avalos, Susana; Mazzuferi, Vilma
2013-01-01
The alfalfa caterpillar, Colias lesbia (Fabricius) (Lepidoptera: Pieridae), is a major pest of alfalfa, Medicago sativa L. (Fabales: Fabaceae), crops in Argentina. Its management is based mainly on chemical control of larvae whenever the larvae exceed the action threshold. To develop and validate fixed-precision sequential sampling plans, an intensive sampling programme for C. lesbia eggs was carried out in two alfalfa plots located in the Province of Córdoba, Argentina, from 1999 to 2002. Using Resampling for Validation of Sampling Plans software, 12 additional independent data sets were used to validate the sequential sampling plan at precision levels of 0.10 and 0.25 (SE/mean). For a range of mean densities of 0.10 to 8.35 eggs/sample, an average sample size of only 27 and 26 sample units was required to achieve a desired precision level of 0.25 for the sampling plans of Green and Kuno, respectively. As the precision level was increased to 0.10, the average sample size increased to 161 and 157 sample units for the sampling plans of Green and Kuno, respectively. We recommend using Green's sequential sampling plan because it is less sensitive to changes in egg density. These sampling plans are a valuable tool for researchers to study population dynamics and to evaluate integrated pest management strategies. PMID:23909840
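Green's and Kuno's plans stop sampling once a cumulative count crosses a stop line derived from Taylor's power law. As a sketch of the mechanics only (the coefficients a and b and the density below are hypothetical, not the study's parameterization), Green's stop line can be written as:

```python
def green_stop_line(n, a, b, D):
    """Green's fixed-precision stop line: the cumulative count at which
    sampling can stop after n sample units, given Taylor's power law
    coefficients a, b (variance = a * mean**b) and precision D = SE/mean."""
    return (D ** 2 * n ** (b - 1) / a) ** (1.0 / (b - 2))

def sample_until_precise(counts, a, b, D, n_min=5):
    """Accumulate counts sample by sample and stop once the cumulative
    total crosses the stop line (after a minimum of n_min samples)."""
    total = 0.0
    for n, c in enumerate(counts, start=1):
        total += c
        if n >= n_min and total >= green_stop_line(n, a, b, D):
            return n, total
    return len(counts), total  # desired precision not reached within the data
```

With the illustrative coefficients a = 2.0 and b = 1.5, a field running at a steady 4 eggs per sample stops after 16 units (cumulative count 64) at D = 0.25, matching the closed-form requirement n = a·m^(b−2)/D².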
Burkness, Eric C; Hutchison, W D
2009-10-01
Populations of cabbage looper, Trichoplusia ni (Lepidoptera: Noctuidae), were sampled in experimental plots and commercial fields of cabbage (Brassica spp.) in Minnesota during 1998-1999 as part of a larger effort to implement an integrated pest management program. Using a resampling approach and Wald's sequential probability ratio test, sampling plans with different sampling parameters were evaluated using independent presence/absence and enumerative data. Evaluations and comparisons of the different sampling plans were made based on the operating characteristic and average sample number functions generated for each plan and through the use of a decision probability matrix. Values for upper and lower decision boundaries, sequential error rates (alpha, beta), and tally threshold were modified to determine parameter influence on the operating characteristic and average sample number functions. The following parameters resulted in the most desirable operating characteristic and average sample number functions: action threshold of 0.1 proportion of plants infested, tally threshold of 1, alpha = beta = 0.1, upper boundary of 0.15, lower boundary of 0.05, and resampling with replacement. We found that sampling parameters can be modified and evaluated using resampling software to achieve desirable operating characteristic and average sample number functions. Moreover, management of T. ni by using binomial sequential sampling should provide a good balance between cost and reliability by minimizing sample size and maintaining a high level of correct decisions (>95%) to treat or not treat.
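The treat/no-treat decision lines of Wald's SPRT follow directly from the parameters named above (lower boundary 0.05, upper boundary 0.15, alpha = beta = 0.1). A minimal sketch of how the boundaries are computed for presence/absence counts:

```python
import math

def sprt_boundaries(p0, p1, alpha, beta):
    """Wald's SPRT decision lines for presence/absence sampling.  After n
    plants with d infested:
        d <= slope * n + low   -> no-treat decision
        d >= slope * n + high  -> treat decision
        otherwise              -> take another sample."""
    k = math.log(p1 * (1 - p0) / (p0 * (1 - p1)))  # log-LR step per infested plant
    slope = math.log((1 - p0) / (1 - p1)) / k
    low = math.log(beta / (1 - alpha)) / k
    high = math.log((1 - beta) / alpha) / k
    return slope, low, high

# Boundary and error parameters from the abstract above
slope, low, high = sprt_boundaries(0.05, 0.15, 0.1, 0.1)
```

The slope falls between the two boundary proportions, and with alpha = beta the two intercepts are symmetric about zero.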
Rigamonti, Ivo E; Brambilla, Carla; Colleoni, Emanuele; Jermini, Mauro; Trivellone, Valeria; Baumgärtner, Johann
2016-04-01
The paper deals with the study of the spatial distribution and the design of sampling plans for estimating nymph densities of the grape leafhopper Scaphoideus titanus Ball in vine plant canopies. In a reference vineyard sampled for model parameterization, leaf samples were repeatedly taken according to a multistage, stratified, random sampling procedure, and data were subjected to an ANOVA. There were no significant differences in density either among the strata within the vineyard or between the two strata with basal and apical leaves. The significant differences between densities on trunk and productive shoots led to the adoption of two-stage (leaves and plants) and three-stage (leaves, shoots, and plants) sampling plans for trunk shoot- and productive shoot-inhabiting individuals, respectively. The mean crowding to mean relationship used to analyze the nymphs' spatial distribution revealed aggregated distributions. In both the enumerative and the sequential enumerative sampling plans, the number of leaves of trunk shoots, and of leaves and shoots of productive shoots, was kept constant while the number of plants varied. Data collected in additional vineyards were used to test the applicability of the distribution model and the sampling plans. The tests confirmed the applicability 1) of the mean crowding to mean regression model on the plant and leaf stages for representing trunk shoot-inhabiting distributions, and on the plant, shoot, and leaf stages for productive shoot-inhabiting nymphs, 2) of the enumerative sampling plan, and 3) of the sequential enumerative sampling plan. In general, sequential enumerative sampling was more cost efficient than enumerative sampling.
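The mean crowding to mean relationship used above is Iwao's patchiness regression: Lloyd's mean crowding, m* = m + s²/m − 1, is regressed on the mean m, and a slope above 1 indicates aggregation. A minimal sketch with synthetic mean-variance pairs (not the study's data):

```python
import numpy as np

def iwao_regression(means, variances):
    """Iwao's patchiness regression: Lloyd's mean crowding
    m* = m + s**2/m - 1 regressed on the mean m.  A slope beta > 1
    indicates an aggregated spatial distribution."""
    m = np.asarray(means, dtype=float)
    v = np.asarray(variances, dtype=float)
    m_star = m + v / m - 1.0
    beta, alpha = np.polyfit(m, m_star, 1)  # slope, intercept
    return alpha, beta
```

For example, mean-variance pairs generated from a negative binomial with k = 2 (variance = m + m²/2) give an intercept near 0 and a slope near 1.5, i.e., aggregation.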
Harold R. Offord
1966-01-01
Sequential sampling based on a negative binomial distribution of ribes populations required less than half the time taken by regular systematic line transect sampling in a comparison test. It gave the same control decision as the regular method in 9 of 13 field trials. A computer program that permits sequential plans to be built readily for other white pine regions is...
Sequential Sampling Plan of Anthonomus grandis (Coleoptera: Curculionidae) in Cotton Plants.
Grigolli, J F J; Souza, L A; Mota, T A; Fernandes, M G; Busoli, A C
2017-04-01
The boll weevil, Anthonomus grandis grandis Boheman (Coleoptera: Curculionidae), is one of the most important pests of cotton production worldwide. The objective of this work was to develop a sequential sampling plan for the boll weevil. The studies were conducted in Maracaju, MS, Brazil, in two seasons with cotton cultivar FM 993. A 10,000-m2 area of cotton was subdivided into 100 10- by 10-m plots, and five plants per plot were evaluated weekly, recording the number of squares with feeding + oviposition punctures of A. grandis on each plant. A sequential sampling plan based on the maximum likelihood ratio test was developed, using a 10% threshold level of attacked squares and a 5% security level. The type I and type II error rates used were 0.05, as recommended for studies with insects. Frequency distribution fitting was divided into two phases: the negative binomial distribution best fit the data up to 85 DAE (Phase I), and from then on the Poisson distribution gave the best fit (Phase II). The equations that define decision-making for Phase I are S0 = -5.1743 + 0.5730N and S1 = 5.1743 + 0.5730N, and for Phase II are S0 = -4.2479 + 0.5771N and S1 = 4.2479 + 0.5771N. The sequential sampling plan developed indicated that the maximum numbers of sample units expected for decision-making are ∼39 and 31 samples for Phases I and II, respectively. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
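The published stop lines S0 and S1 translate directly into a decision rule: after inspecting N sample units, the cumulative count of attacked squares is compared against both lines. A small sketch (function and variable names are mine; the coefficients are those reported above):

```python
def boll_weevil_decision(n, cum_attacked, phase):
    """Decision rule from the published stop lines (phase 1: up to 85 DAE,
    phase 2: afterwards).  n = sample units inspected, cum_attacked =
    cumulative number of attacked squares."""
    intercept, slope = (5.1743, 0.5730) if phase == 1 else (4.2479, 0.5771)
    if cum_attacked <= -intercept + slope * n:   # at or below S0
        return "no control needed"
    if cum_attacked >= intercept + slope * n:    # at or above S1
        return "apply control"
    return "continue sampling"
```

Counts falling between the two lines simply trigger another sample, which is why the expected number of sample units is bounded (∼39 and 31 for the two phases).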
McGraw, Benjamin A; Koppenhöfer, Albrecht M
2009-06-01
Binomial sequential sampling plans were developed to forecast larval damage to golf course turfgrass by the weevil Listronotus maculicollis Kirby (Coleoptera: Curculionidae) and to aid in the development of integrated pest management programs for the weevil. Populations of emerging overwintered adults were sampled over a 2-yr period to determine the relationship between adult counts, larval density, and turfgrass damage. Larval density and the composition of preferred host plants (Poa annua L.) significantly affected the expression of turfgrass damage. Multiple regression indicated that damage may occur in moderately mixed P. annua stands with as few as 10 larvae per 0.09 m2, whereas >150 larvae were required before damage became apparent in pure Agrostis stolonifera L. plots. Adult counts during peaks in emergence, as well as cumulative counts across the emergence period, were significantly correlated with future larval densities. Eight binomial sequential sampling plans based on two tally thresholds for classifying infestation (T = 1 and 2 adults) and four adult density thresholds (0.5, 0.85, 1.15, and 1.35 per 3.34 m2) were developed to forecast the likelihood of turfgrass damage using adult counts during peak emergence. Resampling for Validation of Sample Plans software was used to validate the sampling plans with field-collected data sets. All sampling plans were found to deliver accurate classifications (correct decisions were made between 84.4 and 96.8% of the time) in a practical timeframe (average sampling cost < 22.7 min).
Cocco, Arturo; Serra, Giuseppe; Lentini, Andrea; Deliperi, Salvatore; Delrio, Gavino
2015-09-01
The within- and between-plant distribution of the tomato leafminer, Tuta absoluta (Meyrick), was investigated in order to define action thresholds based on leaf infestation and to propose enumerative and binomial sequential sampling plans for pest management applications in protected crops. The pest spatial distribution was aggregated between plants, and median leaves were the most suitable sample to evaluate the pest density. Action thresholds of 36 and 48%, 43 and 56% and 60 and 73% infested leaves, corresponding to economic thresholds of 1 and 3% damaged fruits, were defined for tomato cultivars with big, medium and small fruits respectively. Green's method was a more suitable enumerative sampling plan as it required a lower sampling effort. Binomial sampling plans needed lower average sample sizes than enumerative plans to make a treatment decision, with probabilities of error of <0.10. The enumerative sampling plan required 87 or 343 leaves to estimate the population density in extensive or intensive ecological studies respectively. Binomial plans would be more practical and efficient for control purposes, needing average sample sizes of 17, 20 and 14 leaves to take a pest management decision in order to avoid fruit damage higher than 1% in cultivars with big, medium and small fruits respectively. © 2014 Society of Chemical Industry.
Shahbi, M; Rajabpour, A
2017-08-01
Phthorimaea operculella Zeller is an important pest of potato in Iran. Spatial distribution and fixed-precision sequential sampling for population estimation of the pest on two potato cultivars, Arinda® and Sante®, were studied in two separate potato fields during two growing seasons (2013-2014 and 2014-2015). Spatial distribution was investigated by Taylor's power law and Iwao's patchiness regression. Results showed that the spatial distribution of eggs and larvae was random. In contrast to Iwao's patchiness regression, Taylor's power law provided a highly significant relationship between variance and mean density. Therefore, a fixed-precision sequential sampling plan was developed with Green's model at two precision levels, 0.25 and 0.1. The optimum sample size on the Arinda® and Sante® cultivars at the 0.25 precision level ranged from 151 to 813 and from 149 to 802 leaves, respectively. At the 0.1 precision level, the sample sizes varied from 1054 to 5083 and from 1050 to 5100 leaves for the Arinda® and Sante® cultivars, respectively. The optimum sample sizes for the two cultivars, despite their different resistance levels, were therefore not significantly different. According to the calculated stop lines, sampling must continue until the cumulative number of eggs + larvae reaches 15-16 or 96-101 individuals at precision levels of 0.25 or 0.1, respectively. The performance of the sampling plan was validated by resampling analysis using Resampling for Validation of Sampling Plans software. The sampling plan provided in this study can be used to obtain a rapid estimate of pest density with minimal effort.
Development of sampling plans for cotton bolls injured by stink bugs (Hemiptera: Pentatomidae).
Reay-Jones, F P F; Toews, M D; Greene, J K; Reeves, R B
2010-04-01
Cotton, Gossypium hirsutum L., bolls were sampled in commercial fields for stink bug (Hemiptera: Pentatomidae) injury during 2007 and 2008 in South Carolina and Georgia. Across both years of this study, boll-injury percentages averaged 14.8 +/- 0.3 (SEM). At average boll injury treatment levels of 10, 20, 30, and 50%, the percentage of samples with at least one injured boll was 82, 97, 100, and 100%, respectively. Percentage of field-sampling date combinations with average injury < 10, 20, 30, and 50% was 35, 80, 95, and 99%, respectively. At the average of 14.8% boll injury or 2.9 injured bolls per 20-boll sample, 112 samples at Dx = 0.1 (within 10% of the mean) were required for population estimation, compared with only 15 samples at Dx = 0.3. Using a sample size of 20 bolls, our study indicated that, at the 10% threshold and alpha = beta = 0.2 (with 80% confidence), control was not needed when <1.03 bolls were injured. The sampling plan required continued sampling for a range of 1.03-3.8 injured bolls per 20-boll sample. Only when injury was > 3.8 injured bolls per 20-boll sample was a control measure needed. Sequential sampling plans were also determined for thresholds of 20, 30, and 50% injured bolls. Sample sizes for sequential sampling plans were significantly reduced when compared with a fixed sampling plan (n=10) for all thresholds and error rates.
Parajulee, M N; Shrestha, R B; Leser, J F
2006-04-01
A 2-yr field study was conducted to examine the effectiveness of two sampling methods (visual and plant washing techniques) for western flower thrips, Frankliniella occidentalis (Pergande), and five sampling methods (visual, beat bucket, drop cloth, sweep net, and vacuum) for cotton fleahopper, Pseudatomoscelis seriatus (Reuter), in Texas cotton, Gossypium hirsutum (L.), and to develop sequential sampling plans for each pest. The plant washing technique gave results similar to the visual method in detecting adult thrips, but it detected a significantly higher number of thrips larvae than visual sampling. Visual sampling detected the highest number of fleahoppers, followed by beat bucket, drop cloth, vacuum, and sweep net sampling, with no significant difference in catch efficiency between the vacuum and sweep net methods. However, based on fixed-precision cost reliability, sweep net sampling was the most cost-effective method, followed by vacuum, beat bucket, drop cloth, and visual sampling. Taylor's power law analysis revealed that the field dispersion patterns of both thrips and fleahoppers were aggregated throughout the crop growing season. For thrips management decisions based on visual sampling (0.25 precision), 15 plants were estimated to be the minimum sample size when the estimated population density was one thrips per plant, whereas the minimum sample size was nine plants when thrips density approached 10 thrips per plant. The minimum visual sample size for cotton fleahoppers was 16 plants when the density was one fleahopper per plant, but the sample size decreased rapidly with increasing fleahopper density, requiring only four plants to be sampled when the density was 10 fleahoppers per plant. Sequential sampling plans were developed and validated with independent data for both thrips and cotton fleahoppers.
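Taylor's power law analysis of the kind used here regresses log variance on log mean; the fitted coefficients also yield the minimum sample size for a fixed precision D, reproducing the pattern reported above (required sample size falls as density rises). A sketch with synthetic mean-variance data (the coefficients a = 2, b = 1.5 are illustrative, not the study's):

```python
import numpy as np

def taylor_power_law(means, variances):
    """Fit s**2 = a * m**b by regressing log variance on log mean;
    b > 1 suggests an aggregated dispersion pattern, b = 1 a random one."""
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(log_a), b

def min_sample_size(mean, a, b, D=0.25):
    """Minimum n for fixed precision D = SE/mean: n = a * m**(b-2) / D**2."""
    return int(np.ceil(a * mean ** (b - 2) / D ** 2))
```

Because b < 2 makes the exponent on the mean negative, fewer samples are needed at higher densities, the same qualitative behavior as the 15-plant versus 9-plant thrips result above.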
Clarke-Harris, Dionne; Fleischer, Shelby J
2003-06-01
Although vegetable amaranth, Amaranthus viridis L. and A. dubius Mart. ex Thell., production and economic importance are increasing in diversified peri-urban farms in Jamaica, lepidopteran herbivory is common even during weekly pyrethroid applications. We developed and validated a sampling plan, and investigated insecticides with new modes of action, for a complex of five species (Pyralidae: Spoladea recurvalis (F.) and Herpetogramma bipunctalis (F.); Noctuidae: Spodoptera exigua (Hübner), S. frugiperda (J. E. Smith), and S. eridania Stoll). Significant within-plant variation occurred with H. bipunctalis, and a six-leaf sample unit including leaves from the inner and outer whorl was selected to sample all species. Larval counts best fit a negative binomial distribution. We developed a sequential sampling plan using a threshold of one larva per sample unit and the fitted distribution with a k(c) of 0.645. When compared with a fixed plan of 25 plants, sequential sampling recommended the same management decision on 87.5%, additional samples on 9.4%, and inaccurate recommendations on 3.1% of 32 farms, while reducing sample size by 46%. Insecticide application frequency was reduced 33-60% when management decisions were based on sampled data rather than grower standards, with no effect on crop damage. Damage remained high or variable (10-46%) with pyrethroid applications. Lepidopteran control was dramatically improved with ecdysone agonists (tebufenozide) or microbial metabolites (spinosyns and emamectin benzoate). This work facilitates resistance management efforts concurrent with the introduction of newer modes of action for lepidopteran control in leafy vegetable production in the Caribbean.
Galvan, T L; Burkness, E C; Hutchison, W D
2007-06-01
To develop a practical integrated pest management (IPM) system for the multicolored Asian lady beetle, Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae), in wine grapes, we assessed the spatial distribution of H. axyridis and developed eight sampling plans to estimate adult density or infestation level in grape clusters. We used 49 data sets collected from commercial vineyards in 2004 and 2005, in Minnesota and Wisconsin. Enumerative plans were developed using two precision levels (0.10 and 0.25); the six binomial plans reflected six unique action thresholds (3, 7, 12, 18, 22, and 31% of cluster samples infested with at least one H. axyridis). The spatial distribution of H. axyridis in wine grapes was aggregated, independent of cultivar and year, but it was more randomly distributed as mean density declined. The average sample number (ASN) for each sampling plan was determined using resampling software. For research purposes, an enumerative plan with a precision level of 0.10 (SE/X) resulted in a mean ASN of 546 clusters. For IPM applications, the enumerative plan with a precision level of 0.25 resulted in a mean ASN of 180 clusters. In contrast, the binomial plans resulted in much lower ASNs and provided high probabilities of arriving at correct "treat or no-treat" decisions, making these plans more efficient for IPM applications. For a tally threshold of one adult per cluster, the operating characteristic curves for the six action thresholds provided binomial sequential sampling plans with mean ASNs of only 19-26 clusters, and probabilities of making correct decisions between 83 and 96%. The benefits of the binomial sampling plans are discussed within the context of improving IPM programs for wine grapes.
Paula-Moraes, S; Burkness, E C; Hunt, T E; Wright, R J; Hein, G L; Hutchison, W D
2011-12-01
Striacosta albicosta (Smith) (Lepidoptera: Noctuidae) is a native pest of dry beans (Phaseolus vulgaris L.) and corn (Zea mays L.). As a result of larval feeding damage on corn ears, S. albicosta has a narrow treatment window; thus, early detection of the pest in the field is essential, and egg mass sampling has become a popular monitoring tool. Three action thresholds for field and sweet corn currently are used by crop consultants: 4% of plants infested with egg masses on sweet corn in the silking-tasseling stage, 8% of plants infested with egg masses on field corn with approximately 95% tasseled, and 20% of plants infested with egg masses on field corn during the mid-milk stage. The current monitoring recommendation is to sample 20 plants at each of five locations per field (100 plants total). In an effort to develop a more cost-effective sampling plan for S. albicosta egg masses, several alternative binomial sampling plans were developed using Wald's sequential probability ratio test, and validated using Resampling for Validation of Sampling Plans (RVSP) software. The benefit-cost ratio also was calculated and used to determine the final selection of sampling plans. Based on the final sampling plans selected for each action threshold, the average sample number required to reach a treat or no-treat decision ranged from 38 to 41 plants per field. This represents a significant savings in sampling cost over the current recommendation of 100 plants.
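Validation with RVSP-style resampling amounts to simulating many fields at a known infestation level, running the sequential plan on each, and recording the proportion of treat decisions and the average sample number (ASN). A simplified sketch (the bounds p0 = 0.02 and p1 = 0.06 below are illustrative values around a 4% action threshold, not the validated plans' parameters):

```python
import math
import random

def sprt_run(field, p0, p1, alpha, beta):
    """Run a binomial SPRT over a sequence of 0/1 plant records; return
    (decision, samples used)."""
    k = math.log(p1 * (1 - p0) / (p0 * (1 - p1)))
    slope = math.log((1 - p0) / (1 - p1)) / k
    low = math.log(beta / (1 - alpha)) / k
    high = math.log((1 - beta) / alpha) / k
    d = 0
    for n, plant in enumerate(field, start=1):
        d += plant
        if d <= slope * n + low:
            return "no-treat", n
        if d >= slope * n + high:
            return "treat", n
    # no decision within the field: fall back to the observed infestation rate
    return ("treat" if d / len(field) >= (p0 + p1) / 2 else "no-treat"), len(field)

def validate(true_p, n_fields=500, field_size=400, seed=1):
    """RVSP-style check: simulate fields at infestation true_p and report
    (proportion of treat decisions, mean sample number)."""
    rng = random.Random(seed)
    treats = total = 0
    for _ in range(n_fields):
        field = [1 if rng.random() < true_p else 0 for _ in range(field_size)]
        decision, n = sprt_run(field, 0.02, 0.06, 0.1, 0.1)
        treats += decision == "treat"
        total += n
    return treats / n_fields, total / n_fields
```

Sweeping `true_p` across densities traces out the operating characteristic and ASN curves that this record uses to compare candidate plans.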
Plane-Based Sampling for Ray Casting Algorithm in Sequential Medical Images
Lin, Lili; Chen, Shengyong; Shao, Yan; Gu, Zichun
2013-01-01
This paper proposes a plane-based sampling method to improve the traditional Ray Casting Algorithm (RCA) for fast reconstruction of a three-dimensional biomedical model from sequential images. In the novel method, the optical properties of all sampling points depend on the intersection points when a ray travels through an equidistant parallel plane cluster of the volume dataset. The results show that the method improves rendering speed by over three times compared with the conventional algorithm, while image quality is well preserved. PMID:23424608
Carleton, R. Drew; Heard, Stephen B.; Silk, Peter J.
2013-01-01
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with “pre-sampling” data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n∼100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n∼25–40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods. PMID:24376556
Fixed precision sampling plans for white apple leafhopper (Homoptera: Cicadellidae) on apple.
Beers, Elizabeth H; Jones, Vincent P
2004-10-01
Constant precision sampling plans for the white apple leafhopper, Typhlocyba pomaria McAtee, were developed so that it could be used as an indicator species for system stability as new integrated pest management programs without broad-spectrum pesticides are developed. Taylor's power law was used to model the relationship between the mean and the variance, and Green's constant precision sequential sample equation was used to develop sampling plans. Bootstrap simulations of the sampling plans showed greater precision (D = 0.25) than the desired precision (Do = 0.3), particularly at low mean population densities. We found that by adjusting the Do value in Green's equation to 0.4, we were able to reduce the average sample number by 25% while providing an average D = 0.31. The sampling plan described allows T. pomaria to be used as a reasonable indicator species of agroecosystem stability in Washington apple orchards.
Increasing efficiency of preclinical research by group sequential designs
Piper, Sophie K.; Rex, Andre; Florez-Vargas, Oscar; Karystianis, George; Schneider, Alice; Wellwood, Ian; Siegerink, Bob; Ioannidis, John P. A.; Kimmelman, Jonathan; Dirnagl, Ulrich
2017-01-01
Despite the potential benefits of sequential designs, studies evaluating treatments or experimental manipulations in preclinical experimental biomedicine almost exclusively use classical block designs. Our aim with this article is to bring the existing methodology of group sequential designs to the attention of researchers in the preclinical field and to clearly illustrate its potential utility. Group sequential designs can offer higher efficiency than traditional methods and are increasingly used in clinical trials. Using simulated data, we demonstrate that group sequential designs have the potential to improve the efficiency of experimental studies, even when sample sizes are very small, as is currently prevalent in preclinical experimental biomedicine. When simulating data with a large effect size of d = 1 and a sample size of n = 18 per group, sequential frequentist analysis consumes in the long run only around 80% of the planned number of experimental units. In larger trials (n = 36 per group), additional stopping rules for futility lead to savings of up to 30% in resources compared to block designs. We argue that these savings should be invested to increase sample sizes and hence power, since the currently underpowered experiments in preclinical biomedicine are a major threat to the value and predictiveness of this research domain. PMID:28282371
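The efficiency gain can be illustrated with a two-look group sequential design: run an interim z-test on half the data with a Pocock-type critical value (approximately 2.178 for two looks at an overall two-sided alpha of 0.05), stop early for efficacy, and otherwise complete the trial. A simulation sketch under the abstract's scenario (d = 1, n = 18 per group); this is my simplified reconstruction, not the authors' simulation code:

```python
import random
from statistics import mean

def simulate_group_sequential(effect, n_per_group, n_trials=2000,
                              z_crit=2.178, seed=7):
    """Two-look design with a Pocock-type critical value.  Outcomes are
    normal with unit variance; the interim look uses half the planned
    sample.  Returns (power, mean N used per group)."""
    rng = random.Random(seed)
    half = n_per_group // 2
    hits = used = 0
    for _ in range(n_trials):
        treat = [rng.gauss(effect, 1) for _ in range(n_per_group)]
        ctrl = [rng.gauss(0, 1) for _ in range(n_per_group)]
        # interim z-test on the first half of each arm
        z_interim = (mean(treat[:half]) - mean(ctrl[:half])) / (2 / half) ** 0.5
        if abs(z_interim) >= z_crit:      # early stop for efficacy
            hits += 1
            used += half
            continue
        z_final = (mean(treat) - mean(ctrl)) / (2 / n_per_group) ** 0.5
        hits += abs(z_final) >= z_crit
        used += n_per_group
    return hits / n_trials, used / n_trials
```

With a large true effect, many simulated trials stop at the interim look, so the average number of experimental units consumed per group falls well below the planned n, the kind of saving the abstract quantifies at around 20%.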
ERIC Educational Resources Information Center
Rhee, Eunjeong; Lee, Bo Hyun; Kim, Boyoung; Ha, Gyuyoung; Lee, Sang Min
2016-01-01
The current study investigated how the five components of planned happenstance skills are related to vocational identity statuses. For determination of relationships, cluster and discriminant analyses were conducted sequentially on a sample of 515 university students in South Korea. Cluster analysis revealed vocational identity statuses to be…
Tran, Anh K; Koch, Robert L
2017-06-01
The soybean aphid, Aphis glycines Matsumura, is an economically important soybean pest. Many studies have demonstrated that predatory insects are important in suppressing A. glycines population growth. However, to improve the utilization of predators in A. glycines management, sampling plans need to be developed and validated for predators. Aphid predators were sampled in soybean fields near Rosemount, Minnesota, during 2006-2007 and 2013-2015 with sample sizes of 20-80 plants. Sampling plans were developed for Orius insidiosus (Say), Harmonia axyridis (Pallas), and all aphidophagous Coccinellidae species combined. Taylor's power law parameters from the regression of log variance versus log mean suggested aggregated spatial patterns for combined immature and adult stages of O. insidiosus, H. axyridis, and Coccinellidae in soybean fields. Using the parameters from Taylor's power law and Green's method, sequential fixed-precision sampling plans were developed to estimate the density of each predator taxon at desired precision levels of 0.10 and 0.25. To achieve a desired precision of 0.10 and 0.25, the average sample number (ASN) ranged from 398 to 713 and from 64 to 108 soybean plants, respectively, across all species. The resulting ASNs were relatively large and impractical for most purposes; therefore, the desired precision levels were adjusted to determine the precision associated with a more practical ASN. Final analysis indicated that an ASN of 38 soybean plants provided precision of 0.32-0.40 for the predators. The sampling plans developed here should provide guidance for improved estimation of predator densities for A. glycines pest management programs and for research purposes. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Flexible sequential designs for multi-arm clinical trials.
Magirr, D; Stallard, N; Jaki, T
2014-08-30
Adaptive designs that are based on group-sequential approaches have the benefit of being efficient, as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs', on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive' designs can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker-positive and marker-negative subgroups and the prevalence of marker-positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker-negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size compared with the group sequential design when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stopping at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow a sample size increase if the observed effect size is smaller than planned. Differing opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides the most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd.
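The notion of "robust power across the effect size range of interest" can be made concrete by computing the power profile of a candidate design over plausible effect sizes. The sketch below does this for a fixed-sample two-arm z-test; the sample size and effect-size grid are illustrative, not the paper's candidate designs.

```python
from statistics import NormalDist

norm = NormalDist()

def power_fixed(n_per_arm: int, delta: float, sigma: float = 1.0,
                alpha: float = 0.025) -> float:
    """One-sided power of a fixed two-arm z-test with n_per_arm per arm,
    for a true mean difference delta and known standard deviation sigma."""
    se = sigma * (2 / n_per_arm) ** 0.5
    z_crit = norm.inv_cdf(1 - alpha)
    return 1 - norm.cdf(z_crit - delta / se)

# Power profile: a design sized for delta = 0.5 (about 80% power at
# n = 64/arm) loses power quickly if the true effect is smaller.
for delta in (0.3, 0.4, 0.5):
    print(f"delta={delta}: power={power_fixed(64, delta):.3f}")
```

An optimality criterion of the kind the abstract describes would score each candidate design by some aggregate of such a profile (e.g. average or minimum power over the range) alongside its expected sample size.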
Jose F. Negron; Willis C. Schaupp; Erik Johnson
2000-01-01
The Douglas-fir beetle, Dendroctonus pseudotsugae Hopkins, attacks Douglas-fir, Pseudotsuga menziesii (Mirb.) Franco (Pinaceae), throughout western North America. Periodic outbreaks cause increased mortality of its host. Land managers and forest health specialists often need to determine population trends of this insect. Bark samples were obtained from 326 trees...
Lara, Jesus R; Hoddle, Mark S
2015-08-01
Oligonychus perseae Tuttle, Baker, & Abatiello is a foliar pest of 'Hass' avocados [Persea americana Miller (Lauraceae)]. The recommended action threshold is 50-100 motile mites per leaf, but this count range and other ecological factors associated with O. perseae infestations limit the application of enumerative sampling plans in the field. Consequently, a comprehensive modeling approach was implemented to compare the practical application of various binomial sampling models for decision-making for O. perseae in California. An initial set of sequential binomial sampling models was developed using three mean-proportion modeling techniques (i.e., Taylor's power law, maximum likelihood, and an empirical model) in combination with leaf-infestation tally thresholds of either one or two mites. Model performance was evaluated using a robust mite count database consisting of >20,000 Hass avocado leaves infested with varying densities of O. perseae and collected from multiple locations. Operating characteristic and average sample number results for the sequential binomial models were used as the basis to develop and validate a standardized fixed-size binomial sampling model, with guidelines on sample tree and leaf selection within blocks of avocado trees. This final validated model requires a sampling cost of 30 leaves and takes into account the spatial dynamics of O. perseae to make reliable mite density classifications for a 50-mite action threshold. Recommendations for implementing this fixed-size binomial sampling plan to assess densities of O. perseae in commercial California avocado orchards are discussed. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Bansal, A; Kapoor, R; Singh, S K; Kumar, N; Oinam, A S; Sharma, S C
2012-07-01
Dosimetric and radiobiological comparison of two radiation schedules in localized carcinoma prostate: standard Three-Dimensional Conformal Radiotherapy (3DCRT) followed by an Intensity Modulated Radiotherapy (IMRT) boost (sequential-IMRT) versus Simultaneous Integrated Boost IMRT (SIB-IMRT). Thirty patients were enrolled. In all, the target consisted of PTV P + SV (prostate and seminal vesicles) and PTV LN (lymph nodes), where PTV refers to planning target volume, and the critical structures included the bladder, rectum and small bowel. All patients were treated with the sequential-IMRT plan, but for dosimetric comparison an SIB-IMRT plan was also created. The prescription dose to PTV P + SV was 74 Gy in both strategies, but with different dose per fraction; the dose to PTV LN was 50 Gy delivered in 25 fractions over 5 weeks for sequential-IMRT and 54 Gy delivered in 27 fractions over 5.5 weeks for SIB-IMRT. The treatment plans were compared in terms of dose-volume histograms. Also, the Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP) obtained with the two plans were compared. The volume of rectum receiving 70 Gy or more (V > 70 Gy) was reduced to 18.23% with SIB-IMRT from 22.81% with sequential-IMRT. SIB-IMRT reduced the mean doses to the bladder and rectum by 13% and 17%, respectively, as compared to sequential-IMRT. NTCPs of 0.86 ± 0.75% and 0.01 ± 0.02% for the bladder, 5.87 ± 2.58% and 4.31 ± 2.61% for the rectum, and 8.83 ± 7.08% and 8.25 ± 7.98% for the bowel were seen with the sequential-IMRT and SIB-IMRT plans, respectively. For equal PTV coverage, SIB-IMRT markedly reduced doses to critical structures and should therefore be considered as the strategy for dose escalation. SIB-IMRT achieves a lower NTCP than sequential-IMRT.
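NTCP values like those reported above are produced by treatment-planning software; as a hedged illustration, the Lyman-Kutcher-Burman (LKB) model with generalized-EUD volume reduction is one standard way to obtain them. The DVH and the parameter values (td50, m, a) below are invented for illustration, not taken from this study.

```python
from statistics import NormalDist

norm = NormalDist()

def geud(doses_gy, volumes, a):
    """Generalized equivalent uniform dose for a DVH given as parallel
    lists of bin doses (Gy) and fractional volumes (summing to 1)."""
    return sum(v * d ** a for d, v in zip(doses_gy, volumes)) ** (1 / a)

def ntcp_lkb(doses_gy, volumes, td50, m, a):
    """LKB normal tissue complication probability via gEUD reduction.
    td50, m, and a are organ-specific fit parameters (assumed here)."""
    t = (geud(doses_gy, volumes, a) - td50) / (m * td50)
    return norm.cdf(t)

# Toy rectum DVH; td50 = 76.9 Gy, m = 0.13, a = 1/0.09 are illustrative
# literature-style values, not those used in the study above.
dvh_dose = [20, 40, 60, 70]
dvh_vol = [0.4, 0.3, 0.2, 0.1]
print(ntcp_lkb(dvh_dose, dvh_vol, td50=76.9, m=0.13, a=1 / 0.09))
```

With a large volume-effect exponent a, the gEUD is dominated by the hottest DVH bins, which is why reducing the high-dose rectal volume (as SIB-IMRT does above) lowers the modeled NTCP.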
An Aid for Planning Programs in Career Education.
ERIC Educational Resources Information Center
Illinois State Board of Vocational Education and Rehabilitation, Springfield. Div. of Vocational and Technical Education.
Offered as an aid for developing sequential occupational education programs, the publication presents a concept in career education planning beginning with kindergarten and continuing through adult years. Career education goals are defined, and steps in planning sequential programs are outlined as follows: (1) organization of the occupational…
A procedure for forecasting western larch seed crops
Arthur L. Roe
1966-01-01
Successful regeneration depends upon good coordination between seed production and seedbed preparation. To aid forest managers in scheduling seedbed preparation, a simple sequential sampling plan for estimating potential cone crops as much as a year in advance of the seed fall was developed and is described herein. With advance knowledge of the cone crop prospects, the...
Analyses of group sequential clinical trials.
Koepcke, W
1989-12-01
In the first part of this article the methodology of group sequential plans is reviewed. After the basic definition of such plans is introduced, their main properties are shown, and at the end of this section three different plans (Pocock, O'Brien-Fleming, Koepcke) are compared. In the second part of the article some unresolved issues and recent developments in the application of group sequential methods to long-term controlled clinical trials are discussed. These include deviations from the assumptions, life table methods, multiple-arm clinical trials, multiple outcome measures, and confidence intervals.
van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels
2012-01-01
This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use RACE/A to model data from two variants of a picture-word interference task in a psychological refractory period design. These models will demonstrate how RACE/A enables interactions between sequential sampling and long-term declarative learning, and between sequential sampling and task control. In a traditional sequential sampling model, the onset of the process within the task is unclear, as is the number of sampling processes. RACE/A provides a theoretical basis for estimating the onset of sequential sampling processes during task execution and allows for easy modeling of multiple sequential sampling processes within a task. Copyright © 2011 Cognitive Science Society, Inc.
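The accumulator idea behind sequential sampling models can be sketched with a minimal racing simulation: each response alternative accumulates noisy evidence until one crosses a threshold. This is a generic toy model, not the RACE/A implementation, and every parameter value below is arbitrary.

```python
import random

def race_trial(drifts, threshold=30.0, noise=1.0, dt=1.0, rng=random):
    """One trial of a simple racing-accumulator model. Each alternative
    accumulates evidence at its drift rate plus Gaussian noise; the first
    accumulator to reach the threshold determines the response.
    Returns (winning_index, finishing_time)."""
    acc = [0.0] * len(drifts)
    t = 0.0
    while True:
        t += dt
        for i, mu in enumerate(drifts):
            acc[i] += mu * dt + rng.gauss(0.0, noise)
        best = max(range(len(acc)), key=lambda i: acc[i])
        if acc[best] >= threshold:
            return best, t

random.seed(1)
wins = sum(1 for _ in range(200) if race_trial([1.0, 0.5])[0] == 0)
print(f"higher-drift accumulator won {wins}/200 trials")
```

In an architecture-embedded version, the drift rates would themselves be set by declarative-memory activations and task control, which is the integration the abstract describes.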
Lessio, Federico; Alma, Alberto
2006-04-01
The spatial distribution of the nymphs of Scaphoideus titanus Ball (Homoptera: Cicadellidae), the vector of grapevine flavescence dorée (Candidatus Phytoplasma vitis, 16Sr-V), was studied by applying Taylor's power law. Studies were conducted from 2002 to 2005 in organic and conventional vineyards of Piedmont, northern Italy. Minimum sample sizes and fixed-precision-level stop lines were calculated to develop appropriate sampling plans. Model validation was performed, using independent field data, by means of the Resampling Validation of Sample Plans (RVSP) resampling software. The nymphal distribution, analyzed via Taylor's power law, was aggregated, with b = 1.49. A sample of 32 plants was adequate at low pest densities with a precision level of D0 = 0.30, but for a more accurate estimate (D0 = 0.10) the required sample size rises to 292 plants. Green's fixed-precision-level stop lines seem to be more suitable for field sampling: RVSP simulations of this sampling plan showed precision levels very close to the desired ones. However, at a prefixed precision level of 0.10, sampling would become too time-consuming, whereas a precision level of 0.25 is easily achievable. How these results could influence the correct application of the compulsory control of S. titanus and flavescence dorée in Italy is discussed.
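Under Taylor's power law s² = a·mᵇ, both the minimum sample size for a fixed precision D (= SE/mean) and Green's stop line have closed forms. The sketch below uses the abstract's b = 1.49 together with an assumed coefficient a = 2.0 (the abstract does not report a), chosen only because it roughly reproduces the reported sample sizes.

```python
import math

def min_sample_size(mean, a, b, precision):
    """Minimum sample size for precision D = SE/mean, with the variance
    modelled by Taylor's power law s^2 = a * mean^b."""
    return a * mean ** (b - 2) / precision ** 2

def green_stop_line(n, a, b, precision):
    """Green's stop line: the cumulative count at which sampling can
    stop after n sample units while achieving the target precision."""
    return (precision ** 2 / a) ** (1 / (b - 2)) * n ** ((b - 1) / (b - 2))

# b = 1.49 from the study; a = 2.0 is a hypothetical value.
for d0 in (0.30, 0.10):
    n = math.ceil(min_sample_size(0.5, 2.0, 1.49, d0))
    print(f"D0={d0}: {n} plants needed at a mean of 0.5 nymphs/plant")
```

The two formulas are consistent: at the minimum sample size for a given mean density, the stop line equals the expected cumulative count (n × mean).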
Paech, Juliane; Lippke, Sonia
2017-01-01
Recommendations for physical activity and for fruit and vegetable intake are often not translated into action due to deficits in self-regulatory strategies. The present study examines the interplay of intention, intergoal facilitation, action and coping planning, and self-regulation in facilitating physical activity and healthy nutrition. In an online study of a non-clinical sample, intentions and behaviours were assessed at baseline; intergoal facilitation and planning at 4-week follow-up; and self-regulation, physical activity and nutrition at 6-month follow-up. The final sample (n = 711) was 27.2% male, with ages ranging from 16 to 78 years. Sequential mediations were tested. Intergoal facilitation, planning and self-regulation mediated the link from intention to physical activity and nutrition; the specific indirect effects were significant. Findings suggest that intergoal facilitation and self-regulation can facilitate behaviour change, in addition to planning. Cross-behavioural mechanisms might facilitate lifestyle change in several domains.
Performance review using sequential sampling and a practice computer.
Difford, F
1988-06-01
The use of sequential sample analysis for repeated performance review is described with examples from several areas of practice. The value of a practice computer in providing a random sample from a complete population, evaluating the parameters of a sequential procedure, and producing a structured worksheet is discussed. It is suggested that sequential analysis has advantages over conventional sampling in the area of performance review in general practice.
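Sequential analysis for this kind of audit is classically done with Wald's sequential probability ratio test (SPRT) on a proportion, e.g. the fraction of records failing a review criterion. The sketch below computes the SPRT decision lines; the rates and error levels are illustrative, not taken from the article.

```python
import math

def sprt_boundaries(p0, p1, alpha=0.05, beta=0.10):
    """Wald SPRT decision lines for a binomial proportion, returned as
    intercepts and slope so that after n observations with d 'failures':
      d >= slope * n + h1  -> conclude the rate is at p1 (reject H0)
      d <= slope * n + h0  -> conclude the rate is at p0 (accept H0)
    and otherwise sampling continues."""
    g1 = math.log(p1 / p0)
    g2 = math.log((1 - p0) / (1 - p1))
    denom = g1 + g2
    h0 = math.log(beta / (1 - alpha)) / denom
    h1 = math.log((1 - beta) / alpha) / denom
    slope = g2 / denom
    return h0, h1, slope

# Acceptable failure rate 5% versus unacceptable rate 15%.
h0, h1, s = sprt_boundaries(p0=0.05, p1=0.15)
print(f"accept line: d = {s:.3f}n + {h0:.2f}")
print(f"reject line: d = {s:.3f}n + {h1:.2f}")
```

In a practice-computer setting, each randomly drawn record would be scored against the criterion and the running failure count plotted against these two lines until one is crossed.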
Pritchett, Yili; Jemiai, Yannis; Chang, Yuchiao; Bhan, Ishir; Agarwal, Rajiv; Zoccali, Carmine; Wanner, Christoph; Lloyd-Jones, Donald; Cannata-Andía, Jorge B; Thompson, Taylor; Appelbaum, Evan; Audhya, Paul; Andress, Dennis; Zhang, Wuyan; Solomon, Scott; Manning, Warren J; Thadhani, Ravi
2011-04-01
Chronic kidney disease is associated with a marked increase in risk for left ventricular hypertrophy and cardiovascular mortality compared with the general population. Therapy with vitamin D receptor activators has been linked with reduced mortality in chronic kidney disease and an improvement in left ventricular hypertrophy in animal studies. PRIMO (Paricalcitol capsule benefits in Renal failure-Induced cardiac MOrbidity) is a multinational, multicenter randomized controlled trial to assess the effects of paricalcitol (a selective vitamin D receptor activator) on mild to moderate left ventricular hypertrophy in patients with chronic kidney disease. Subjects with mild to moderate chronic kidney disease are randomized to paricalcitol or placebo after left ventricular hypertrophy is confirmed on a cardiac echocardiogram. Cardiac magnetic resonance imaging is then used to assess left ventricular mass index at baseline, 24 and 48 weeks, which is the primary efficacy endpoint of the study. Because of limited prior data for estimating the sample size, a maximum information group sequential design with sample size re-estimation is implemented to allow sample size adjustment based on the nuisance parameter estimated from the interim data. An interim efficacy analysis is planned at a pre-specified time point, conditioned on the status of enrollment. The decision to increase the sample size depends on the observed treatment effect. A repeated measures analysis model using available data at Weeks 24 and 48, with a backup ANCOVA model analyzing change from baseline to the final nonmissing observation, is pre-specified to evaluate the treatment effect. The gamma family of spending functions is employed to control the family-wise Type I error rate, as stopping for success is planned at the interim efficacy analysis.
If enrollment is slower than anticipated, the smaller sample size available at the interim efficacy analysis and the greater percentage of missing Week 48 data might decrease the accuracy of parameter estimation, for either the nuisance parameter or the treatment effect, which might in turn affect the interim decision-making. Combining a group sequential design with sample size re-estimation in clinical trial design has the potential to improve efficiency and increase the probability of trial success while ensuring the integrity of the study.
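The nuisance-parameter-based re-estimation described above can be sketched as follows: the design is powered with a planning value of σ, and at the interim the observed σ estimate replaces it while the targeted treatment effect stays fixed. All numbers, including the interim data, are invented for illustration.

```python
import math
from statistics import NormalDist, stdev

norm = NormalDist()

def n_per_arm(delta, sigma, alpha=0.025, power=0.90):
    """Per-arm sample size of a two-arm z-test for mean difference delta,
    one-sided alpha, and the given power."""
    za = norm.inv_cdf(1 - alpha)
    zb = norm.inv_cdf(power)
    return math.ceil(2 * ((za + zb) * sigma / delta) ** 2)

# Planning stage: sigma assumed to be 10, targeted difference 5.
print("planned n/arm:", n_per_arm(delta=5, sigma=10))

# Interim: the nuisance parameter (sigma) is re-estimated from observed
# data and the sample size is updated; the targeted effect is unchanged.
interim = [48.1, 61.3, 40.2, 55.7, 63.0, 38.4, 52.9, 57.5]
print("re-estimated n/arm:", n_per_arm(delta=5, sigma=stdev(interim)))
```

In a maximum information design, the equivalent operation is to keep recruiting until the accumulated statistical information, rather than a fixed patient count, reaches its target.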
Fitts, Douglas A
2017-09-21
The variable criteria sequential stopping rule (vcSSR) is an efficient way to add sample size to planned ANOVA tests while holding the observed rate of Type I errors, α_o, constant. The only difference from regular null hypothesis testing is that criteria for stopping the experiment are obtained from a table based on the desired power, rate of Type I errors, and beginning sample size. The vcSSR was developed using between-subjects ANOVAs, but it should work with p values from any type of F test. In the present study, α_o remained constant at the nominal level when using the previously published table of criteria with repeated measures designs with various numbers of treatments per subject, Type I error rates, values of ρ, and four different sample size models. New power curves allow researchers to select the optimal sample size model for a repeated measures experiment. The criteria held α_o constant either when used with a multiple correlation that varied the sample size model and the number of predictor variables, or when used with MANOVA with multiple groups and two levels of a within-subject variable at various levels of ρ. Although not recommended for use with χ² tests such as the Friedman rank ANOVA test, the vcSSR produces predictable results based on the relation between F and χ². Together, the data confirm the view that the vcSSR can be used to control Type I errors during sequential sampling with any t- or F-statistic rather than being restricted to certain ANOVA designs.
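The general shape of such a sequential stopping rule can be simulated: test after an initial sample, stop and reject if p is below a lower criterion, stop and accept if p is above an upper criterion, otherwise add subjects and retest. The criteria below are placeholders, not values from Fitts' published tables, and a one-sample z-test stands in for the ANOVA.

```python
import random
from statistics import NormalDist

norm = NormalDist()

def sequential_z_experiment(effect, n_start=10, n_add=5, n_max=40,
                            p_reject=0.01, p_continue=0.36, rng=random):
    """Sketch of a variable-criteria sequential stopping rule using a
    one-sample z-test (sigma known = 1). (p_reject, p_continue) are
    placeholder criteria, not the published vcSSR table values."""
    data = [rng.gauss(effect, 1.0) for _ in range(n_start)]
    while True:
        z = (sum(data) / len(data)) * len(data) ** 0.5
        p = 1 - norm.cdf(z)
        if p <= p_reject:
            return "reject"
        if p >= p_continue or len(data) >= n_max:
            return "accept"
        data += [rng.gauss(effect, 1.0) for _ in range(n_add)]

# Under H0 (effect = 0), the observed rejection rate stays modest even
# though the data are tested repeatedly, because the criteria are strict.
random.seed(7)
null_rejections = sum(sequential_z_experiment(0.0) == "reject"
                      for _ in range(2000))
print(f"observed Type I rate ≈ {null_rejections / 2000:.3f}")
```

The vcSSR's contribution is precisely the tabulated (p_reject, p_continue) pairs that make the observed Type I rate equal the nominal α for a given starting sample size and power.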
An exploratory sequential design to validate measures of moral emotions.
Márquez, Margarita G; Delgado, Ana R
2017-05-01
This paper presents an exploratory and sequential mixed-methods approach to validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase, when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.
Multi-point objective-oriented sequential sampling strategy for constrained robust design
NASA Astrophysics Data System (ADS)
Zhu, Ping; Zhang, Siliang; Chen, Wei
2015-03-01
Metamodelling techniques are widely used to approximate system responses of expensive simulation models. In association with the use of metamodels, objective-oriented sequential sampling methods have been demonstrated to be effective in balancing the need for searching an optimal solution versus reducing the metamodelling uncertainty. However, existing infilling criteria are developed for deterministic problems and restricted to one sampling point in one iteration. To exploit the use of multiple samples and identify the true robust solution in fewer iterations, a multi-point objective-oriented sequential sampling strategy is proposed for constrained robust design problems. In this article, earlier development of objective-oriented sequential sampling strategy for unconstrained robust design is first extended to constrained problems. Next, a double-loop multi-point sequential sampling strategy is developed. The proposed methods are validated using two mathematical examples followed by a highly nonlinear automotive crashworthiness design example. The results show that the proposed method can mitigate the effect of both metamodelling uncertainty and design uncertainty, and identify the robust design solution more efficiently than the single-point sequential sampling approach.
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part, because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling, but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and probability of sampling a unit occupied by the rare species. Efficiency measures the precision of population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species was approximately 1.5 times higher for TSSAV compared to SRS and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS). 
We have found that design performance, especially for adaptive designs, is often case-specific. The efficiency of adaptive designs is especially sensitive to spatial distribution. We therefore recommend simulations tailored to the application of interest for evaluating designs in preparation for sampling rare and clustered populations.
Kafeshani, Farzaneh Alizadeh; Rajabpour, Ali; Aghajanzadeh, Sirous; Gholamian, Esmaeil; Farkhari, Mohammad
2018-04-02
Aphis spiraecola Patch, Aphis gossypii Glover, and Toxoptera aurantii Boyer de Fonscolombe are three important aphid pests of citrus orchards. In this study, the spatial distributions of the aphids on two orange species, Satsuma mandarin and Thomson navel, were evaluated using Taylor's power law and Iwao's patchiness regression. In addition, a fixed-precision sequential sampling plan was developed for each species on each host plant using Green's model at precision levels of 0.25 and 0.1. The results revealed that the spatial distribution parameters, and therefore the sampling plans, differed significantly with aphid and host plant species. Taylor's power law provided a better fit to the data than Iwao's patchiness regression. Except for T. aurantii on Thomson navel orange, which showed a regular dispersion pattern, the spatial distribution patterns of the aphids were aggregated on both hosts. Optimum sample sizes varied from 30 to 2,061 shoots on Satsuma mandarin and from 1 to 1,622 shoots on Thomson navel orange, depending on aphid species and desired precision level. Calculated stop lines on Satsuma mandarin and Thomson navel orange ranged from 0.48 to 19 and from 0.19 to 80.4 aphids per 24 shoots, respectively, according to aphid species and desired precision level. The performance of the sampling plans was validated by resampling analysis using the Resampling for Validation of Sampling Plans (RVSP) software. These sampling plans are useful for IPM programs against aphids in citrus orchards.
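RVSP-style validation can be sketched by bootstrapping counts from a field data set through the sampling plan and recording where Green's stop line is crossed. The field counts and Taylor coefficients below are hypothetical stand-ins, not the study's data.

```python
import random

def green_stop_line(n, a, b, d0):
    """Green's stop line: cumulative count above which sampling stops."""
    return (d0 ** 2 / a) ** (1 / (b - 2)) * n ** ((b - 1) / (b - 2))

def resample_plan(counts, a, b, d0, n_min=5, n_max=500, rng=random):
    """One RVSP-style bootstrap run: resample field counts with
    replacement until the cumulative total crosses the stop line;
    return (sample_size, estimated_mean)."""
    total, n = 0, 0
    while n < n_max:
        n += 1
        total += rng.choice(counts)
        if n >= n_min and total >= green_stop_line(n, a, b, d0):
            break
    return n, total / n

# Hypothetical field counts and Taylor coefficients (a and b assumed).
random.seed(3)
field = [0, 0, 1, 0, 2, 5, 0, 1, 3, 0, 0, 8, 1, 0, 2]
sizes = [resample_plan(field, a=2.0, b=1.49, d0=0.25)[0] for _ in range(300)]
print(f"average sample size: {sum(sizes) / len(sizes):.1f}")
```

Repeating such runs over data sets spanning a range of densities yields the achieved-precision and average-sample-number curves used to judge the plan.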
Decision-theoretic approach to data acquisition for transit operations planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ritchie, S.G.
The most costly element of transportation planning and modeling activities in the past has usually been that of data acquisition. This is even truer today, when the unit costs of data collection are increasing rapidly while budgets are severely limited by continuing policies of fiscal austerity in the public sector. The overall objectives of this research were to improve the decisions and decision-making capabilities of transit operators or planners in short-range transit planning, and to improve the quality and cost-effectiveness of associated route- or corridor-level data collection and service monitoring activities. A new approach was presented for sequentially updating the parameters of both simple and multiple linear regression models with stochastic regressors, and for determining the expected value of sample information and expected net gain of sampling for associated sample designs. A new approach was also presented for estimating and updating (both spatially and temporally) the parameters of multinomial logit discrete choice models, and for determining associated optimal sample designs for attribute-based and choice-based sampling methods. The approach provides an effective framework for addressing the issues of optimal sampling method and sample size, which to date have been largely unresolved. The application of these methodologies and the feasibility of the decision-theoretic approach were illustrated with a hypothetical case study example.
One-sided truncated sequential t-test: application to natural resource sampling
Gary W. Fowler; William G. O' Regan
1974-01-01
A new procedure for constructing one-sided truncated sequential t-tests and its application to natural resource sampling are described. Monte Carlo procedures were used to develop a series of one-sided truncated sequential t-tests and the associated approximations to the operating characteristic and average sample number functions. Different truncation points and...
An adaptive two-stage sequential design for sampling rare and clustered populations
Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.
2008-01-01
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher-density primary sample units are usually of more interest than lower-density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
The Neural Representation of Prospective Choice during Spatial Planning and Decisions
Kaplan, Raphael; Koster, Raphael; Penny, William D.; Burgess, Neil; Friston, Karl J.
2017-01-01
We are remarkably adept at inferring the consequences of our actions, yet the neuronal mechanisms that allow us to plan a sequence of novel choices remain unclear. We used functional magnetic resonance imaging (fMRI) to investigate how the human brain plans the shortest path to a goal in novel mazes with one (shallow maze) or two (deep maze) choice points. We observed two distinct anterior prefrontal responses to demanding choices at the second choice point: one in rostrodorsal medial prefrontal cortex (rd-mPFC)/superior frontal gyrus (SFG) that was also sensitive to (deactivated by) demanding initial choices and another in lateral frontopolar cortex (lFPC), which was only engaged by demanding choices at the second choice point. Furthermore, we identified hippocampal responses during planning that correlated with subsequent choice accuracy and response time, particularly in mazes affording sequential choices. Psychophysiological interaction (PPI) analyses showed that coupling between the hippocampus and rd-mPFC increases during sequential (deep versus shallow) planning and is higher before correct versus incorrect choices. In short, using a naturalistic spatial planning paradigm, we reveal how the human brain represents sequential choices during planning without extensive training. Our data highlight a network centred on the cortical midline and hippocampus that allows us to make prospective choices while maintaining initial choices during planning in novel environments. PMID:28081125
Ashley, Kevin; Applegate, Gregory T; Marcy, A Dale; Drake, Pamela L; Pierce, Paul A; Carabin, Nathalie; Demange, Martine
2009-02-01
Because toxicities may differ for Cr(VI) compounds of varying solubility, some countries and organizations have promulgated different occupational exposure limits (OELs) for soluble and insoluble hexavalent chromium (Cr(VI)) compounds, and analytical methods are needed to determine these species in workplace air samples. To address this need, international standard methods ASTM D6832 and ISO 16740 have been published that describe sequential extraction techniques for soluble and insoluble Cr(VI) in samples collected from occupational settings. However, no published performance data were previously available for these Cr(VI) sequential extraction procedures. In this work, the sequential extraction methods outlined in the relevant international standards were investigated. The procedures tested involved the use of either deionized water or an ammonium sulfate/ammonium hydroxide buffer solution to target soluble Cr(VI) species. This was followed by extraction in a sodium carbonate/sodium hydroxide buffer solution to dissolve insoluble Cr(VI) compounds. Three-step sequential extraction with (1) water, (2) sulfate buffer and (3) carbonate buffer was also investigated. Sequential extractions were carried out on spiked samples of soluble, sparingly soluble and insoluble Cr(VI) compounds, and analyses were then generally carried out by using the diphenylcarbazide method. Similar experiments were performed on paint pigment samples and on airborne particulate filter samples collected from stainless steel welding. Potential interferences from soluble and insoluble Cr(III) compounds, as well as from Fe(II), were investigated. Interferences from Cr(III) species were generally absent, while the presence of Fe(II) resulted in low Cr(VI) recoveries. Two-step sequential extraction of spiked samples with (first) either water or sulfate buffer, and then carbonate buffer, yielded quantitative recoveries of soluble Cr(VI) and insoluble Cr(VI), respectively. 
Three-step sequential extraction gave excessively high recoveries of soluble Cr(VI), low recoveries of sparingly soluble Cr(VI), and quantitative recoveries of insoluble Cr(VI). Experiments on paint pigment samples using two-step extraction with water and carbonate buffer yielded varying percentages of relative fractions of soluble and insoluble Cr(VI). Sequential extractions of stainless steel welding fume air filter samples demonstrated the predominance of soluble Cr(VI) compounds in such samples. The performance data obtained in this work support the Cr(VI) sequential extraction procedures described in the international standards.
A Bayesian sequential design using alpha spending function to control type I error.
Zhu, Han; Yu, Qingzhao
2017-10-01
We propose in this article a Bayesian sequential design using alpha spending functions to control the overall type I error in phase III clinical trials. We provide algorithms to calculate critical values, power, and sample sizes for the proposed design. Sensitivity analysis is implemented to check the effects of different prior distributions, and conservative priors are recommended. We compare the power and actual sample sizes of the proposed Bayesian sequential design with different alpha spending functions through simulations. We also compare the power of the proposed method with a frequentist sequential design using the same alpha spending function. Simulations show that, at the same sample size, the proposed method provides larger power than the corresponding frequentist sequential design. It also has larger power than a traditional Bayesian sequential design that sets equal critical values for all interim analyses. Compared with other alpha spending functions, the O'Brien-Fleming alpha spending function has the largest power and is the most conservative in the sense that, at the same sample size, the null hypothesis is the least likely to be rejected at an early stage of the trial. Finally, we show that adding a futility stopping step to the Bayesian sequential design can reduce both the overall type I error and the actual sample sizes.
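The alpha spending functions compared above can be sketched in a few lines. The abstract names only O'Brien-Fleming; the Pocock-type function below is a standard comparator added here as an assumption, and both follow the usual Lan-DeMets forms:

```python
import math
from statistics import NormalDist

def obrien_fleming_spend(t, alpha=0.05):
    """O'Brien-Fleming-type spending: alpha(t) = 2 - 2*Phi(z_{alpha/2} / sqrt(t)),
    where t is the information fraction in (0, 1]. Spends very little alpha early."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return 2 - 2 * NormalDist().cdf(z / math.sqrt(t))

def pocock_spend(t, alpha=0.05):
    """Pocock-type spending: alpha(t) = alpha * ln(1 + (e - 1) * t)."""
    return alpha * math.log(1 + (math.e - 1) * t)

# Cumulative type I error allowed at four equally spaced interim looks:
for t in (0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  OF={obrien_fleming_spend(t):.5f}  Pocock={pocock_spend(t):.5f}")
```

At t = 1 both functions spend the full alpha; at early looks the O'Brien-Fleming function spends far less, which is the conservatism the abstract describes.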
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shusharina, N; Khan, F; Sharp, G
Purpose: To determine the dose level and timing of the boost in locally advanced lung cancer patients with confirmed tumor recurrence, by comparing the impact of different dose escalation strategies on the therapeutic ratio. Methods: We selected eighteen patients with advanced NSCLC and confirmed recurrence. For each patient, a base IMRT plan to 60 Gy prescribed to the PTV was created. We then compared three dose escalation strategies: a uniform escalation to the original PTV, and an escalation to a PET-defined target planned either sequentially or concurrently. The PET-defined targets were delineated as biologically-weighed regions on a pre-treatment 18F-FDG PET. The maximal achievable dose, without violating the OAR constraints, was identified for each boosting method. The EUD for the target, spinal cord, combined lung, and esophagus was compared for each plan. Results: The average prescribed dose was 70.4±13.9 Gy for the uniform boost, 88.5±15.9 Gy for the sequential boost and 89.1±16.5 Gy for the concurrent boost. The size of the boost planning volume was 12.8% (range: 1.4-27.9%) of the PTV. The most prescription-limiting dose constraint was the V70 of the esophagus. The EUD within the target increased by 10.6 Gy for the uniform boost, by 31.4 Gy for the sequential boost and by 38.2 Gy for the concurrent boost. The EUD for OARs increased by the following amounts: spinal cord, 3.1 Gy for the uniform boost, 2.8 Gy for the sequential boost, 5.8 Gy for the concurrent boost; combined lung, 1.6 Gy for uniform, 1.1 Gy for sequential, 2.8 Gy for concurrent; esophagus, 4.2 Gy for uniform, 1.3 Gy for sequential, 5.6 Gy for concurrent. Conclusion: Dose escalation to a biologically-weighed gross tumor volume defined on a pre-treatment 18F-FDG PET may improve the therapeutic ratio without breaching predefined OAR constraints. A sequential boost provides better sparing of OARs than a concurrent boost.
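The EUD comparisons above are commonly computed with the generalized EUD; the abstract does not spell out its formula, so the standard Niemierko-style form is assumed here:

```python
def generalized_eud(doses, volumes, a):
    """Generalized equivalent uniform dose, gEUD = (sum_i v_i * D_i**a)**(1/a),
    with v_i fractional volumes (normalized to sum to 1) and D_i dose bins in Gy.
    Large positive a emphasizes hot spots (serial OARs such as spinal cord);
    negative a emphasizes cold spots (targets); a = 1 gives the mean dose."""
    total = sum(volumes)
    return sum((v / total) * d ** a for d, v in zip(doses, volumes)) ** (1.0 / a)
```

For a perfectly uniform dose the gEUD equals that dose for any a, which is the defining property of the measure.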
Movement plans for posture selection do not transfer across hands
Schütz, Christoph; Schack, Thomas
2015-01-01
In a sequential task, the grasp postures people select depend on their movement history. This motor hysteresis effect results from the reuse of former movement plans and reduces the cognitive cost of movement planning. Movement plans for hand trajectories transfer not only across successive trials, but also across hands. We therefore asked whether such a transfer would also be found in movement plans for hand postures. To this end, we designed a sequential, continuous posture selection task. Participants had to open a column of drawers with cylindrical knobs in ascending and descending sequences. A hand switch was required in each sequence. Hand pro/supination was analyzed directly before and after the hand switch. Results showed that hysteresis effects were present directly before, but absent directly after, the hand switch. This indicates that, in the current study, movement plans for hand postures transferred across trials but not across hands. PMID:26441734
Rossitto, Giacomo; Battistel, Michele; Barbiero, Giulio; Bisogni, Valeria; Maiolino, Giuseppe; Diego, Miotto; Seccia, Teresa M; Rossi, Gian Paolo
2018-02-01
The pulsatile secretion of adrenocortical hormones and the stress reaction occurring when starting adrenal vein sampling (AVS) can affect the selectivity, and also the assessment of lateralization, when sequential blood sampling is used. We therefore tested the hypothesis that a simulated sequential blood sampling could decrease the diagnostic accuracy of the lateralization index for identification of aldosterone-producing adenoma (APA), as compared with bilaterally simultaneous AVS. In 138 consecutive patients who underwent subtyping of primary aldosteronism, we compared the results obtained simultaneously bilaterally when starting AVS (t-15) and 15 min after (t0) with those gained with a simulated sequential right-to-left AVS technique (R ⇒ L), created by combining hormonal values obtained at t-15 and at t0. The concordance between simultaneously obtained values at t-15 and t0, and between simultaneously obtained values and values gained with the sequential R ⇒ L technique, was also assessed. We found a marked interindividual variability of lateralization index values in the patients with bilaterally selective AVS at both time points. However, overall the lateralization index simultaneously determined at t0 provided a more accurate identification of APA than the simulated sequential lateralization index (R ⇒ L) (P = 0.001). Moreover, regardless of which side was sampled first, the sequential AVS technique induced a sequence-dependent overestimation of the lateralization index. While in APA patients the concordance between simultaneous AVS at t0 and t-15, and between simultaneous t0 and the sequential technique, was moderate-to-good (K = 0.55 and 0.66, respectively), in non-APA patients it was poor (K = 0.12 and 0.13, respectively). Sequential AVS generates factitious between-sides gradients, which lower its diagnostic accuracy, likely because of the stress reaction arising upon starting AVS.
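The selectivity and lateralization indices discussed above are ratios of hormone concentrations. The abstract does not give the formulas, so the sketch below uses definitions common to comparable AVS studies (including the simultaneous-sampling study elsewhere in this collection, which uses cutoffs of 2 for both); function names are illustrative:

```python
def selectivity_index(cortisol_adrenal, cortisol_peripheral):
    """Adrenal-to-peripheral cortisol ratio; a common cutoff is >= 2
    for a sample to be considered selective."""
    return cortisol_adrenal / cortisol_peripheral

def lateralization_index(aldo_left, cort_left, aldo_right, cort_right):
    """Ratio of the higher to the lower adrenal aldosterone/cortisol ratio;
    a value above the chosen cutoff suggests unilateral disease."""
    r_left = aldo_left / cort_left
    r_right = aldo_right / cort_right
    return max(r_left, r_right) / min(r_left, r_right)

# Hypothetical values: left adrenal dominant.
si = selectivity_index(cortisol_adrenal=120.0, cortisol_peripheral=12.0)   # 10.0
li = lateralization_index(aldo_left=400.0, cort_left=20.0,
                          aldo_right=50.0, cort_right=25.0)                # 10.0
```

The study's point is that when the two sides are sampled minutes apart, pulsatile secretion can shift these ratios between draws and inflate the lateralization index.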
Sequential Tests of Multiple Hypotheses Controlling Type I and II Familywise Error Rates
Bartroff, Jay; Song, Jinlin
2014-01-01
This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm’s (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study. PMID:25092948
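The fixed-sample procedure that inspired the sequential Holm procedure can be sketched in a few lines. This is Holm's (1979) step-down test itself, not the sequential variant the paper proposes:

```python
def holm_step_down(p_values, alpha=0.05):
    """Holm's (1979) step-down procedure: compares the ordered p-values
    p_(1) <= ... <= p_(m) against alpha/m, alpha/(m-1), ..., alpha, rejecting
    until the first failure. Controls the FWER at alpha for any dependence.
    Returns the set of rejected hypothesis indices."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = set()
    for k, i in enumerate(order):
        if p_values[i] <= alpha / (m - k):
            rejected.add(i)
        else:
            break  # once one hypothesis is retained, all later ones are too
    return rejected
```

With p-values (0.001, 0.01, 0.03, 0.04) and alpha = 0.05, the thresholds are 0.0125, 0.0167, 0.025, 0.05, so only the first two hypotheses are rejected; the sequential version of the paper applies the same step-down logic to streams of sequential test statistics.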
Manguy, Alys-Marie; Joubert, Lynette; Bansemer, Leah
2016-09-01
The objectives of this article are to explore demographic and service usage data gained through a clinical data-mining audit and to suggest recommendations for a social work service delivery model and for future research. The method was a clinical data-mining audit of 100 sequentially sampled cases, gathering quantitative demographic and service usage data. Descriptive analysis of the file audit data revealed trends with the potential to inform service delivery and usage; key areas of the results included patient demographics, family involvement and impact, and child safety and risk issues. Transport accidents involving children often also involve other family members. Care planning must take into account psychosocial issues including patient and family emotional responses, availability of primary carers, and other practical needs that may affect recovery and discharge planning. This study provides evidence to plan for further research and the development of more integrated models of care.
DOT National Transportation Integrated Search
2001-10-01
This document is the third in a series of five that present the sequential results of the Thurston Regional Planning Council (TRPC) Regional Intelligent Transportation Systems (ITS) : Planning Project. This document presents an ITS Strategic Depl...
Avanzino, Laura; Pelosin, Elisa; Martino, Davide; Abbruzzese, Giovanni
2013-01-01
Timing of sequential movements is altered in Parkinson disease (PD). It is still unclear whether timing deficits in internally generated sequential movements in PD also depend on difficulties in motor planning, rather than merely on a defective ability to materially perform the planned movement. To address this issue, we adopted a modified version of an established test for motor timing, the synchronization–continuation paradigm, by introducing a motor imagery task. Motor imagery is thought to involve mainly processes of movement preparation, with reduced involvement of end-stage, execution-related processes. Fourteen patients with PD and twelve matched healthy volunteers were asked to tap in synchrony with a metronome cue (SYNC) and then, when the tone stopped, to keep tapping while trying to maintain the same rhythm (CONT-EXE), or to imagine tapping at the same rhythm rather than actually performing it (CONT-MI). We tested both a sub-second and a supra-second inter-stimulus interval between the cues. Performance was recorded using a sensor-engineered glove and analyzed by measuring the temporal error and the interval reproduction accuracy index. PD patients were less accurate than healthy subjects in the supra-second time reproduction task when performing both continuation tasks (CONT-MI and CONT-EXE), whereas no difference was detected in the synchronization task or in any task involving a sub-second interval. Our findings suggest that PD patients exhibit a selective deficit in motor timing for sequential movements separated by a supra-second interval, and that this deficit may be explained by a defect of motor planning. Further, we propose that difficulties in motor planning in PD are severe enough to also affect motor performance in the supra-second time reproduction task. PMID:24086534
Orphan therapies: making best use of postmarket data.
Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling
2014-08-01
Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.
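The two summary quantities defined above are simple proportions; a minimal sketch (function names are illustrative, not from the article):

```python
def sample_size_savings_ratio(n_sequential, n_nonsequential):
    """Proportion of sample size saved by running a sequential study
    instead of a non-sequential one with the same operating characteristics."""
    return 1 - n_sequential / n_nonsequential

def calendar_time_savings_ratio(t_sequential, t_nonsequential):
    """Analogous proportion of analytic calendar time saved, where each t is
    the calendar time at which the required sample size is attained."""
    return 1 - t_sequential / t_nonsequential
```

For instance, if a sequential analysis needs 1,200 exposed patients where a non-sequential analysis needs 2,000, the savings ratio is 0.4; the article's point is that with rare diseases the calendar-time version of this saving may never be realized because exposure accrues too slowly.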
Koopmeiners, Joseph S; Feng, Ziding
2011-01-01
The receiver operating characteristic (ROC) curve, the positive predictive value (PPV) curve and the negative predictive value (NPV) curve are three measures of performance for a continuous diagnostic biomarker. The ROC, PPV and NPV curves are often estimated empirically to avoid assumptions about the distributional form of the biomarkers. Recently, there has been a push to incorporate group sequential methods into the design of diagnostic biomarker studies. A thorough understanding of the asymptotic properties of the sequential empirical ROC, PPV and NPV curves will provide more flexibility when designing group sequential diagnostic biomarker studies. In this paper we derive asymptotic theory for the sequential empirical ROC, PPV and NPV curves under case-control sampling using sequential empirical process theory. We show that the sequential empirical ROC, PPV and NPV curves converge to the sum of independent Kiefer processes and show how these results can be used to derive asymptotic results for summaries of the sequential empirical ROC, PPV and NPV curves.
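The empirical estimates discussed above can be illustrated with a toy sketch. Under case-control sampling the prevalence cannot be estimated from the data, so the conversion of (TPR, FPR) into PPV/NPV below takes it as an external input; names are illustrative:

```python
def empirical_rates(cases, controls, cutoff):
    """Empirical TPR and FPR for the rule 'positive if marker >= cutoff',
    estimated separately from the case and control samples."""
    tpr = sum(y >= cutoff for y in cases) / len(cases)
    fpr = sum(y >= cutoff for y in controls) / len(controls)
    return tpr, fpr

def ppv_npv(tpr, fpr, prevalence):
    """PPV/NPV by Bayes' rule from (TPR, FPR) and an assumed prevalence."""
    ppv = prevalence * tpr / (prevalence * tpr + (1 - prevalence) * fpr)
    npv = ((1 - prevalence) * (1 - fpr)
           / ((1 - prevalence) * (1 - fpr) + prevalence * (1 - tpr)))
    return ppv, npv

tpr, fpr = empirical_rates([3, 4, 5, 6], [1, 2, 3, 4], cutoff=4)
ppv, npv = ppv_npv(tpr, fpr, prevalence=0.1)
```

The paper's contribution is the joint asymptotic behavior of such estimates when they are recomputed at successive interim looks of a group sequential study.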
Mars habitat modules: launch, scaling and functional design considerations.
Bell, Larry; Hines, Gerald D
2005-07-01
The Sasakawa International Center for Space Architecture (SICSA) is undertaking a multi-year research, planning and design study that is exploring near- and long-term commercial space development opportunities. The central goal of this activity is to conceptualize a scenario of sequential, integrated private enterprise initiatives that can carry humankind forward to Mars. Each development stage is planned as a building block to provide the economic foundation, technology advancements and operational infrastructure to support those that follow. This report presents fundamental issues and requirements associated with planning human Mars initiatives that can transfer crews, habitats and equipment from Earth to Mars orbit, deliver them to the planet's surface, and return people and samples safely back to Earth. The study builds in part upon previous studies summarized in SICSA's Commercial Space Development Plan and Artificial Gravity Science and Excursion Vehicle reports. Information and conclusions produced in this study provide assumptions and a conceptual foundation for a subsequent report titled The First Mars Outpost: Planning and Concepts.
RBS Career Education. Evaluation Planning Manual. Education Is Going to Work.
ERIC Educational Resources Information Center
Kershner, Keith M.
Designed for use with the Research for Better Schools career education program, this evaluation planning manual focuses on procedures and issues central to planning the evaluation of an educational program. Following a statement on the need for evaluation, nine sequential steps for evaluation planning are discussed. The first two steps, program…
A Bayesian sequential design with adaptive randomization for 2-sided hypothesis test.
Yu, Qingzhao; Zhu, Lin; Zhu, Han
2017-11-01
Bayesian sequential and adaptive randomization designs are gaining popularity in clinical trials thanks to their potential to reduce the number of required participants and save resources. We propose a Bayesian sequential design with adaptive randomization rates so as to more efficiently allocate newly recruited patients to the different treatment arms. In this paper, we consider 2-arm clinical trials. Patients are allocated to the 2 arms with a randomization rate chosen to achieve minimum variance for the test statistic. Algorithms are presented to calculate the optimal randomization rate, critical values, and power for the proposed design. Sensitivity analysis is implemented to check the influence of changing the prior distributions on the design. Simulation studies are applied to compare the proposed method and traditional methods in terms of power and actual sample sizes. Simulations show that, when the total sample size is fixed, the proposed design can obtain greater power and/or a smaller actual sample size than the traditional Bayesian sequential design. Finally, we apply the proposed method to a real data set and compare the results with the Bayesian sequential design without adaptive randomization in terms of sample sizes. The proposed method can further reduce the required sample size.
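The abstract states that the randomization rate is chosen to minimize the variance of the test statistic but does not reproduce the criterion. For a difference in arm means with known arm variances, the classical minimum-variance rate is Neyman allocation, sketched here as an assumption rather than the paper's exact rule:

```python
def neyman_allocation(sigma1, sigma2):
    """Randomization rate pi to arm 1 that minimizes
    Var(xbar1 - xbar2) = sigma1^2/(n*pi) + sigma2^2/(n*(1-pi))
    for a fixed total sample size n: pi = sigma1 / (sigma1 + sigma2)."""
    return sigma1 / (sigma1 + sigma2)

def var_diff(sigma1, sigma2, n, pi):
    """Variance of the difference in arm means under allocation rate pi."""
    return sigma1 ** 2 / (n * pi) + sigma2 ** 2 / (n * (1 - pi))
```

With sigma1 = 2 and sigma2 = 1, the optimal rate is 2/3 to the noisier arm, and the resulting variance is strictly lower than under 1:1 randomization; in an adaptive design the variances are replaced by running posterior estimates as patients accrue.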
Kingsley, I.S.
1987-01-06
A process and apparatus are disclosed for the separation of complex mixtures of carbonaceous material by sequential elution with successively stronger solvents. In the process, a column containing glass beads is maintained in a fluidized state by a rapidly flowing stream of a weak solvent, and the sample is injected into this flowing stream such that a portion of the sample is dissolved therein and the remainder of the sample is precipitated therein and collected as a uniform deposit on the glass beads. Successively stronger solvents are then passed through the column to sequentially elute less soluble materials. 1 fig.
The K-ABC, Coding, and Planning: An Investigation of Cognitive Processes.
ERIC Educational Resources Information Center
Das, J. P.; And Others
1990-01-01
Evaluated elementary school students (N=198) using six mental processing subtests of the Kaufman Assessment Battery for Children (K-ABC) and three tests of planning. Results indicated orthogonal simultaneous and sequential factors emerged; the verbal simultaneous test was a good addition; and a "planning" factor identified by three…
Examining Age-Related Movement Representations for Sequential (Fine-Motor) Finger Movements
ERIC Educational Resources Information Center
Gabbard, Carl; Cacola, Priscila; Bobbio, Tatiana
2011-01-01
Theory suggests that imagined and executed movement planning relies on internal models for action. Using a chronometry paradigm to compare the movement duration of imagined and executed movements, we tested children aged 7-11 years and adults on their ability to perform sequential finger movements. Underscoring this tactic was our desire to gain a…
Sequential air sampler system : its use by the Virginia Department of Highways & Transportation.
DOT National Transportation Integrated Search
1975-01-01
The Department of Highways & Transportation needs an economical and efficient air quality sampling system for meeting requirements on air monitoring for proposed projects located In critical areas. Two sequential air sampling systems, the ERAI and th...
DOT National Transportation Integrated Search
2011-01-01
This study develops an enhanced transportation planning framework by augmenting the sequential four-step : planning process with post-processing techniques. The post-processing techniques are incorporated through a feedback : mechanism and aim to imp...
Koopmeiners, Joseph S.; Feng, Ziding
2015-01-01
Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value (NPV) curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence cannot be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known, in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curves. The small-sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation, and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180
ERIC Educational Resources Information Center
Myers, Marvin L.; And Others
Presented by the Master Planning Committee of the Colorado Department of Institutions and the Division of Developmental Disabilities is a behavior inventory of sequential skills in four areas basic to the normalization of developmentally disabled persons. Instructional objectives are listed in the following areas: physical, including perceptual…
de Oliveira, Saulo H P; Law, Eleanor C; Shi, Jiye; Deane, Charlotte M
2018-04-01
Most current de novo structure prediction methods randomly sample protein conformations and thus require large amounts of computational resource. Here, we consider a sequential sampling strategy, building on ideas from recent experimental work which shows that many proteins fold cotranslationally. We have investigated whether a pseudo-greedy search approach, which begins sequentially from one of the termini, can improve the performance and accuracy of de novo protein structure prediction. We observed that our sequential approach converges when fewer than 20 000 decoys have been produced, fewer than commonly expected. Using our software, SAINT2, we also compared the run time and quality of models produced in a sequential fashion against a standard, non-sequential approach. Sequential prediction produces an individual decoy 1.5-2.5 times faster than non-sequential prediction. When considering the quality of the best model, sequential prediction led to a better model being produced for 31 out of 41 soluble protein validation cases and for 18 out of 24 transmembrane protein cases. Correct models (TM-Score > 0.5) were produced for 29 of these cases by the sequential mode and for only 22 by the non-sequential mode. Our comparison reveals that a sequential search strategy can be used to drastically reduce computational time of de novo protein structure prediction and improve accuracy. Data are available for download from: http://opig.stats.ox.ac.uk/resources. SAINT2 is available for download from: https://github.com/sauloho/SAINT2. saulo.deoliveira@dtc.ox.ac.uk. Supplementary data are available at Bioinformatics online.
ERIC Educational Resources Information Center
Peters, Richard
A model for Continuous-Integrated-Sequential (C/I/S) curricula for social studies education is presented. The design advocated involves ensuring continuity of instruction from grades K-12, an integration of social studies disciplines, and a sequential process of refining and reinforcing concept and skills from grade-to-grade along the K-12…
Gönner, Lorenz; Vitay, Julien; Hamker, Fred H.
2017-01-01
Hippocampal place-cell sequences observed during awake immobility often represent previous experience, suggesting a role in memory processes. However, recent reports of goals being overrepresented in sequential activity suggest a role in short-term planning, although a detailed understanding of the origins of hippocampal sequential activity and of its functional role is still lacking. In particular, it is unknown which mechanism could support efficient planning by generating place-cell sequences biased toward known goal locations, in an adaptive and constructive fashion. To address these questions, we propose a model of spatial learning and sequence generation as interdependent processes, integrating cortical contextual coding, synaptic plasticity and neuromodulatory mechanisms into a map-based approach. Following goal learning, sequential activity emerges from continuous attractor network dynamics biased by goal memory inputs. We apply Bayesian decoding on the resulting spike trains, allowing a direct comparison with experimental data. Simulations show that this model (1) explains the generation of never-experienced sequence trajectories in familiar environments, without requiring virtual self-motion signals, (2) accounts for the bias in place-cell sequences toward goal locations, (3) highlights their utility in flexible route planning, and (4) provides specific testable predictions. PMID:29075187
ERIC Educational Resources Information Center
Sullins, Walter L.
Five-hundred dichotomously scored response patterns were generated with sequentially independent (SI) items and 500 with dependent (SD) items for each of thirty-six combinations of sampling parameters (i.e., three test lengths, three sample sizes, and four item difficulty distributions). KR-20, KR-21, and Split-Half (S-H) reliabilities were…
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento. Bureau of School Planning.
A floor plan accompanies each of six chronologically arranged schemes for housing educational programs. Scheme A represents the in-line corridor plan whose main characteristics are--(1) double loaded corridors with fixed bearing walls, (2) single window walls providing minimal light and ventilation, and (3) small classrooms with fixed desks and…
Gary Bentrup
2001-01-01
Collaborative planning processes have become increasingly popular for addressing environmental planning issues, resulting in a number of conceptual models for collaboration. A model proposed by Selin and Chavez suggests that collaboration emerges from a series of antecedents and then proceeds sequentially through problem-setting, direction-setting, implementation, and...
Adrenal vein sampling in primary aldosteronism: concordance of simultaneous vs sequential sampling.
Almarzooqi, Mohamed-Karji; Chagnon, Miguel; Soulez, Gilles; Giroux, Marie-France; Gilbert, Patrick; Oliva, Vincent L; Perreault, Pierre; Bouchard, Louis; Bourdeau, Isabelle; Lacroix, André; Therasse, Eric
2017-02-01
Many investigators believe that basal adrenal venous sampling (AVS) should be done simultaneously, whereas others opt for sequential AVS for simplicity and reduced cost. This study aimed to evaluate the concordance of sequential and simultaneous AVS methods. Between 1989 and 2015, bilateral simultaneous sets of basal AVS were obtained twice within 5 min, in 188 consecutive patients (59 women and 129 men; mean age: 53.4 years). Selectivity was defined by an adrenal-to-peripheral cortisol ratio ≥2, and lateralization was defined as an adrenal aldosterone-to-cortisol ratio ≥2 times that of the contralateral side. Sequential AVS was simulated using right sampling at -5 min (t = -5) and left sampling at 0 min (t = 0). There was no significant difference in the mean selectivity ratio (P = 0.12 and P = 0.42 for the right and left sides, respectively) or in the mean lateralization ratio (P = 0.93) between t = -5 and t = 0. Kappa for selectivity between the 2 simultaneous AVS was 0.71 (95% CI: 0.60-0.82), whereas it was 0.84 (95% CI: 0.76-0.92) and 0.85 (95% CI: 0.77-0.93) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Kappa for lateralization between the 2 simultaneous AVS was 0.84 (95% CI: 0.75-0.93), whereas it was 0.86 (95% CI: 0.78-0.94) and 0.80 (95% CI: 0.71-0.90) between sequential and simultaneous AVS at -5 min and at 0 min, respectively. Concordance between simultaneous and sequential AVS was not different from that between 2 repeated simultaneous AVS in the same patient. Therefore, better diagnostic performance is not a good argument for selecting one AVS method over the other.
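The kappa values reported above measure chance-corrected agreement between two classification methods. A minimal sketch of Cohen's kappa for a 2x2 agreement table (cell names are illustrative):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
        a = both methods 'yes', b = method 1 'yes' / method 2 'no',
        c = method 1 'no' / method 2 'yes', d = both methods 'no'.
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    n = a + b + c + d
    p_observed = (a + d) / n
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)
```

For example, 80% raw agreement with balanced 50/50 marginals (a = d = 40, b = c = 10) gives kappa = 0.6, i.e. moderate-to-good agreement on the scale used in the abstract.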
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling schemes such as random equivalent sampling (RES) to improve reconstruction efficiency. However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time interleaved. A prototype realization of the proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.
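The block measurement matrix built from the Whittaker-Shannon interpolation formula can be sketched as follows. This is a pure-Python toy expressing samples taken at arbitrary times as linear measurements of a uniform fine grid; the paper's actual acquisition model and CS reconstruction are not reproduced:

```python
import math

def sinc(x):
    """Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def interpolation_matrix(sample_times, grid_step, n_grid):
    """Block measurement matrix M with M[i][n] = sinc((t_i - n*T)/T), so a
    bandlimited signal sampled at times t_i satisfies y_i = sum_n M[i][n]*x[n],
    where x[n] are the values on the uniform grid of step T (Whittaker-Shannon)."""
    return [[sinc((t - n * grid_step) / grid_step) for n in range(n_grid)]
            for t in sample_times]
```

When a sampling instant lands exactly on a grid point the corresponding row reduces to a one-hot vector, as expected; stacking one such block per acquisition run yields the combined equivalent measurement matrix described in the abstract.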
Group-sequential three-arm noninferiority clinical trial designs
Ochiai, Toshimitsu; Hamasaki, Toshimitsu; Evans, Scott R.; Asakura, Koko; Ohno, Yuko
2016-01-01
We discuss group-sequential three-arm noninferiority clinical trial designs that include active and placebo controls for evaluating both assay sensitivity and noninferiority. We extend two existing approaches, the fixed margin and fraction approaches, into a group-sequential setting with two decision-making frameworks. We investigate the operating characteristics, including power, Type I error rate, and maximum and expected sample sizes, as design factors vary. In addition, we discuss sample size recalculation and its impact on the power and Type I error rate via a simulation study. PMID:26892481
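As an illustration of the fixed margin approach named above, a single-look (final-analysis) noninferiority z-test for the test-vs-active-control comparison might look like the sketch below. This is a hedged toy with hypothetical numbers; the paper's group-sequential boundaries and the assay-sensitivity (active-vs-placebo) test are not reproduced:

```python
from math import sqrt
from statistics import NormalDist

def noninferiority_z(mean_trt, mean_ctrl, sd, n_trt, n_ctrl, margin):
    """One-sided z statistic for H0: mu_T - mu_C <= -margin (larger is better).
    Noninferiority is concluded when z exceeds the one-sided critical value."""
    se = sd * sqrt(1 / n_trt + 1 / n_ctrl)
    return (mean_trt - mean_ctrl + margin) / se

# Hypothetical numbers: test arm slightly worse, but within a 1.0-unit margin.
z = noninferiority_z(9.8, 10.0, sd=2.0, n_trt=200, n_ctrl=200, margin=1.0)
crit = NormalDist().inv_cdf(1 - 0.025)  # one-sided alpha = 0.025
print(z > crit)  # noninferiority concluded for these toy numbers
```

In the group-sequential setting of the paper, this comparison is repeated at interim looks with boundaries chosen so that the overall Type I error rate is preserved.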
Funnel Libraries for Real-Time Robust Feedback Motion Planning
2016-07-21
motion plans for a robot that are guaranteed to succeed despite uncertainty in the environment, parametric model uncertainty, and disturbances… resulting funnel library is then used to sequentially compose motion plans at runtime while ensuring the safety of the robot. A major advantage of… the work presented here is that by explicitly taking into account the effect of uncertainty, the robot can evaluate motion plans based on how vulnerable
ERIC Educational Resources Information Center
Gooyers, Cobina; And Others
Designed for teachers to provide students with an awareness of the world of nature which surrounds them, the manual presents the philosophy of outdoor education, goals and objectives of the school program, planning for outdoor education, the Wildwood Programs, sequential program planning for students, program booking and resource list. Content…
Concurrent planning and execution for a walking robot
NASA Astrophysics Data System (ADS)
Simmons, Reid
1990-07-01
The Planetary Rover project is developing the Ambler, a novel legged robot, and an autonomous software system for walking the Ambler over rough terrain. As part of the project, we have developed a system that integrates perception, planning, and real-time control to navigate a single leg of the robot through complex obstacle courses. The system is integrated using the Task Control Architecture (TCA), a general-purpose set of utilities for building and controlling distributed mobile robot systems. The walking system, as originally implemented, utilized a sequential sense-plan-act control cycle. This report describes efforts to improve the performance of the system by concurrently planning and executing steps. Concurrency was achieved by modifying the existing sequential system to utilize TCA features such as resource management, monitors, temporal constraints, and hierarchical task trees. Performance was increased in excess of 30 percent with only a relatively modest effort to convert and test the system. The results lend support to the utility of using TCA to develop complex mobile robot systems.
Microscope-Based Fluid Physics Experiments in the Fluids and Combustion Facility on ISS
NASA Technical Reports Server (NTRS)
Doherty, Michael P.; Motil, Susan M.; Snead, John H.; Malarik, Diane C.
2000-01-01
At the NASA Glenn Research Center, the Microgravity Science Program is planning to conduct a large number of experiments on the International Space Station in both the Fluid Physics and Combustion Science disciplines, and is developing flight experiment hardware for use within the International Space Station's Fluids and Combustion Facility. Four fluids physics experiments that require an optical microscope will be sequentially conducted within a subrack payload to the Fluids Integrated Rack of the Fluids and Combustion Facility called the Light Microscopy Module, which will provide the containment, changeout, and diagnostic capabilities to perform the experiments. The Light Microscopy Module is planned as a fully remotely controllable on-orbit microscope facility, allowing flexible scheduling and control of experiments within International Space Station resources. This paper will focus on the four microscope-based experiments, specifically, their objectives and the sample cell and instrument hardware to accommodate their requirements.
NASA Technical Reports Server (NTRS)
Habibi, A.; Batson, B.
1976-01-01
Space Shuttle will be using a field-sequential color television system for the first few missions, but present plans are to switch to an NTSC color TV system for future missions. The field-sequential color TV system uses a modified black-and-white camera, producing a TV signal with a digital bandwidth of about 60 Mbps. This article discusses the characteristics of the Shuttle TV systems and proposes a bandwidth-compression technique for the field-sequential color TV system that could operate at 13 Mbps to produce a high-fidelity signal. The proposed bandwidth-compression technique is based on a two-dimensional DPCM system that utilizes the temporal, spectral, and spatial correlation inherent in field-sequential color TV imagery. The proposed system requires about 60 watts and fewer than 200 integrated circuits.
GOST: A generic ordinal sequential trial design for a treatment trial in an emerging pandemic.
Whitehead, John; Horby, Peter
2017-03-01
Conducting clinical trials to assess experimental treatments for potentially pandemic infectious diseases is challenging. Since many outbreaks of infectious diseases last only six to eight weeks, there is a need for trial designs that can be implemented rapidly in the face of uncertainty. Outbreaks are sudden and unpredictable and so it is essential that as much planning as possible takes place in advance. Statistical aspects of such trial designs should be evaluated and discussed in readiness for implementation. This paper proposes a generic ordinal sequential trial design (GOST) for a randomised clinical trial comparing an experimental treatment for an emerging infectious disease with standard care. The design is intended as an off-the-shelf, ready-to-use robust and flexible option. The primary endpoint is a categorisation of patient outcome according to an ordinal scale. A sequential approach is adopted, stopping as soon as it is clear that the experimental treatment has an advantage or that sufficient advantage is unlikely to be detected. The properties of the design are evaluated using large-sample theory and verified for moderate sized samples using simulation. The trial is powered to detect a generic clinically relevant difference: namely an odds ratio of 2 for better rather than worse outcomes. Total sample sizes (across both treatments) of between 150 and 300 patients prove to be adequate in many cases, but the precise value depends on both the magnitude of the treatment advantage and the nature of the ordinal scale. An advantage of the approach is that any erroneous assumptions made at the design stage about the proportion of patients falling into each outcome category have little effect on the error probabilities of the study, although they can lead to inaccurate forecasts of sample size. It is important and feasible to pre-determine many of the statistical aspects of an efficient trial design in advance of a disease outbreak. 
The design can then be tailored to the specific disease under study once its nature is better understood.
Forest management planning for timber production: a sequential approach
Krishna P. Rustagi
1978-01-01
Explicit forest management planning for timber production beyond the first few years necessitates the use of information which can best be described as suspect. The two-step approach outlined here concentrates on the planning strategy over the next few years without losing sight of long-run productivity. Frequent updating of the long-range and short-range…
Action Planning in Typically and Atypically Developing Children (Unilateral Cerebral Palsy)
ERIC Educational Resources Information Center
Craje, Celine; Aarts, Pauline; Nijhuis-van der Sanden, Maria; Steenbergen, Bert
2010-01-01
In the present study, we investigated the development of action planning in children with unilateral Cerebral Palsy (CP, aged 3-6 years, n = 24) and an age matched control group. To investigate action planning, participants performed a sequential movement task. They had to grasp an object (a wooden play sword) and place the sword in a hole in a…
Sequential sampling: a novel method in farm animal welfare assessment.
Heath, C A E; Main, D C J; Mullan, S; Haskell, M J; Browne, W J
2016-02-01
Lameness in dairy cows is an important welfare issue. As part of a welfare assessment, herd level lameness prevalence can be estimated from scoring a sample of animals, where higher levels of accuracy are associated with larger sample sizes. As the financial cost is related to the number of cows sampled, smaller samples are preferred. Sequential sampling schemes have been used for informing decision making in clinical trials. Sequential sampling involves taking samples in stages, where sampling can stop early depending on the estimated lameness prevalence. When welfare assessment is used for a pass/fail decision, a similar approach could be applied to reduce the overall sample size. The sampling schemes proposed here apply the principles of sequential sampling within a diagnostic testing framework. This study develops three sequential sampling schemes of increasing complexity to classify 80 fully assessed UK dairy farms, each with known lameness prevalence. Using the Welfare Quality herd-size-based sampling scheme, the first 'basic' scheme involves two sampling events. At the first sampling event half the Welfare Quality sample size is drawn, and then depending on the outcome, sampling either stops or is continued and the same number of animals is sampled again. In the second 'cautious' scheme, an adaptation is made to ensure that correctly classifying a farm as 'bad' is done with greater certainty. The third scheme is the only scheme to go beyond lameness as a binary measure and investigates the potential for increasing accuracy by incorporating the number of severely lame cows into the decision. The three schemes are evaluated with respect to accuracy and average sample size by running 100 000 simulations for each scheme, and a comparison is made with the fixed size Welfare Quality herd-size-based sampling scheme. All three schemes performed almost as well as the fixed size scheme but with much smaller average sample sizes. 
For the third scheme, an overall association between lameness prevalence and the proportion of lame cows that were severely lame on a farm was found. However, as this association was found to not be consistent across all farms, the sampling scheme did not prove to be as useful as expected. The preferred scheme was therefore the 'cautious' scheme for which a sampling protocol has also been developed.
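The 'basic' two-stage scheme can be illustrated with a small simulation. This is a hypothetical sketch, not the published protocol: the herd size, pass/fail threshold, and early-stopping margin below are invented for illustration, and the real scheme sizes samples by herd size per Welfare Quality rules.

```python
import random

def basic_two_stage(herd, full_n, threshold, stop_margin, rng):
    """Two-stage ('basic') scheme sketch: score half the fixed sample; if the
    interim lameness estimate is far enough from the pass/fail threshold,
    classify immediately; otherwise score the second half and use all animals.
    `threshold` and `stop_margin` are illustrative, not published values."""
    half = full_n // 2
    sample = rng.sample(herd, min(full_n, len(herd)))
    p1 = sum(sample[:half]) / half               # interim prevalence estimate
    if abs(p1 - threshold) > stop_margin:        # clear-cut: stop early
        return p1 > threshold, half
    p2 = sum(sample) / len(sample)               # otherwise use the full sample
    return p2 > threshold, len(sample)

rng = random.Random(1)
true_prev = 0.30                                 # a clearly 'bad' herd
herd = [1 if rng.random() < true_prev else 0 for _ in range(200)]
fails, sizes = [], []
for _ in range(2000):
    fail, n = basic_two_stage(herd, full_n=60, threshold=0.15,
                              stop_margin=0.10, rng=rng)
    fails.append(fail)
    sizes.append(n)
avg_n = sum(sizes) / len(sizes)
```

For a herd far from the threshold, most runs stop at the interim look, so the average sample size falls well below the fixed-size scheme's 60 animals while the classification stays essentially unchanged.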
Naik, Umesh Chandra; Das, Mihir Tanay; Sauran, Swati; Thakur, Indu Shekhar
2014-03-01
The present study compares the in vitro toxicity of electroplating effluent after a batch treatment process with that obtained after a sequential treatment process. Activated charcoal prepared from sugarcane bagasse through chemical carbonization, and a tolerant indigenous bacterium, Bacillus sp. strain IST105, were used individually and sequentially for the treatment of electroplating effluent. The sequential treatment involving activated charcoal followed by bacterial treatment removed 99% of Cr(VI), compared with the batch processes, which removed 40% (charcoal) and 75% (bacteria). Post-treatment in vitro cyto/genotoxicity was evaluated by the MTT test and the comet assay in human HuH-7 hepatocarcinoma cells. The sequentially treated sample showed an increase in LC50 value with a 6-fold decrease in comet-assay DNA migration compared with that of untreated samples. A significant decrease in DNA migration and an increase in the LC50 value of treated effluent demonstrated the higher effectiveness of the sequential treatment process over the individual batch processes.
van Staden, J F; Mashamba, Mulalo G; Stefan, Raluca I
2002-09-01
An on-line potentiometric sequential injection titration process analyser for the determination of acetic acid is proposed. A solution of 0.1 mol L(-1) sodium chloride is used as carrier. Titration is achieved by aspirating acetic acid samples between two strong base-zone volumes into a holding coil and by channelling the stack of well-defined zones with flow reversal through a reaction coil to a potentiometric sensor where the peak widths were measured. A linear relationship between peak width and logarithm of the acid concentration was obtained in the range 1-9 g/100 mL. Vinegar samples were analysed without any sample pre-treatment. The method has a relative standard deviation of 0.4% with a sample frequency of 28 samples per hour. The results revealed good agreement between the proposed sequential injection and an automated batch titration method.
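The calibration relationship reported above, peak width linear in the logarithm of acid concentration, can be sketched as an ordinary least-squares fit. The data below are synthetic and noise-free; only the functional form follows the abstract, and the slope and intercept values are made up.

```python
import math

# Hypothetical calibration data: peak width (s) versus acetic acid
# concentration (g/100 mL). The width-vs-log(C) linearity follows the
# abstract; the numbers are invented for illustration.
conc = [1, 2, 3, 5, 7, 9]
width = [10.0 + 18.0 * math.log10(c) for c in conc]  # ideal, noise-free

x = [math.log10(c) for c in conc]
n = len(x)
xbar = sum(x) / n
ybar = sum(width) / n
slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, width)) / \
        sum((xi - xbar) ** 2 for xi in x)
intercept = ybar - slope * xbar

def concentration_from_width(w):
    """Invert the calibration line to read a concentration off a peak width."""
    return 10 ** ((w - intercept) / slope)
```

In practice the line would be fit to measured standards and then used to read unknown vinegar samples directly from their recorded peak widths.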
ERIC Educational Resources Information Center
Mathinos, Debra A.; Leonard, Ann Scheier
The study examines the use of LOGO, a computer language, with 19 learning disabled (LD) and 19 non-LD students in grades 4-6. Ss were randomly assigned to one of two instructional groups: sequential or whole-task, each with 10 LD and 10 non-LD students. The sequential method features a carefully ordered plan for teaching LOGO commands; the…
NASA Astrophysics Data System (ADS)
Liu, Wei; Ma, Shunjian; Sun, Mingwei; Yi, Haidong; Wang, Zenghui; Chen, Zengqiang
2016-08-01
Path planning plays an important role in aircraft guided systems. Multiple no-fly zones in the flight area make path planning a constrained nonlinear optimization problem. It is necessary to obtain a feasible optimal solution in real time. In this article, the flight path is specified to be composed of alternate line segments and circular arcs, in order to reformulate the problem into a static optimization one in terms of the waypoints. For the commonly used circular and polygonal no-fly zones, geometric conditions are established to determine whether or not the path intersects with them, and these can be readily programmed. Then, the original problem is transformed into a form that can be solved by the sequential quadratic programming method. The solution can be obtained quickly using the Sparse Nonlinear OPTimizer (SNOPT) package. Mathematical simulations are used to verify the effectiveness and rapidity of the proposed algorithm.
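For the circular no-fly zones, the intersection condition can be sketched with the standard point-to-segment distance construction; this is an assumed, textbook formulation, since the abstract does not give the paper's exact conditions.

```python
import math

def segment_intersects_circle(p, q, center, radius):
    """True if the line segment p-q passes through a circular no-fly zone.
    Finds the point on the segment closest to the circle centre and compares
    its distance with the radius."""
    px, py = p
    qx, qy = q
    cx, cy = center
    dx, dy = qx - px, qy - py
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0.0:                       # degenerate segment: a point
        t = 0.0
    else:                                     # clamp projection onto [0, 1]
        t = max(0.0, min(1.0, ((cx - px) * dx + (cy - py) * dy) / seg_len2))
    nx, ny = px + t * dx, py + t * dy         # closest point on the segment
    return math.hypot(nx - cx, ny - cy) < radius
```

A condition like this can be evaluated for every leg of the candidate waypoint path and supplied to the SQP solver as an inequality constraint (kept smooth in practice by using the squared-distance form).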
Lee, Seonah
2013-10-01
This study aimed to organize the system features of decision support technologies targeted at nursing practice into assessment, problem identification, care plans, implementation, and outcome evaluation. It also aimed to identify the range of these five stage-related sequential decision supports that computerized clinical decision support systems provide. MEDLINE, CINAHL, and EMBASE were searched. A total of 27 studies were reviewed. The system features collected represented the characteristics of each category from patient assessment to outcome evaluation. Several features were common across the reviewed systems. For sequential decision support, all of the reviewed systems provided decision support in sequence for patient assessment and care plans. Fewer than half of the systems included problem identification. Only three systems operated in the implementation stage and four in outcome evaluation. Consequently, the key steps for sequential decision support functions were initial patient assessment, problem identification, care planning, and outcome evaluation. Providing decision support across this full scope would effectively help nurses' clinical decision making. By organizing the system features, a comprehensive picture of nursing practice-oriented computerized decision support systems was obtained; however, the development of a guideline for better systems should go beyond the scope of a literature review.
The Evolution of Gene Regulatory Networks that Define Arthropod Body Plans.
Auman, Tzach; Chipman, Ariel D
2017-09-01
Our understanding of the genetics of arthropod body plan development originally stems from work on Drosophila melanogaster from the late 1970s onward. In Drosophila, there is a relatively detailed model for the network of gene interactions that proceeds in a sequential-hierarchical fashion to define the main features of the body plan. Over the years, a growing understanding of the networks involved in defining the body plan has emerged for an increasing number of arthropod species. It is now becoming possible to tease out the conserved aspects of these networks and to try to reconstruct their evolution. In this contribution, we focus on several key nodes of these networks, starting from early patterning, in which the main axes are determined and the broad morphological domains of the embryo are defined, and on to later stages, wherein the growth zone network is active in sequential addition of posterior segments. The pattern of conservation of networks is very patchy, with some key aspects being highly conserved in all arthropods and others being very labile. Many aspects of early axis patterning are highly conserved, as are some aspects of sequential segment generation. In contrast, regional patterning varies among different taxa, and some networks, such as the terminal patterning network, are only found in a limited range of taxa. The growth zone segmentation network is ancient and is probably plesiomorphic to all arthropods. In some insects, it has undergone significant modification to give rise to a more hardwired network that generates individual segments separately. In other insects and in most arthropods, the sequential segmentation network has undergone a significant amount of systems drift, wherein many of the genes have changed. However, it maintains a conserved underlying logic and function.
ERIC Educational Resources Information Center
van Maanen, Leendert; van Rijn, Hedderik; Taatgen, Niels
2012-01-01
This article discusses how sequential sampling models can be integrated in a cognitive architecture. The new theory Retrieval by Accumulating Evidence in an Architecture (RACE/A) combines the level of detail typically provided by sequential sampling models with the level of task complexity typically provided by cognitive architectures. We will use…
Environmental Development cum Forest Plantation Planning and Management.
ERIC Educational Resources Information Center
Katoch, C. D.
This textbook covers environmental conservation through forest plantation planning and management for all levels of forestry professionals and non-professionals in India and abroad. The book is divided into six parts and 29 sections in sequential order. Part I contains details on site selection, site preparations, site clearance, layout, and…
Preparing the Teacher of Tomorrow
ERIC Educational Resources Information Center
Hemp, Paul E.
1976-01-01
Suggested ways of planning and conducting high quality teacher preparation programs are discussed under major headings of student selection, sequential courses and experiences, and program design. (HD)
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased, but is less efficient than the sample average and has the larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased.
We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification to the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.
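The "false impression of bias" point can be illustrated with a small simulation of an invented one-look stopping rule (the boundary and sample sizes below are not from the paper): conditional on the realized sample size the sample averages look clearly biased, while the average over all trials stays close to the true mean.

```python
import random
import statistics

def run_trial(rng, mu, n1, n2, boundary):
    """One trial of an illustrative group sequential design: look after n1
    observations and stop early if the interim mean crosses `boundary`,
    otherwise continue to the maximum size n2. Rule and numbers are
    hypothetical, not the paper's."""
    data = [rng.gauss(mu, 1.0) for _ in range(n2)]
    if statistics.fmean(data[:n1]) > boundary:
        return data[:n1]
    return data

rng = random.Random(7)
mu, n1, n2, boundary = 0.0, 20, 40, 0.2
means_by_n = {n1: [], n2: []}
for _ in range(20000):
    d = run_trial(rng, mu, n1, n2, boundary)
    means_by_n[len(d)].append(statistics.fmean(d))   # sample average per trial

# Conditional on N: early-stopped trials overestimate mu, continued trials
# underestimate it. Marginally, the sample average stays near mu.
marginal = statistics.fmean(m for ms in means_by_n.values() for m in ms)
```

Every early-stopped trial's mean exceeds the boundary by construction, so conditioning on N = n1 necessarily looks biased upward even though the estimator behaves well marginally.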
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Man, Jun; Zhang, Jiangjiang; Li, Weixuan
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, a larger ensemble size improves the parameter estimation and convergence of the optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied to any other hydrological problems.
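For Gaussian ensembles, the Shannon entropy difference (SD) metric reduces to a log-determinant comparison of prior and posterior sample covariances. The sketch below shows only that metric, with a stand-in "posterior" ensemble; the EnKF update and the design loop themselves are omitted, and the ensemble sizes are illustrative.

```python
import numpy as np

def gaussian_entropy(cov):
    """Differential entropy of a multivariate Gaussian with covariance `cov`:
    0.5 * (k * ln(2*pi*e) + ln det(cov))."""
    k = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)
    return 0.5 * (k * np.log(2 * np.pi * np.e) + logdet)

def entropy_difference(prior_ens, post_ens):
    """Shannon entropy difference (SD) between prior and posterior parameter
    ensembles (rows = parameters, columns = members), via sample covariances.
    A larger SD means the assimilated data were more informative."""
    return gaussian_entropy(np.cov(prior_ens)) - gaussian_entropy(np.cov(post_ens))

rng = np.random.default_rng(3)
prior = rng.normal(0.0, 1.0, size=(2, 500))   # 2 parameters, 500 members
post = 0.4 * prior                            # stand-in: tighter after update
sd = entropy_difference(prior, post)
```

In a design loop, SD would be evaluated for each candidate measurement location and the location with the largest expected information gain chosen next.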
ERIC Educational Resources Information Center
Jacobson, Peggy F.; Walden, Patrick R.
2013-01-01
Purpose: This study explored the utility of language sample analysis for evaluating language ability in school-age Spanish-English sequential bilingual children. Specifically, the relative potential of lexical diversity and word/morpheme omission as predictors of typical or atypical language status was evaluated. Method: Narrative samples were…
ERIC Educational Resources Information Center
Green, Samuel B.; Thompson, Marilyn S.; Levy, Roy; Lo, Wen-Juo
2015-01-01
Traditional parallel analysis (T-PA) estimates the number of factors by sequentially comparing sample eigenvalues with eigenvalues for randomly generated data. Revised parallel analysis (R-PA) sequentially compares the "k"th eigenvalue for sample data to the "k"th eigenvalue for generated data sets, conditioned on "k"-…
40 CFR 53.34 - Test procedure for methods for PM10 and Class I methods for PM2.5.
Code of Federal Regulations, 2010 CFR
2010-07-01
... simultaneous PM10 or PM2.5 measurements as necessary (see table C-4 of this subpart), each set consisting of...) in appendix A to this subpart). (f) Sequential samplers. For sequential samplers, the sampler shall be configured for the maximum number of sequential samples and shall be set for automatic collection...
Reduction of display artifacts by random sampling
NASA Technical Reports Server (NTRS)
Ahumada, A. J., Jr.; Nagel, D. C.; Watson, A. B.; Yellott, J. I., Jr.
1983-01-01
The application of random-sampling techniques to remove visible artifacts (such as flicker, moire patterns, and paradoxical motion) introduced in TV-type displays by discrete sequential scanning is discussed and demonstrated. Sequential-scanning artifacts are described; the window of visibility defined in spatiotemporal frequency space by Watson and Ahumada (1982 and 1983) and Watson et al. (1983) is explained; the basic principles of random sampling are reviewed and illustrated by the case of the human retina; and it is proposed that the sampling artifacts can be replaced by random noise, which can then be shifted to frequency-space regions outside the window of visibility. Vertical sequential, single-random-sequence, and continuously renewed random-sequence plotting displays generating 128 points at update rates up to 130 Hz are applied to images of stationary and moving lines, and best results are obtained with the single random sequence for the stationary lines and with the renewed random sequence for the moving lines.
Makrakis, Vassilios; Kostoulas-Makrakis, Nelly
2016-02-01
Quantitative and qualitative approaches to planning and evaluation in education for sustainable development have often been treated by practitioners from a single research paradigm. This paper discusses the utility of mixed method evaluation designs which integrate qualitative and quantitative data through a sequential transformative process. Sequential mixed method data collection strategies involve collecting data in an iterative process whereby data collected in one phase contribute to data collected in the next. This is done through examples from a programme addressing the 'Reorientation of University Curricula to Address Sustainability (RUCAS): A European Commission Tempus-funded Programme'. It is argued that the two approaches are complementary and that there are significant gains from combining both. Using methods from both research paradigms does not, however, mean that the inherent differences among epistemologies and methodologies should be neglected. Based on this experience, it is recommended that using a sequential transformative mixed method evaluation can produce more robust results than could be accomplished using a single approach in programme planning and evaluation focussed on education for sustainable development.
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
ERIC Educational Resources Information Center
Jackson, LeKeisha D.
2017-01-01
Guided by the research questions, this study utilized a sequential explanatory mixed methods research design to examine senior executive leadership succession planning at four-year, predominately white, doctoral universities in the state of Georgia. Utilizing the Representative Bureaucracy theory and the Mateso SPM conceptual model, this study…
ERIC Educational Resources Information Center
Wonser, Renee; Kohns, Donald
Growing out of a project to improve administrative and support services for North Dakota student vocational organizations, this manual outlines possible duties and responsibilities of a state-level student vocational organizational coordinator based on a sequential planning and implementation system. The manual is divided into four areas with…
A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions
Pan, Guang; Ye, Pengcheng; Yang, Zhidong
2014-01-01
Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely, extremum points of the metamodel and minimum points of a density function. More accurate metamodels are then constructed by repeating this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206
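A minimal sketch of the general idea, under stated assumptions: a Gaussian radial-basis-function interpolant on 1-D data, refined by adding points where the current metamodel errs most. Selecting points by true error requires evaluating the underlying function and is only a stand-in for the paper's extremum and density-function criteria, which the abstract does not fully specify.

```python
import numpy as np

def fit_rbf(x, y, eps=2.0):
    """Interpolating Gaussian-RBF metamodel on 1-D samples (x, y).
    A sketch of the metamodel only; not the paper's full method."""
    A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    w = np.linalg.solve(A, y)
    return lambda t: np.exp(
        -(eps * (np.atleast_1d(t)[:, None] - x[None, :])) ** 2) @ w

f = lambda t: np.sin(3 * t)          # stand-in for an expensive simulator
x = np.linspace(0.0, 2.0, 5)         # small initial design
for _ in range(3):                   # sequential refinement loop
    model = fit_rbf(x, f(x))
    grid = np.linspace(0.0, 2.0, 201)
    # Illustration only: place the next sample at the largest true error.
    worst = grid[np.argmax(np.abs(model(grid) - f(grid)))]
    x = np.sort(np.append(x, worst))
model = fit_rbf(x, f(x))
```

Each refinement step rebuilds the metamodel on the enlarged design, mirroring the repeated-construction procedure the abstract describes.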
Somnam, Sarawut; Jakmunee, Jaroon; Grudpan, Kate; Lenghor, Narong; Motomizu, Shoji
2008-12-01
An automated hydrodynamic sequential injection (HSI) system with spectrophotometric detection was developed. Thanks to the hydrodynamic injection principle, simple devices can be used for introducing reproducible microliter volumes of both sample and reagent into the flow channel to form stacked zones, in a similar fashion to those in a sequential injection system. The zones were then pushed to the detector and a peak profile was recorded. The determination of nitrite and nitrate in water samples employing the Griess reaction was chosen as a model. Calibration graphs with linearity in the range of 0.7–40 µM were obtained for both nitrite and nitrate. Detection limits were found to be 0.3 µM NO₂⁻ and 0.4 µM NO₃⁻, respectively, with a sample throughput of 20 h⁻¹ for consecutive determination of both species. The developed system was successfully applied to the analysis of water samples, employing simple and cost-effective instrumentation and offering a high degree of automation and low chemical consumption.
Use of exocentric and egocentric representations in the concurrent planning of sequential saccades.
Sharika, K M; Ramakrishnan, Arjun; Murthy, Aditya
2014-11-26
The concurrent planning of sequential saccades offers a simple model to study the nature of visuomotor transformations since the second saccade vector needs to be remapped to foveate the second target following the first saccade. Remapping is thought to occur through egocentric mechanisms involving an efference copy of the first saccade that is available around the time of its onset. In contrast, an exocentric representation of the second target relative to the first target, if available, can be used to directly code the second saccade vector. While human volunteers performed a modified double-step task, we examined the role of exocentric encoding in concurrent saccade planning by shifting the first target location well before the efference copy could be used by the oculomotor system. The impact of the first target shift on concurrent processing was tested by examining the end-points of second saccades following a shift of the second target during the first saccade. The frequency of second saccades to the old versus new location of the second target, as well as the propagation of first saccade localization errors, both indices of concurrent processing, were found to be significantly reduced in trials with the first target shift compared to those without it. A similar decrease in concurrent processing was obtained when we shifted the first target but kept constant the second saccade vector. Overall, these results suggest that the brain can use relatively stable visual landmarks, independent of efference copy-based egocentric mechanisms, for concurrent planning of sequential saccades.
Heuristic and optimal policy computations in the human brain during sequential decision-making.
Korn, Christoph W; Bach, Dominik R
2018-01-23
Optimal decisions across extended time horizons require value calculations over multiple probabilistic future states. Humans may circumvent such complex computations by resorting to easy-to-compute heuristics that approximate optimal solutions. To probe the potential interplay between heuristic and optimal computations, we develop a novel sequential decision-making task, framed as virtual foraging in which participants have to avoid virtual starvation. Rewards depend only on final outcomes over five-trial blocks, necessitating planning over five sequential decisions and probabilistic outcomes. Here, we report model comparisons demonstrating that participants primarily rely on the best available heuristic but also use the normatively optimal policy. FMRI signals in medial prefrontal cortex (MPFC) relate to heuristic and optimal policies and associated choice uncertainties. Crucially, reaction times and dorsal MPFC activity scale with discrepancies between heuristic and optimal policies. Thus, sequential decision-making in humans may emerge from integration between heuristic and optimal policies, implemented by controllers in MPFC.
Decision making and sequential sampling from memory
Shadlen, Michael N.; Shohamy, Daphna
2016-01-01
Decisions take time, and as a rule more difficult decisions take more time. But this only raises the question of what consumes the time. For decisions informed by a sequence of samples of evidence, the answer is straightforward: more samples are available with more time. Indeed the speed and accuracy of such decisions are explained by the accumulation of evidence to a threshold or bound. However, the same framework seems to apply to decisions that are not obviously informed by sequences of evidence samples. Here we proffer the hypothesis that the sequential character of such tasks involves retrieval of evidence from memory. We explore this hypothesis by focusing on value-based decisions and argue that mnemonic processes can account for regularities in choice and decision time. We speculate on the neural mechanisms that link sampling of evidence from memory to circuits that represent the accumulated evidence bearing on a choice. We propose that memory processes may contribute to a wider class of decisions that conform to the regularities of choice-reaction time predicted by the sequential sampling framework. PMID:27253447
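The bound-crossing account sketched in this abstract can be illustrated with a toy sequential-sampling simulation (an illustrative sketch with invented drift, bound, and noise values, not the authors' model): noisy evidence samples are accumulated to a symmetric bound, so weaker evidence yields slower and less accurate decisions.

```python
import numpy as np

rng = np.random.default_rng(0)

def accumulate(drift, bound=10.0, noise=1.0, max_steps=10_000):
    """Accumulate noisy evidence samples until a bound is crossed.
    Returns (+1/-1 choice, number of samples consumed)."""
    x = 0.0
    for t in range(1, max_steps + 1):
        x += drift + noise * rng.standard_normal()
        if abs(x) >= bound:
            return np.sign(x), t
    return np.sign(x), max_steps

def summarize(drift, n=2000):
    results = [accumulate(drift) for _ in range(n)]
    acc = np.mean([c > 0 for c, _ in results])   # fraction of correct (+) choices
    rt = np.mean([t for _, t in results])        # mean number of samples = decision time
    return acc, rt

easy_acc, easy_rt = summarize(drift=0.5)   # strong evidence
hard_acc, hard_rt = summarize(drift=0.1)   # weak evidence
# harder decisions consume more samples and end at the bound less reliably
```

Whether those samples arrive from the senses or are retrieved from memory, as the authors propose, the same accumulation-to-bound arithmetic links difficulty to decision time.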
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xiaojian; Qiao, Qiao; Department of Radiotherapy, First Hospital of China Medical University, Shenyang
Purpose: To evaluate the efficiency of standard image-guided radiation therapy (IGRT) to account for lumpectomy cavity (LC) variation during whole-breast irradiation (WBI) and propose an adaptive strategy to improve dosimetry if IGRT fails to address the interfraction LC variations. Methods and Materials: Daily diagnostic-quality CT data acquired during IGRT in the boost stage using an in-room CT for 19 breast cancer patients treated with sequential boost after WBI in the prone position were retrospectively analyzed. Contours of the LC, treated breast, ipsilateral lung, and heart were generated by populating contours from planning CTs to boost fraction CTs using an auto-segmentation tool with manual editing. Three plans were generated on each fraction CT: (1) a repositioning plan by applying the original boost plan with the shift determined by IGRT; (2) an adaptive plan by modifying the original plan according to a fraction CT; and (3) a reoptimization plan by a full-scale optimization. Results: Significant variations were observed in LC. The change in LC volume at the first boost fraction ranged from a 70% decrease to a 50% increase of that on the planning CT. The adaptive and reoptimization plans were comparable. Compared with the repositioning plans, the adaptive plans led to an improvement in target coverage for an increased LC case (1 of 19, 7.5% increase in planning target volume evaluation volume V95%), and breast tissue sparing for an LC decrease larger than 35% (3 of 19, 7.5% decrease in breast evaluation volume V50%; P=.008). Conclusion: Significant changes in LC shape and volume at the time of boost that deviate from the original plan for WBI with sequential boost can be addressed by adaptive replanning at the first boost fraction.
Mauz, Elvira; von der Lippe, Elena; Allen, Jennifer; Schilling, Ralph; Müters, Stephan; Hoebel, Jens; Schmich, Patrick; Wetzstein, Matthias; Kamtsiuris, Panagiotis; Lange, Cornelia
2018-01-01
Population-based surveys currently face the problem of decreasing response rates. Mixed-mode designs are now being implemented more often to account for this, to improve sample composition and to reduce overall costs. This study examines whether a concurrent or sequential mixed-mode design achieves better results on a number of indicators of survey quality. Data were obtained from a population-based health interview survey of adults in Germany that was conducted as a methodological pilot study as part of the German Health Update (GEDA). Participants were randomly allocated to one of two surveys; each of the surveys had a different design. In the concurrent mixed-mode design (n = 617) two types of self-administered questionnaires (SAQ-Web and SAQ-Paper) and computer-assisted telephone interviewing were offered simultaneously to the respondents along with the invitation to participate. In the sequential mixed-mode design (n = 561), SAQ-Web was initially provided, followed by SAQ-Paper, with an option for a telephone interview being sent out together with the reminders at a later date. Finally, this study compared the response rates, sample composition, health indicators, item non-response, the scope of fieldwork and the costs of both designs. No systematic differences were identified between the two mixed-mode designs in terms of response rates, the socio-demographic characteristics of the achieved samples, or the prevalence rates of the health indicators under study. The sequential design gained a higher rate of online respondents. Very few telephone interviews were conducted for either design. With regard to data quality, the sequential design (which had more online respondents) showed less item non-response. There were minor differences between the designs in terms of their costs. Postage and printing costs were lower in the concurrent design, but labour costs were lower in the sequential design.
No differences in health indicators were found between the two designs. Modelling these results for higher response rates and larger net sample sizes indicated that the sequential design was more cost and time-effective. This study contributes to the research available on implementing mixed-mode designs as part of public health surveys. Our findings show that SAQ-Paper and SAQ-Web questionnaires can be combined effectively. Sequential mixed-mode designs with higher rates of online respondents may be of greater benefit to studies with larger net sample sizes than concurrent mixed-mode designs.
Sequential decision making in computational sustainability via adaptive submodularity
Krause, Andreas; Golovin, Daniel; Converse, Sarah J.
2015-01-01
Many problems in computational sustainability require making a sequence of decisions in complex, uncertain environments. Such problems are notoriously difficult. In this article, we review the recently discovered notion of adaptive submodularity, an intuitive diminishing returns condition that generalizes the classical notion of submodular set functions to sequential decision problems. Problems exhibiting the adaptive submodularity property can be efficiently and provably near-optimally solved using simple myopic policies. We illustrate this concept in several case studies of interest in computational sustainability: First, we demonstrate how it can be used to efficiently plan for resolving uncertainty in adaptive management scenarios. Second, we show how it applies to dynamic conservation planning for protecting endangered species, a case study carried out in collaboration with the US Geological Survey and the US Fish and Wildlife Service.
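A flavor of why myopic policies work well for (adaptive) submodular objectives can be given with the classic non-adaptive special case: greedy maximization of a coverage function, which enjoys the well-known (1 − 1/e) approximation guarantee. The sites and budget below are made up for illustration and are not from the article.

```python
def coverage(selected, sets):
    # submodular objective: number of distinct elements covered
    covered = set()
    for i in selected:
        covered |= sets[i]
    return len(covered)

def greedy(sets, budget):
    # myopic policy: repeatedly add the site with the largest marginal gain
    chosen = []
    for _ in range(budget):
        best = max((i for i in range(len(sets)) if i not in chosen),
                   key=lambda i: coverage(chosen + [i], sets))
        chosen.append(best)
    return chosen

# hypothetical conservation sites, each protecting a set of species
sites = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}, {2, 5}]
picked = greedy(sites, budget=2)   # covers all 6 species with 2 sites
```

Adaptive submodularity extends this diminishing-returns argument to policies that observe outcomes between decisions, which is what makes the myopic rule provably near-optimal in the sequential setting.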
Jakobi, Annika; Stützer, Kristin; Bandurska-Luque, Anna; Löck, Steffen; Haase, Robert; Wack, Linda-Jacqueline; Mönnich, David; Thorwarth, Daniel; Perez, Damien; Lühr, Armin; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Perrin, Rosalind; Richter, Christian
2015-01-01
To determine by treatment plan comparison differences in toxicity risk reduction for patients with head and neck squamous cell carcinoma (HNSCC) from proton therapy either used for complete treatment or sequential boost treatment only. For 45 HNSCC patients, intensity-modulated photon (IMXT) and proton (IMPT) treatment plans were created including a dose escalation via simultaneous integrated boost with a one-step adaptation strategy after 25 fractions for sequential boost treatment. Dose accumulation was performed for pure IMXT treatment, pure IMPT treatment and for a mixed modality treatment with IMXT for the elective target followed by a sequential boost with IMPT. Treatment plan evaluation was based on modern normal tissue complication probability (NTCP) models for mucositis, xerostomia, aspiration, dysphagia, larynx edema and trismus. Individual NTCP differences between IMXT and IMPT (ΔNTCP(IMXT-IMPT)) as well as between IMXT and the mixed modality treatment (ΔNTCP(IMXT-Mix)) were calculated. Target coverage was similar in all three scenarios. NTCP values could be reduced in all patients using IMPT treatment. However, ΔNTCP(IMXT-Mix) values were a factor of 2 to 10 smaller than ΔNTCP(IMXT-IMPT). Assuming a threshold of ≥ 10% NTCP reduction in xerostomia or dysphagia risk as a criterion for patient assignment to IMPT, less than 15% of the patients would be selected for a proton boost, while about 50% would be assigned to pure IMPT treatment. For mucositis and trismus, ΔNTCP ≥ 10% occurred in six and four patients, respectively, with pure IMPT treatment, while no such difference was identified with the proton boost. The use of IMPT generally reduces the expected toxicity risk while maintaining good tumor coverage in the examined HNSCC patients. A mixed modality treatment using IMPT solely for a sequential boost reduces the risk by 10% only in rare cases.
In contrast, pure IMPT treatment may be reasonable for about half of the examined patient cohort considering the toxicities xerostomia and dysphagia, if a feasible strategy for patient anatomy changes is implemented.
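NTCP models like those used in this comparison map a dose metric to a complication probability. One common logistic form is NTCP = 1 / (1 + exp(4·γ50·(1 − D/D50))); the parameters and doses below are illustrative stand-ins, not the paper's fitted values.

```python
import math

def ntcp_logistic(d, d50, gamma50):
    """Logistic NTCP model: complication probability as a function of dose.
    d50 is the dose giving 50% NTCP; gamma50 is the normalized slope at d50."""
    return 1.0 / (1.0 + math.exp(4.0 * gamma50 * (1.0 - d / d50)))

# hypothetical xerostomia-like response parameters (illustrative only)
d50, gamma50 = 40.0, 2.0
ntcp_imxt = ntcp_logistic(35.0, d50, gamma50)   # assumed mean organ dose, photons
ntcp_impt = ntcp_logistic(25.0, d50, gamma50)   # assumed lower mean dose, protons
delta_ntcp = ntcp_imxt - ntcp_impt              # the quantity used for patient selection
```

Patient assignment in the study then reduces to thresholding this difference (e.g. ΔNTCP ≥ 10%), which is why the small boost-only dose differences rarely qualified a patient for protons.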
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-31
... nitrogen oxides. xiii. The initials NPS mean or refer to National Park Service. xiv. The initials PM 2.5..., nitrogen deposition, and mercury emissions and deposition. The State spent considerable time and conducted sequential and extended hearings to develop a plan which seeks to balance a number of variables beyond those...
Thematic Unit Planning in Social Studies: Make It Focused and Meaningful
ERIC Educational Resources Information Center
Horton, Todd A.; Barnett, Jennifer A.
2008-01-01
Unit planning is perhaps the most difficult of the teacher duties to execute well. This paper offers suggestions for improving focus and increasing the meaningfulness of thematic unit content for students. Stressing the concept of a Big Understanding, it outlines 6 sequential steps in the creation of units which, when applied, not only establish a…
ERIC Educational Resources Information Center
Bailey, Suzanne Powers; Jeffers, Marcia
Eighteen interrelated, sequential lesson plans and supporting materials for teaching computer literacy at the elementary and secondary levels are presented. The activities, intended to be infused into the regular curriculum, do not require the use of a computer. The introduction presents background information on computer literacy, suggests a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Serin, E.; Codel, G.; Mabhouti, H.
Purpose: In small field geometries, electronic equilibrium can be lost, making it challenging for the dose-calculation algorithm to accurately predict the dose, especially in the presence of tissue heterogeneities. In this study, the dosimetric accuracy of the Monte Carlo (MC) advanced dose calculation and sequential algorithms of the Multiplan treatment planning system was investigated for small radiation fields incident on homogeneous and heterogeneous geometries. Methods: Small open fields of the fixed cones of a Cyberknife M6 unit, 100 to 500 mm², were used for this study. The fields were incident on an in-house phantom containing lung, air, and bone inhomogeneities, and also on a homogeneous phantom. Using the same film batch, the net OD to dose calibration curve was obtained using CK with the 60 mm fixed cone by delivering 0–800 cGy. Films were scanned 48 hours after irradiation using an Epson 1000XL flatbed scanner. The dosimetric accuracy of the MC and sequential algorithms in the presence of the inhomogeneities was compared against EBT3 film dosimetry. Results: Open field tests in a homogeneous phantom showed good agreement between the two algorithms and film measurement. For the MC algorithm, the minimum gamma analysis passing rates between measured and calculated dose distributions were 99.7% and 98.3% for homogeneous and inhomogeneous fields, respectively, in the case of lung and bone. For the sequential algorithm, the minimum gamma analysis passing rates were 98.9% and 92.5% for homogeneous and inhomogeneous fields, respectively, for all cone sizes used. In the case of the air heterogeneity, the differences were larger for both calculation algorithms. Overall, when compared to measurement, the MC algorithm had better agreement than the sequential algorithm. Conclusion: The Monte Carlo calculation algorithm in the Multiplan treatment planning system is an improvement over the existing sequential algorithm. Dose discrepancies were observed in the presence of air inhomogeneities.
Simultaneous integrated vs. sequential boost in VMAT radiotherapy of high-grade gliomas.
Farzin, Mostafa; Molls, Michael; Astner, Sabrina; Rondak, Ina-Christine; Oechsner, Markus
2015-12-01
In 20 patients with high-grade gliomas, we compared two methods of planning for volumetric-modulated arc therapy (VMAT): simultaneous integrated boost (SIB) vs. sequential boost (SEB). The investigation focused on the analysis of dose distributions in the target volumes and the organs at risk (OARs). After contouring the target volumes [planning target volumes (PTVs) and boost volumes (BVs)] and OARs, SIB planning and SEB planning were performed. The SEB method consisted of two plans: in the first plan the PTV received 50 Gy in 25 fractions with a 2-Gy dose per fraction. In the second plan the BV received 10 Gy in 5 fractions with a dose per fraction of 2 Gy. The doses of both plans were summed up to show the total doses delivered. In the SIB method the PTV received 54 Gy in 30 fractions with a dose per fraction of 1.8 Gy, while the BV received 60 Gy in the same fraction number but with a dose per fraction of 2 Gy. All of the OARs showed higher doses (Dmax and Dmean) in the SEB method when compared with the SIB technique. The differences between the two methods were statistically significant in almost all of the OARs. Analysing the total doses of the target volumes we found dose distributions with similar homogeneities and comparable total doses. Our analysis shows that the SIB method offers advantages over the SEB method in terms of sparing OARs.
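The two schedules compared above differ only in how the boost dose is delivered; the total-dose arithmetic is easy to verify (all numbers taken from the abstract):

```python
# SEB: 25 fractions of 2 Gy to the PTV, then a 5 x 2 Gy sequential boost
seb_ptv_dose = 25 * 2.0                       # 50 Gy to the PTV
seb_boost_total = seb_ptv_dose + 5 * 2.0      # 60 Gy to the boost volume, 30 fractions overall

# SIB: 30 fractions, 1.8 Gy/fraction to the PTV and 2 Gy/fraction to the boost volume
sib_ptv_dose = 30 * 1.8                       # 54 Gy to the PTV
sib_boost_dose = 30 * 2.0                     # 60 Gy to the boost volume
```

Both techniques deliver 60 Gy to the boost volume in 30 fractions, so the comparison isolates how the delivery strategy, not the prescription, affects dose to the organs at risk.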
Odéen, Henrik; Todd, Nick; Diakite, Mahamadou; Minalga, Emilee; Payne, Allison; Parker, Dennis L.
2014-01-01
Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and vary the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C, for variable density sampling in zero and two dimensions, respectively. 
Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm³ FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations. PMID:25186406
Vuckovic, Anita; Kwantes, Peter J; Humphreys, Michael; Neal, Andrew
2014-03-01
Signal Detection Theory (SDT; Green & Swets, 1966) is a popular tool for understanding decision making. However, it does not account for the time taken to make a decision, nor why response bias might change over time. Sequential sampling models provide a way of accounting for speed-accuracy trade-offs and response bias shifts. In this study, we test the validity of a sequential sampling model of conflict detection in a simulated air traffic control task by assessing whether two of its key parameters respond to experimental manipulations in a theoretically consistent way. Through experimental instructions, we manipulated participants' response bias and the relative speed or accuracy of their responses. The sequential sampling model was able to replicate the trends in the conflict responses as well as response time across all conditions. Consistent with our predictions, manipulating response bias was associated primarily with changes in the model's Criterion parameter, whereas manipulating speed-accuracy instructions was associated with changes in the Threshold parameter. The success of the model in replicating the human data suggests we can use the parameters of the model to gain an insight into the underlying response bias and speed-accuracy preferences common to dynamic decision-making tasks.
A random walk rule for phase I clinical trials.
Durham, S D; Flournoy, N; Rosenberger, W F
1997-06-01
We describe a family of random walk rules for the sequential allocation of dose levels to patients in a dose-response study, or phase I clinical trial. Patients are sequentially assigned the next higher, same, or next lower dose level according to some probability distribution, which may be determined by ethical considerations as well as the patient's response. It is shown that one can choose these probabilities in order to center dose level assignments unimodally around any target quantile of interest. Estimation of the quantile is discussed; the maximum likelihood estimator and its variance are derived under a two-parameter logistic distribution, and the maximum likelihood estimator is compared with other nonparametric estimators. Random walk rules have clear advantages: they are simple to implement, and finite and asymptotic distribution theory is completely worked out. For a specific random walk rule, we compute finite and asymptotic properties and give examples of its use in planning studies. Having the finite distribution theory available and tractable obviates the need for elaborate simulation studies to analyze the properties of the design. The small sample properties of our rule, as determined by exact theory, compare favorably to those of the continual reassessment method, determined by simulation.
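The up-and-down allocation idea can be sketched with a Durham–Flournoy-style biased-coin walk (the dose grid, toxicity curve, and target quantile below are invented for illustration): de-escalate after a toxicity, otherwise escalate with probability Γ/(1 − Γ), which centers dose assignments near the target quantile.

```python
import numpy as np

rng = np.random.default_rng(1)

doses = np.linspace(1, 10, 10)                 # hypothetical dose grid
def tox_prob(d):                               # assumed logistic dose-toxicity curve
    return 1.0 / (1.0 + np.exp(-(d - 6.0)))

target = 0.25                                  # target toxicity quantile (Gamma)
b = target / (1 - target)                      # biased-coin escalation probability

level = 0
visits = np.zeros(len(doses), dtype=int)
for _ in range(5000):                          # sequential patient allocations
    visits[level] += 1
    tox = rng.random() < tox_prob(doses[level])
    if tox:
        level = max(level - 1, 0)              # step down after a toxicity
    elif rng.random() < b:
        level = min(level + 1, len(doses) - 1) # step up with probability b
mode = doses[np.argmax(visits)]                # assignments pile up near the target dose
```

Because the walk's stationary distribution is unimodal around the dose whose toxicity probability equals Γ, the most-visited level estimates the target quantile without any model fitting, which is the simplicity the abstract highlights.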
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature.
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
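A minimal sketch of the two-stage version (with made-up effect sizes and per-arm sample size, not the paper's designs) shows both the fixed total sample size and the interim selection step:

```python
import numpy as np

rng = np.random.default_rng(3)

def two_stage_dtl(means, n_per_arm, sims=4000):
    """Two-stage drop-the-losers sketch: every experimental arm plus control is
    assessed at a prescheduled interim; only the best-performing arm and the
    control continue to stage two. Returns per-arm selection probabilities and
    the total sample size, which is fixed in advance."""
    k = len(means)
    picks = np.zeros(k, dtype=int)
    for _ in range(sims):
        interim = rng.normal(means, 1.0, size=(n_per_arm, k)).mean(axis=0)
        picks[np.argmax(interim)] += 1
    # (k experimental + 1 control) arms at stage 1, best arm + control at stage 2
    total_n = (k + 1) * n_per_arm + 2 * n_per_arm
    return picks / sims, total_n

select_prob, total_n = two_stage_dtl([0.0, 0.0, 0.5], n_per_arm=50)
```

Unlike a group-sequential multi-arm multi-stage trial, `total_n` here is a constant known at the design stage, which is the budgeting advantage the paper trades a little average efficiency for.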
Capture and culture: organizational identity in New York Blue Cross.
Brown, L D
1991-01-01
This article explores the changing corporate culture of New York's Blue Cross and Blue Shield plan in its first fifty years. As the plan grew, corporate culture evolved over four sequential phases: the plan first had the character of an experiment, then that of a movement, a business, and, most recently, a corporate agglomerate. Accompanying this evolution has been an identity crisis, as the need to adapt to a turbulent environment has challenged the plan's settled understanding of its core values, namely, voluntarism, community, and cooperation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Ray-Bing; Wang, Weichung; Jeff Wu, C. F.
2017-04-12
A numerical method, called OBSM, was recently proposed which employs overcomplete basis functions to achieve sparse representations. While the method can handle non-stationary response without the need of inverting large covariance matrices, it lacks the capability to quantify uncertainty in predictions. We address this issue by proposing a Bayesian approach which first imposes a normal prior on the large space of linear coefficients, then applies the MCMC algorithm to generate posterior samples for predictions. From these samples, Bayesian credible intervals can then be obtained to assess prediction uncertainty. A key application for the proposed method is the efficient construction of sequential designs. Several sequential design procedures with different infill criteria are proposed based on the generated posterior samples. Numerical studies show that the proposed schemes are capable of solving problems of positive point identification, optimization, and surrogate fitting.
Van Parijs, Hilde; Reynders, Truus; Heuninckx, Karina; Verellen, Dirk; Storme, Guy; De Ridder, Mark
2014-01-01
Breast conserving surgery followed by whole breast irradiation is widely accepted as standard of care for early breast cancer. Addition of a boost dose to the initial tumor area further reduces local recurrences. We investigated the dosimetric benefits of a simultaneously integrated boost (SIB) compared to a sequential boost to hypofractionate the boost volume, while maintaining normofractionation on the breast. For 10 patients 4 treatment plans were deployed, 1 with a sequential photon boost, and 3 with different SIB techniques: on a conventional linear accelerator, helical TomoTherapy, and static TomoDirect. Dosimetric comparison was performed. PTV-coverage was good in all techniques. Conformity was better with all SIB techniques compared to sequential boost (P = 0.0001). There was less dose spilling to the ipsilateral breast outside the PTVboost (P = 0.04). The dose to the organs at risk (OAR) was not influenced by SIB compared to sequential boost. Helical TomoTherapy showed a higher mean dose to the contralateral breast, but less than 5 Gy for each patient. SIB showed less dose spilling within the breast and equal dose to OAR compared to sequential boost. Both helical TomoTherapy and the conventional technique delivered acceptable dosimetry. SIB seems a safe alternative and can be implemented in clinical routine. PMID:25162031
Logistics planning for phased programs.
NASA Technical Reports Server (NTRS)
Cook, W. H.
1973-01-01
It is pointed out that the proper and early integration of logistics planning into the phased program planning process will drastically reduce logistics costs. Phased project planning is a phased approach to the planning, approval, and conduct of major research and development activity. It provides a progressive build-up of knowledge of all aspects of the program. Elements of logistics are discussed together with aspects of integrated logistics support, logistics program planning, and logistics activities for phased programs. Continuing logistics support can be assured only if there is a comprehensive sequential listing of all logistics activities tied to the program schedule and a real-time inventory of assets.
Evaluating projects for improving fish and wildlife habitat on National Forests.
Fred H. Everest; Daniel R. Talhelm
1982-01-01
Recent legislation (P.L. 93-452; P.L. 94-588) has emphasized improvement of fish and wildlife habitat on lands of the National Forest System. A sequential procedure has been developed for screening potential projects to identify those producing the greatest fishery benefits. The procedure, which includes program planning, project planning, and intensive benefit/cost...
ERIC Educational Resources Information Center
Martin, Nancy
Presented is a technical report concerning the use of a mathematical model describing certain aspects of the duplication and selection processes in natural genetic adaptation. This reproductive plan/model occurs in artificial genetics (the use of ideas from genetics to develop general problem solving techniques for computers). The reproductive…
The role of action control and action planning on fruit and vegetable consumption.
Zhou, Guangyu; Gan, Yiqun; Miao, Miao; Hamilton, Kyra; Knoll, Nina; Schwarzer, Ralf
2015-08-01
Globally, fruit and vegetable intake is lower than recommended despite being an important component of a healthy diet. Adopting or maintaining a sufficient amount of fruit and vegetables in one's diet may require not only motivation but also self-regulatory processes. Action control and action planning are two key volitional determinants that have been identified in the literature; however, it is not fully understood how these two factors operate between intention and behavior. Thus, the aim of the current study was to explore the roles of action control and action planning as mediators between intentions and dietary behavior. A longitudinal study with three time points was conducted. Participants (N = 286) were undergraduate students invited to participate in a health behavior survey. At baseline (Time 1), measures of intention and fruit and vegetable intake were assessed. Two weeks later (Time 2), action control and action planning were assessed as putative sequential mediators. At Time 3 (two weeks after Time 2), fruit and vegetable consumption was measured as the outcome. The results revealed that action control and action planning sequentially mediated between intention and subsequent fruit and vegetable intake, controlling for baseline behavior. Both self-regulatory constructs, action control and action planning, make a difference when moving from motivation to action. Our preliminary evidence, therefore, suggests that planning may be more proximal to fruit and vegetable intake than action control. Further research, however, needs to be undertaken to substantiate this conclusion. Copyright © 2015 Elsevier Ltd. All rights reserved.
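The serial (sequential) mediation chain tested above, intention → action control → action planning → intake, can be sketched numerically. The sketch below is an illustration only: it simulates data under assumed path coefficients and estimates the indirect effect as the product of simple per-path regression slopes, which is adequate here only because the simulated chain is Markovian; none of the numbers are the study's estimates.

```python
import numpy as np

# Simulate a serial mediation chain under assumed true path coefficients.
rng = np.random.default_rng(7)
n = 286
intention = rng.normal(size=n)
control = 0.5 * intention + rng.normal(size=n)   # path a (assumed)
planning = 0.6 * control + rng.normal(size=n)    # path d (assumed)
intake = 0.4 * planning + rng.normal(size=n)     # path b (assumed)

def slope(y, x):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(control, intention)
d = slope(planning, control)
b = slope(intake, planning)
indirect = a * d * b   # serial indirect effect of intention on intake
```

In practice the paths would be estimated with the proper covariate-adjusted regressions (or SEM) and the indirect effect bootstrapped; this sketch only shows the product-of-paths logic.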
NASA Astrophysics Data System (ADS)
Moores, John E.; Francis, Raymond; Mader, Marianne; Osinski, G. R.; Barfoot, T.; Barry, N.; Basic, G.; Battler, M.; Beauchamp, M.; Blain, S.; Bondy, M.; Capitan, R.-D.; Chanou, A.; Clayton, J.; Cloutis, E.; Daly, M.; Dickinson, C.; Dong, H.; Flemming, R.; Furgale, P.; Gammel, J.; Gharfoor, N.; Hussein, M.; Grieve, R.; Henrys, H.; Jaziobedski, P.; Lambert, A.; Leung, K.; Marion, C.; McCullough, E.; McManus, C.; Neish, C. D.; Ng, H. K.; Ozaruk, A.; Pickersgill, A.; Preston, L. J.; Redman, D.; Sapers, H.; Shankar, B.; Singleton, A.; Souders, K.; Stenning, B.; Stooke, P.; Sylvester, P.; Tornabene, L.
2012-12-01
A Mission Control Architecture is presented for a Robotic Lunar Sample Return Mission which builds upon the experience of the landed missions of the NASA Mars Exploration Program. This architecture consists of four separate processes working in parallel at Mission Control and achieving buy-in for plans sequentially, instead of simultaneously, from all members of the team. These four processes were science processing, science interpretation, planning, and mission evaluation. Science Processing was responsible for creating products from data downlinked from the field and was organized by instrument. Science Interpretation was responsible for determining whether or not science goals were being met and what measurements needed to be taken to satisfy them. These two processes were assisted by the Planning process, responsible for scheduling and sequencing observations, and by the Evaluation process, which fostered inter-process communication, reporting, and documentation. This organization is advantageous for its flexibility, as shown by the ability of the structure to produce plans for the rover every two hours, the rapidity with which Mission Control team members may be trained, and the relatively small size of each individual team. This architecture was tested in an analogue mission to the Sudbury impact structure from June 6-17, 2011. The rover used was capable of developing a network of locations that could be revisited using a teach-and-repeat method. This allowed the science team to process several different outcrops in parallel, downselecting at each stage to ensure that the samples selected for caching were the most representative of the site. Over the course of 10 days, 18 rock samples were collected from 5 different outcrops; 182 individual field activities, such as roving or acquiring an image mosaic or other data product, were completed within 43 command cycles; and the rover travelled over 2200 m. Communications passes were filled to 74% of their data transfer capacity. Sample triage was simulated to allow down-selection to 1 kg of material for return to Earth.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malyapa, Robert; Lowe, Matthew; Christie Medical Physics and Engineering, The Christie NHS Foundation Trust, Manchester
Purpose: To evaluate the robustness of head and neck plans for treatment with intensity modulated proton therapy to range and setup errors, and to establish robustness parameters for the planning of future head and neck treatments. Methods and Materials: Ten previously treated patients were evaluated in terms of robustness to range and setup errors. Error bar dose distributions were generated for each plan, from which several metrics were extracted and used to define a robustness database of acceptable parameters over all analyzed plans. The patients were treated in sequentially delivered series, and plans were evaluated both for the first series and for the combined error over the whole treatment. To demonstrate the application of such a database in the head and neck, an alternative treatment plan was generated for 1 patient using a simultaneous integrated boost (SIB) approach and plans of differing numbers of fields. Results: The robustness database for the treatment of head and neck patients is presented. In an example case, comparison of single and multiple field plans against the database shows clear improvements in robustness from using multiple fields. A comparison of sequentially delivered series and an SIB approach for this patient shows both to be of comparable robustness, although the SIB approach shows a slightly greater sensitivity to uncertainties. Conclusions: A robustness database was created for the treatment of head and neck patients with intensity modulated proton therapy based on previous clinical experience. This will allow the identification of future plans that may benefit from alternative planning approaches to improve robustness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H; Dong, P; Xing, L
Purpose: Traditional radiotherapy inverse planning relies on weighting factors to phenomenologically balance the conflicting criteria for different structures. The resulting manual trial-and-error determination of the weights has long been recognized as the most time-consuming part of treatment planning. The purpose of this work is to develop an inverse planning framework that parameterizes the inter-structural dosimetric tradeoff with physically more meaningful quantities to simplify the search for a clinically sensible plan. Methods: A permissible dosimetric uncertainty is introduced for each of the structures to balance their conflicting dosimetric requirements. The inverse planning is then formulated as a convex feasibility problem, which aims to generate plans with acceptable dosimetric uncertainties. A sequential procedure (SP) is derived to decompose the model into three submodels that constrain the uncertainty in the planning target volume (PTV), the critical structures, and all other structures to spare, sequentially. The proposed technique is applied to plan a liver case and a head-and-neck case and compared with a conventional approach. Results: Our results show that the strategy is able to generate clinically sensible plans with little trial-and-error. In the liver IMRT case, the fractional volumes of liver and heart above 20 Gy are found to be 22% and 10%, respectively, which are 15.1% and 33.3% lower than those of the counterpart conventional plan while maintaining the same PTV coverage. The planning of the head and neck IMRT case showed the same level of success, with DVHs for all organs at risk and the PTV very competitive with those of a counterpart plan. Conclusion: A new inverse planning framework has been established. With physically more meaningful modeling of the inter-structural tradeoff, the technique enables us to substantially reduce the need for trial-and-error adjustment of the model parameters and opens new opportunities for incorporating prior knowledge to facilitate the treatment planning process.
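The convex-feasibility idea above, find beam weights whose dose falls inside a permitted interval per voxel, enforced sequentially for the target and then the critical structure, can be caricatured on a toy problem. Everything below (matrix sizes, bounds, the projected-gradient scheme) is an invented illustration, not the authors' solver; the bounds are constructed around a reference weight vector so the problem is feasible by design.

```python
import numpy as np

# Toy inverse planning as a convex feasibility problem: find nonnegative
# beamlet weights w such that the dose D @ w lies inside [lo, hi] for
# every voxel. Constraints are enforced in the sequential spirit of the
# abstract: PTV voxels first, then the critical structure.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(6, 3))        # 6 voxels x 3 beamlets (toy)
ptv, oar = [0, 1, 2], [3, 4, 5]               # voxel indices per structure
w_ref = np.array([0.5, 0.8, 0.3])             # used only to build feasible bounds
lo, hi = D @ w_ref - 0.05, D @ w_ref + 0.05   # permissible dosimetric uncertainty

def enforce(w, rows, iters=3000):
    """Projected-gradient steps pulling D[rows] @ w into [lo, hi], w >= 0."""
    A = D[rows]
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # safe step for this quadratic
    for _ in range(iters):
        d = A @ w
        r = np.clip(d, lo[rows], hi[rows]) - d   # move toward the interval
        w = np.maximum(w + step * A.T @ r, 0.0)  # keep weights nonnegative
    return w

w = enforce(np.zeros(3), ptv)        # stage 1: target coverage only
w = enforce(w, ptv + oar)            # stage 2: add the critical structure
dose = D @ w
violation = np.max(np.maximum(lo - dose, 0) + np.maximum(dose - hi, 0))
```

The staging mirrors the paper's sequential decomposition in spirit only; a production solver would use proper projection or interior-point methods.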
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odéen, Henrik, E-mail: h.odeen@gmail.com; Diakite, Mahamadou; Todd, Nick
2014-09-15
Purpose: To investigate k-space subsampling strategies to achieve fast, large field-of-view (FOV) temperature monitoring using segmented echo planar imaging (EPI) proton resonance frequency shift thermometry for MR guided high intensity focused ultrasound (MRgHIFU) applications. Methods: Five different k-space sampling approaches were investigated, varying sample spacing (equally vs nonequally spaced within the echo train), sampling density (variable sampling density in zero, one, and two dimensions), and utilizing sequential or centric sampling. Three of the schemes utilized sequential sampling with the sampling density varied in zero, one, and two dimensions, to investigate sampling the k-space center more frequently. Two of the schemes utilized centric sampling to acquire the k-space center with a longer echo time for improved phase measurements, and varied the sampling density in zero and two dimensions, respectively. Phantom experiments and a theoretical point spread function analysis were performed to investigate their performance. Variable density sampling in zero and two dimensions was also implemented in a non-EPI GRE pulse sequence for comparison. All subsampled data were reconstructed with a previously described temporally constrained reconstruction (TCR) algorithm. Results: The accuracy of each sampling strategy in measuring the temperature rise in the HIFU focal spot was measured in terms of the root-mean-square-error (RMSE) compared to fully sampled “truth.” For the schemes utilizing sequential sampling, the accuracy was found to improve with the dimensionality of the variable density sampling, giving values of 0.65 °C, 0.49 °C, and 0.35 °C for density variation in zero, one, and two dimensions, respectively. The schemes utilizing centric sampling were found to underestimate the temperature rise, with RMSE values of 1.05 °C and 1.31 °C for variable density sampling in zero and two dimensions, respectively. Similar subsampling schemes with variable density sampling implemented in zero and two dimensions in a non-EPI GRE pulse sequence both resulted in accurate temperature measurements (RMSE of 0.70 °C and 0.63 °C, respectively). With sequential sampling in the described EPI implementation, temperature monitoring over a 192 × 144 × 135 mm³ FOV with a temporal resolution of 3.6 s was achieved, while keeping the RMSE compared to fully sampled “truth” below 0.35 °C. Conclusions: When segmented EPI readouts are used in conjunction with k-space subsampling for MR thermometry applications, sampling schemes with sequential sampling, with or without variable density sampling, obtain accurate phase and temperature measurements when using a TCR reconstruction algorithm. Improved temperature measurement accuracy can be achieved with variable density sampling. Centric sampling leads to phase bias, resulting in temperature underestimations.
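One ingredient the abstract varies, sampling density across k-space, can be sketched as a simple 1D mask generator that always keeps the k-space center fully sampled and thins the periphery. The density law, acceleration factor, and center fraction below are invented illustration parameters, not the paper's pulse-sequence design.

```python
import numpy as np

def vd_mask(n, accel=3.0, center_frac=0.1, power=2.0, seed=0):
    """1D variable-density k-space mask: dense center, sparse periphery."""
    rng = np.random.default_rng(seed)
    k = np.abs(np.linspace(-1.0, 1.0, n))        # normalized |k| coordinate
    prob = (1.0 - k) ** power                    # density decays toward edges
    # rescale so the expected number of samples is ~ n / accel
    prob = np.minimum(1.0, prob * (n / accel) / prob.sum())
    mask = rng.random(n) < prob
    mask[k < center_frac] = True                 # always keep the center
    return mask

m = vd_mask(256)
center_density = m[118:138].mean()               # fraction sampled near k = 0
edge_density = np.r_[m[:20], m[-20:]].mean()     # fraction sampled at edges
```

A 2D variable-density scheme is the same idea applied along both phase-encode axes; the acquisition order (sequential vs centric) would then be a separate choice layered on top of the mask.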
Phosphorus Concentrations in Sequentially Fractionated Soil Samples as Affected by Digestion Methods
do Nascimento, Carlos A. C.; Pagliari, Paulo H.; Schmitt, Djalma; He, Zhongqi; Waldrip, Heidi
2015-01-01
Sequential fractionation has helped improve our understanding of the lability and bioavailability of P in soil. Nevertheless, there have been no reports on how manipulation of the different fractions prior to analysis affects the measured total P (TP) concentrations. This study investigated the effects of sample digestion, filtration, and acidification on the TP concentrations determined by ICP-OES in 20 soil samples. Total P in extracts was determined by ICP-OES without digestion, following block digestion, or following autoclave digestion. The effects of sample filtration and acidification of undigested alkaline extracts prior to ICP-OES were also evaluated. Results showed that TP concentrations were greatest in the block-digested extracts, though the variability introduced by block digestion was also the highest. Acidification of NaHCO3 extracts resulted in lower TP concentrations, while acidification of NaOH extracts randomly increased or decreased TP concentrations. The precision observed with ICP-OES of undigested extracts suggests this should be the preferred method for TP determination in sequentially extracted samples. Thus, the observations reported in this work should help guide sample handling for P determination, thereby improving its precision. The results are also useful for comparing and discussing literature data when sample treatments differ. PMID:26647644
Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Abotteen, K. M. (Principal Investigator)
1980-01-01
The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportion estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared error than the estimates produced using either simple random sampling or Procedure 1.
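The core of any Bayes sequential proportion estimator is a posterior that is updated label by label. The sketch below is a generic conjugate Beta-Binomial update, not the LACIE Procedure 1 or the authors' exact estimator, and the label stream and prior are invented for illustration.

```python
# Generic sketch: sequential Bayesian updating of a proportion with a
# conjugate Beta prior, one analyst label (0/1) at a time.
def update(alpha, beta, labels):
    """Update Beta(alpha, beta) with a stream of 0/1 analyst labels."""
    for y in labels:
        alpha += y
        beta += 1 - y
    return alpha, beta

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

# Start from a vague Beta(1, 1) prior; observe 7 positive labels out of 10.
a, b = update(1.0, 1.0, [1, 1, 0, 1, 1, 1, 0, 1, 0, 1])
print(posterior_mean(a, b))   # 8/12 ≈ 0.667
```

Because the update is one observation at a time, a stopping rule (e.g., stop when the posterior interval is narrow enough) drops in naturally, which is what makes the procedure sequential.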
Chung, Sukhoon; Rhee, Hyunsill; Suh, Yongmoo
2010-01-01
Objectives This study sought to find answers to the following questions: 1) Can we predict whether a patient will revisit a healthcare center? 2) Can we anticipate the diseases of patients who revisit the center? Methods For the first question, we applied 5 classification algorithms (decision tree, artificial neural network, logistic regression, Bayesian networks, and Naïve Bayes) and the stacking-bagging method for building classification models. To solve the second question, we performed sequential pattern analysis. Results We determined: 1) In general, the most influential variables affecting whether a patient of a public healthcare center will revisit it are personal burden, insurance bill, period of prescription, age, systolic pressure, name of disease, and postal code. 2) The best plain classification model depends on the dataset. 3) Based on average classification accuracy, the proposed stacking-bagging method outperformed all traditional classification models, and our sequential pattern analysis revealed 16 sequential patterns. Conclusions Classification models and sequential patterns can help public healthcare centers plan and implement healthcare service programs and businesses that are more appropriate to local residents, encouraging them to revisit public health centers. PMID:21818426
Reactivation, Replay, and Preplay: How It Might All Fit Together
Buhry, Laure; Azizi, Amir H.; Cheng, Sen
2011-01-01
Sequential activation of neurons that occurs during “offline” states, such as sleep or awake rest, is correlated with neural sequences recorded during preceding exploration phases. This so-called reactivation, or replay, has been observed in a number of different brain regions such as the striatum, prefrontal cortex, primary visual cortex and, most prominently, the hippocampus. Reactivation largely co-occurs with hippocampal sharp waves/ripples, brief high-frequency bursts in the local field potential. Here, we first review the mounting evidence for the hypothesis that reactivation is the neural mechanism for memory consolidation during sleep. We then discuss recent results that suggest that offline sequential activity in the waking state might not be simple repetitions of previously experienced sequences. Some offline sequential activity occurs before animals are exposed to a novel environment for the first time, and some sequences activated offline correspond to trajectories never experienced by the animal. We propose a conceptual framework for the dynamics of offline sequential activity that can parsimoniously describe a broad spectrum of experimental results. These results point to a potentially broader role of offline sequential activity in cognitive functions such as maintenance of spatial representation, learning, or planning. PMID:21918724
Phosphorus concentrations in sequentially fractionated soil samples as affected by digestion methods
USDA-ARS?s Scientific Manuscript database
Sequential fractionation has been used for several decades for improving our understanding on the effects of agricultural practices and management on the lability and bioavailability of phosphorus in soil, manure, and other soil amendments. Nevertheless, there have been no reports on how manipulatio...
Sample size determination in group-sequential clinical trials with two co-primary endpoints
Asakura, Koko; Hamasaki, Toshimitsu; Sugimoto, Tomoyuki; Hayashi, Kenichi; Evans, Scott R; Sozu, Takashi
2014-01-01
We discuss sample size determination in group-sequential designs with two endpoints as co-primary. We derive the power and sample size within two decision-making frameworks. One is to claim the test intervention’s benefit relative to control when superiority is achieved for the two endpoints at the same interim timepoint of the trial. The other is to claim benefit when superiority is achieved for the two endpoints at any interim timepoint, not necessarily simultaneously. We evaluate the behavior of sample size and power with varying design elements and provide a real example to illustrate the proposed sample size methods. In addition, we discuss sample size recalculation based on observed data and evaluate its impact on the power and Type I error rate. PMID:24676799
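The defining feature of co-primary endpoints, that benefit is claimed only when BOTH tests reject, can be illustrated by simulation. The sketch below is a deliberately simplified single-look version (no interim analyses), with effect sizes and endpoint correlation chosen as illustrative assumptions, not values from the paper.

```python
import numpy as np

def power_two_coprimary(n, delta1, delta2, rho, alpha_crit=1.959964,
                        sims=20000, seed=0):
    """Simulated power when BOTH one-sided z-tests must reject.

    n is the per-arm sample size; each z-statistic is approximately
    N(delta * sqrt(n/2), 1), correlated across endpoints with rho.
    """
    rng = np.random.default_rng(seed)
    means = np.array([delta1, delta2]) * np.sqrt(n / 2.0)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal(means, cov, size=sims)
    return float(np.mean((z[:, 0] > alpha_crit) & (z[:, 1] > alpha_crit)))

# Requiring both endpoints lowers power relative to a single endpoint,
# which is why co-primary designs need larger sample sizes.
p_both = power_two_coprimary(150, 0.4, 0.4, rho=0.5)
```

Inverting this relationship, searching for the smallest n giving the target power, is the sample-size-determination step; the group-sequential version would add interim looks with their own boundaries.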
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-04-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.
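The simulation logic behind such a study can be sketched directly: draw clustered samples, apply a decision rule, and count misclassifications. The decision rule, program threshold, and the way cluster correlation is induced below are all invented illustration choices, not the study's parameters.

```python
import random

# Illustrative simulation of classification error for a 67x3 cluster-LQAS
# design. Intracluster correlation is induced by jittering each cluster's
# prevalence around the true value.
def lqas_error(true_prev, decision_rule, clusters=67, per_cluster=3,
               jitter=0.05, sims=2000, seed=1):
    rng = random.Random(seed)
    misclassified = 0
    for _ in range(sims):
        cases = 0
        for _ in range(clusters):
            p = min(max(rng.gauss(true_prev, jitter), 0.0), 1.0)
            cases += sum(rng.random() < p for _ in range(per_cluster))
        classified_high = cases > decision_rule   # LQAS decision rule d
        truly_high = true_prev > 0.10             # assumed 10% threshold
        misclassified += classified_high != truly_high
    return misclassified / sims

err_low = lqas_error(0.05, decision_rule=20)    # well below the threshold
err_high = lqas_error(0.20, decision_rule=20)   # well above the threshold
```

As expected for LQAS, error is small far from the threshold and concentrates near it; a sequential variant would simply stop early once the decision rule can no longer change.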
Sequential extraction procedures are used to determine the solid-phase association in which elements of interest exist in soil and sediment matrices. Foundational work by Tessier et al. (1) has found widespread acceptance and has worked tolerably as an operational definition for...
Sequential Requests and the Problem of Message Sampling.
ERIC Educational Resources Information Center
Cantrill, James Gerard
S. Jackson and S. Jacobs's criticism of "single message" designs in communication research served as a framework for a study that examined the differences between various sequential request paradigms. The study sought to answer the following questions: (1) What were the most naturalistic request sequences assured to replicate…
Sequential Multiplex Analyte Capturing for Phosphoprotein Profiling*
Poetz, Oliver; Henzler, Tanja; Hartmann, Michael; Kazmaier, Cornelia; Templin, Markus F.; Herget, Thomas; Joos, Thomas O.
2010-01-01
Microarray-based sandwich immunoassays can simultaneously detect dozens of proteins. However, their use in quantifying large numbers of proteins is hampered by cross-reactivity and incompatibilities caused by the immunoassays themselves. Sequential multiplex analyte capturing addresses these problems by repeatedly probing the same sample with different sets of antibody-coated, magnetic suspension bead arrays. As a miniaturized immunoassay format, suspension bead array-based assays fulfill the criteria of the ambient analyte theory, and our experiments reveal that the analyte concentrations are not significantly changed. The value of sequential multiplex analyte capturing was demonstrated by probing tumor cell line lysates for the abundance of seven different receptor tyrosine kinases and their degree of phosphorylation and by measuring the complex phosphorylation pattern of the epidermal growth factor receptor in the same sample from the same cavity. PMID:20682761
Habitual control of goal selection in humans
Cushman, Fiery; Morris, Adam
2015-01-01
Humans choose actions based on both habit and planning. Habitual control is computationally frugal but adapts slowly to novel circumstances, whereas planning is computationally expensive but can adapt swiftly. Current research emphasizes the competition between habits and plans for behavioral control, yet many complex tasks instead favor their integration. We consider a hierarchical architecture that exploits the computational efficiency of habitual control to select goals while preserving the flexibility of planning to achieve those goals. We formalize this mechanism in a reinforcement learning setting, illustrate its costs and benefits, and experimentally demonstrate its spontaneous application in a sequential decision-making task. PMID:26460050
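The hierarchical architecture described above, cheap habitual values selecting WHICH goal to pursue, with expensive planning working out HOW to reach it, can be illustrated with a toy agent. The grid, goals, and cached values below are assumptions for illustration, not the authors' task or model.

```python
from collections import deque

# Toy habit-over-goals agent: a cached value table picks the goal,
# breadth-first search (the "planner") computes the route to it.
GRID = 5
GOALS = {"water": (4, 4), "food": (0, 4)}
habit_value = {"water": 0.2, "food": 0.9}   # habitually learned goal values

def plan_path(start, goal):
    """BFS shortest path on the grid; the flexible planning component."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < GRID and 0 <= ny < GRID and (nx, ny) not in seen:
                seen.add((nx, ny))
                frontier.append(((nx, ny), path + [(nx, ny)]))

goal_name = max(habit_value, key=habit_value.get)   # frugal habitual choice
path = plan_path((0, 0), GOALS[goal_name])          # costly planned execution
```

The division of labor is the point: goal selection is a constant-time table lookup, while only the chosen goal pays the cost of search.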
Positive feedback : exploring current approaches in iterative travel demand model implementation.
DOT National Transportation Integrated Search
2012-01-01
Currently, the models that TxDOT's Transportation Planning and Programming Division (TPP) developed are traditional three-step models (i.e., trip generation, trip distribution, and traffic assignment) that are sequentially applied. A limitation...
Sequential Design of Experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson-Cook, Christine Michaela
2017-06-30
A sequential design of experiments strategy is being developed and implemented that allows for adaptive learning based on incoming results as the experiment is being run. The plan is to incorporate these strategies for the NCCC and TCM experimental campaigns to be run in the coming months. This strategy for experimentation has the advantage of allowing new data collected during the experiment to inform future experimental runs based on their projected utility for a particular goal. For example, the current effort for the MEA capture system at NCCC plans to focus on maximally improving the quality of prediction of CO2 capture efficiency, as measured by the width of the confidence interval for the underlying response surface, which is modeled as a function of 1) flue gas flowrate [1000-3000] kg/hr; 2) CO2 weight fraction [0.125-0.175]; 3) lean solvent loading [0.1-0.3]; and 4) lean solvent flowrate [3000-12000] kg/hr.
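One standard way to realize "pick the next run by its projected utility" is to choose the candidate run where the current model's prediction is most uncertain (highest leverage under a linear model), which shrinks the widest confidence intervals first. The sketch below uses the abstract's four factor ranges but assumes a linear response model and random candidate runs purely for illustration.

```python
import numpy as np

def next_run(X_done, candidates, ridge=1e-6):
    """Pick the candidate x maximizing leverage x^T (X^T X)^{-1} x."""
    A = X_done.T @ X_done + ridge * np.eye(X_done.shape[1])
    Ainv = np.linalg.inv(A)
    lev = np.einsum("ij,jk,ik->i", candidates, Ainv, candidates)
    return int(np.argmax(lev))

rng = np.random.default_rng(0)
lo = np.array([1000.0, 0.125, 0.1, 3000.0])   # factor lower bounds
hi = np.array([3000.0, 0.175, 0.3, 12000.0])  # factor upper bounds
cand = rng.uniform(0.0, 1.0, size=(200, 4))   # candidate runs, normalized
X = cand[:5].copy()                           # five runs already completed
for _ in range(3):                            # schedule three more adaptively
    X = np.vstack([X, cand[next_run(X, cand)]])
print(lo + (hi - lo) * X[-1])                 # newest scheduled run, physical units
```

In a real campaign the "utility" would come from the fitted response-surface model and its confidence-interval widths rather than raw leverage, but the adaptive loop, fit, score candidates, run the best, refit, is the same.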
Lhakhang, Pempa; Gholami, Maryam; Knoll, Nina; Schwarzer, Ralf
2015-01-01
A sequential intervention to facilitate the adoption and maintenance of dental flossing was conducted among 205 students in India, aged 18-26 years. Two experimental groups received different treatment sequences and were observed at three assessment points, 34 days apart. One group first received a motivational intervention (intention, outcome expectancies, and risk perception), followed by a self-regulatory intervention (planning, self-efficacy, and action control). The second group received the same interventions in the opposite order. Both intervention sequences yielded gains in terms of flossing, planning, self-efficacy, and action control. However, at Time 2, those who had received the self-regulatory intervention first were superior to their counterparts who had received the motivational intervention first. At Time 3, differences vanished, as everyone had by then received both interventions. Thus, the findings highlight the benefits of a self-regulatory intervention compared to a merely motivational one.
dos Santos, Luciana B O; Infante, Carlos M C; Masini, Jorge C
2010-03-01
This work describes the development and optimization of a sequential injection method to automate the determination of paraquat by square-wave voltammetry employing a hanging mercury drop electrode. Automation by sequential injection enhanced the sampling throughput, improving the sensitivity and precision of the measurements as a consequence of the highly reproducible and efficient conditions of mass transport of the analyte toward the electrode surface. For instance, 212 analyses can be made per hour if the sample/standard solution is prepared off-line and the sequential injection system is used just to inject the solution towards the flow cell. In-line sample conditioning reduces the sampling frequency to 44 h(-1). Experiments were performed in 0.10 M NaCl, which was the carrier solution, using a frequency of 200 Hz, a pulse height of 25 mV, a potential step of 2 mV, and a flow rate of 100 µL s(-1). For a concentration range between 0.010 and 0.25 mg L(-1), the current (i(p), µA) read at the potential corresponding to the peak maximum fitted the following linear equation with the paraquat concentration (mg L(-1)): i(p) = (-20.5 ± 0.3)C (paraquat) - (0.02 ± 0.03). The limits of detection and quantification were 2.0 and 7.0 µg L(-1), respectively. The accuracy of the method was evaluated by recovery studies using spiked water samples that were also analyzed by molecular absorption spectrophotometry after reduction of paraquat with sodium dithionite in an alkaline medium. No evidence of statistically significant differences between the two methods was observed at the 95% confidence level.
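The calibration arithmetic reported above can be reproduced in a few lines: fit the linear calibration curve and derive the detection limits from the blank noise and the slope. The synthetic data points and the blank standard deviation below are assumptions chosen to be consistent with the paper's fitted line (slope −20.5 µA per mg/L) and its reported LOD/LOQ.

```python
import numpy as np

# Synthetic calibration points generated from the paper's fitted line,
# with an assumed noise level of 0.05 µA.
conc = np.array([0.010, 0.05, 0.10, 0.15, 0.20, 0.25])   # mg/L standards
rng = np.random.default_rng(42)
current = -20.5 * conc - 0.02 + rng.normal(0.0, 0.05, conc.size)  # i_p, µA

slope, intercept = np.polyfit(conc, current, 1)   # fit i_p = slope*C + intercept

# Usual 3s/10s definitions of LOD and LOQ from the blank SD s_b.
s_blank = 0.014                        # assumed blank SD (µA)
lod_mg_L = 3 * s_blank / abs(slope)    # ≈ 2 µg/L, matching the reported LOD
loq_mg_L = 10 * s_blank / abs(slope)   # ≈ 7 µg/L, matching the reported LOQ
```

This also shows why the stated linear range starts at 0.010 mg/L: the LOQ sits just below it.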
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, B.; Li, W.; Wang, G.
2007-07-01
Sequential acid leaching was used to leach minerals and the trace elements they contain. One-step leaching uses concentrated nitric acid as the solvent, while three-step leaching uses 5M hydrochloric acid, concentrated hydrofluoric acid, and concentrated hydrochloric acid as solvents. Sequential acid leaching by combined three- and one-step leaching was also examined. The results showed that one-step leaching could leach over 80% of the arsenic from coal samples, and could also leach mercury to a certain degree. During one-step leaching little chromium is removed, but it is leachable by three-step leaching; during sequential acid leaching by three- and one-step leaching, almost 98% of the ash is leached. The acid leaching results also give detailed information on the modes of occurrence of As, Cr, and Hg, which can be classified into silicate association, pyrite association, organic association, and carbonate and sulfate association. Over half of the chromium in the three coals is associated with organic matter and the rest with silicates. Arsenic and mercury are mainly associated with different mineral matters depending on the coal sample studied.
Task planning and control synthesis for robotic manipulation in space applications
NASA Technical Reports Server (NTRS)
Sanderson, A. C.; Peshkin, M. A.; Homem-De-mello, L. S.
1987-01-01
Space-based robotic systems for the diagnosis, repair, and assembly of systems will require new techniques of planning and manipulation to accomplish these complex tasks. Summarized are results of work in assembly task representation, discrete task planning, and control synthesis, which provide a design environment for flexible assembly systems in manufacturing applications and extend to the planning of manipulation operations in unstructured environments. Assembly planning is carried out using the AND/OR graph representation, which encompasses all possible partial orders of operations and may be used to plan assembly sequences. Discrete task planning uses the configuration map, which facilitates search over a space of discrete operation parameters in sequential operations in order to achieve required goals in the space of bounded configuration sets.
Pliego, Jorge; Mateos, Juan Carlos; Rodriguez, Jorge; Valero, Francisco; Baeza, Mireia; Femat, Ricardo; Camacho, Rosa; Sandoval, Georgina; Herrera-López, Enrique J.
2015-01-01
Lipases and esterases are biocatalysts used at the laboratory and industrial level. To obtain the maximum yield in a bioprocess, it is important to measure key variables, such as enzymatic activity. The conventional method for monitoring hydrolytic activity is to take out a sample from the bioreactor to be analyzed off-line at the laboratory. The disadvantage of this approach is the long time required to recover the information from the process, hindering the possibility to develop control systems. New strategies to monitor lipase/esterase activity are necessary. In this context and in the first approach, we proposed a lab-made sequential injection analysis system to analyze off-line samples from shake flasks. Lipase/esterase activity was determined using p-nitrophenyl butyrate as the substrate. The sequential injection analysis allowed us to measure the hydrolytic activity from a sample without dilution in a linear range from 0.05–1.60 U/mL, with the capability to reach sample dilutions up to 1000 times, a sampling frequency of five samples/h, with a kinetic reaction of 5 min and a relative standard deviation of 8.75%. The results are promising to monitor lipase/esterase activity in real time, in which optimization and control strategies can be designed. PMID:25633600
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process and the prior multivariate normal distributions of the model parameters are assumed to be specified, as are the prior probabilities of each model being correct. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, posterior probabilities of the models and posterior distributions of the parameters are computed. At each stage, an experiment was chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
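The sequential adaptive procedure described in this abstract can be sketched as follows; this is a minimal illustration in which the candidate models, noise level, stopping threshold, and data-generating mean are all assumptions, not values from the report. Each new observation updates the posterior model probabilities by Bayes' rule, and sampling stops once one model is sufficiently probable.

```python
import math
import random

random.seed(1)

# Two hypothetical candidate models of a process observed with known noise sd.
models = {"A": 1.0, "B": 1.5}   # each model predicts a different mean
sigma = 1.0                     # specified error sd of the process
post = {"A": 0.5, "B": 0.5}     # prior probabilities of each model being correct
threshold = 0.95                # terminate sampling once one model reaches this

def npdf(y, mu, s):
    """Normal likelihood of observation y under mean mu, sd s."""
    return math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2 * math.pi))

true_mean = 1.5                 # data actually come from model B
n = 0
while max(post.values()) < threshold and n < 1000:
    y = random.gauss(true_mean, sigma)          # one new observation
    n += 1
    # Bayes update: posterior proportional to prior times likelihood
    lik = {m: npdf(y, mu, sigma) for m, mu in models.items()}
    z = sum(post[m] * lik[m] for m in models)
    post = {m: post[m] * lik[m] / z for m in models}

# Upon termination, choose the model with the largest posterior probability.
chosen = max(post, key=post.get)
print(n, chosen)
```

The full procedure additionally chooses each experiment to maximize expected Kullback-Leibler information; here the "experiment" is fixed for brevity.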
A method to estimate the effect of deformable image registration uncertainties on daily dose mapping
Murphy, Martin J.; Salguero, Francisco J.; Siebers, Jeffrey V.; Staub, David; Vaman, Constantin
2012-01-01
Purpose: To develop a statistical sampling procedure for spatially-correlated uncertainties in deformable image registration and then use it to demonstrate their effect on daily dose mapping. Methods: Sequential daily CT studies are acquired to map anatomical variations prior to fractionated external beam radiotherapy. The CTs are deformably registered to the planning CT to obtain displacement vector fields (DVFs). The DVFs are used to accumulate the dose delivered each day onto the planning CT. Each DVF has spatially-correlated uncertainties associated with it. Principal components analysis (PCA) is applied to measured DVF error maps to produce decorrelated principal component modes of the errors. The modes are sampled independently and reconstructed to produce synthetic registration error maps. The synthetic error maps are convolved with dose mapped via deformable registration to model the resulting uncertainty in the dose mapping. The results are compared to the dose mapping uncertainty that would result from uncorrelated DVF errors that vary randomly from voxel to voxel. Results: The error sampling method is shown to produce synthetic DVF error maps that are statistically indistinguishable from the observed error maps. Spatially-correlated DVF uncertainties modeled by our procedure produce patterns of dose mapping error that are different from that due to randomly distributed uncertainties. Conclusions: Deformable image registration uncertainties have complex spatial distributions. The authors have developed and tested a method to decorrelate the spatial uncertainties and make statistical samples of highly correlated error maps. The sample error maps can be used to investigate the effect of DVF uncertainties on daily dose mapping via deformable image registration. An initial demonstration of this methodology shows that dose mapping uncertainties can be sensitive to spatial patterns in the DVF uncertainties. PMID:22320766
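The PCA-based error sampling the authors describe, in which decorrelated modes are sampled independently and reconstructed into spatially-correlated synthetic error maps, can be sketched roughly as below. The map dimensions, correlation length, and Gaussian mode coefficients are illustrative assumptions, not the paper's measured DVF data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a stack of observed DVF error maps: 50 maps x 200 voxels,
# generated here with smooth spatial correlation purely for illustration.
n_maps, n_vox = 50, 200
x = np.arange(n_vox)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 20.0)   # exponential kernel
obs = rng.multivariate_normal(np.zeros(n_vox), cov, size=n_maps)

# PCA via SVD of the centered observations yields decorrelated modes.
mean = obs.mean(axis=0)
u, s, vt = np.linalg.svd(obs - mean, full_matrices=False)
var = s ** 2 / (n_maps - 1)        # variance captured by each principal mode

# Sample each mode coefficient independently, then reconstruct one
# synthetic error map carrying the same spatial correlation structure.
coeff = rng.normal(0.0, np.sqrt(var))
synthetic = mean + coeff @ vt
print(synthetic.shape)
```

Convolving such synthetic maps with a mapped dose distribution, as the paper does, then propagates the registration uncertainty into the dose accumulation.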
Bye, Robin T; Neilson, Peter D
2010-10-01
Physiological tremor during movement is characterized by ∼10 Hz oscillation observed both in the electromyogram activity and in the velocity profile. We propose that this particular rhythm occurs as the direct consequence of a movement response planning system that acts as an intermittent predictive controller operating at discrete intervals of ∼100 ms. The BUMP model of response planning describes such a system. It forms the kernel of Adaptive Model Theory which defines, in computational terms, a basic unit of motor production or BUMP. Each BUMP consists of three processes: (1) analyzing sensory information, (2) planning a desired optimal response, and (3) execution of that response. These processes operate in parallel across successive sequential BUMPs. The response planning process requires a discrete-time interval in which to generate a minimum acceleration trajectory to connect the actual response with the predicted future state of the target and compensate for executional error. We have shown previously that a response planning time of 100 ms accounts for the intermittency observed experimentally in visual tracking studies and for the psychological refractory period observed in double stimulation reaction time studies. We have also shown that simulations of aimed movement, using this same planning interval, reproduce experimentally observed speed-accuracy tradeoffs and movement velocity profiles. Here we show, by means of a simulation study of constant velocity tracking movements, that employing a 100 ms planning interval closely reproduces the measurement discontinuities and power spectra of electromyograms, joint-angles, and angular velocities of physiological tremor reported experimentally. We conclude that intermittent predictive control through sequential operation of BUMPs is a fundamental mechanism of 10 Hz physiological tremor in movement. Copyright © 2010 Elsevier B.V. All rights reserved.
Kong, Anthony Pak-Hin
2009-01-01
Discourse produced by speakers with aphasia contains rich and valuable information for researchers to understand the manifestation of aphasia as well as for clinicians to plan specific treatment components for their clients. Various approaches to investigate aphasic discourse have been proposed in the English literature. However, this is not the case in Chinese. As a result, clinical evaluations of aphasic discourse have not been a common practice. This problem is further compounded by the lack of validated stimuli that are culturally appropriate for language elicitation. The purpose of this study was twofold: (a) to develop and validate four sequential pictorial stimuli for elicitation of language samples in Cantonese speakers with aphasia, and (b) to investigate the use of a main concept measurement, a clinically oriented quantitative system, to analyze the elicited language samples. Twenty speakers with aphasia and ten normal speakers were invited to participate in this study. The aphasic group produced significantly less key information than the normal group. More importantly, a strong relationship was also found between aphasia severity and production of main concepts. While the results of the inter-rater and intra-rater reliability suggested the scoring system to be reliable, the test-retest results yielded strong and significant correlations across two testing sessions one to three weeks apart. Readers will demonstrate better understanding of (1) the development and validation of newly devised sequential pictorial stimuli to elicit oral language production, and (2) the use of a main concept measurement to quantify aphasic connected speech in Cantonese Chinese.
USDA-ARS?s Scientific Manuscript database
Effective Salmonella control in broilers is important from the standpoint of both consumer protection and industry viability. We investigated associations between Salmonella recovery from different sample types collected at sequential stages of one grow-out from the broiler flock and production env...
ERIC Educational Resources Information Center
Bain, Sherry K.
1993-01-01
Analysis of Kaufman Assessment Battery for Children (K-ABC) Sequential and Simultaneous Processing scores of 94 children (ages 6-12) with learning disabilities produced factor patterns generally supportive of the traditional K-ABC Mental Processing structure with the exception of Spatial Memory. The sample exhibited relative processing strengths…
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
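Sequential decision bias of the kind described above is easy to reproduce in a toy simulation: if a second study is funded only when the first looks promising, the pooled estimate is biased even though every individual study is unbiased. The continuation rule and effect sizes below are assumptions for illustration, not taken from the paper.

```python
import random
import statistics

random.seed(0)
true_effect = 0.0                   # no real effect
reps = 20000
finals = []
for _ in range(reps):
    x1 = random.gauss(true_effect, 1.0)     # unbiased estimate from study 1
    if x1 > 0:                              # "promising" -> conduct study 2
        x2 = random.gauss(true_effect, 1.0)
        finals.append((x1 + x2) / 2)        # fixed-effect pool, equal weights
    else:
        finals.append(x1)                   # evidence base stays at study 1

# The decision to continue depended on the current estimate, so the
# average synthesized effect drifts away from the true value of zero.
bias = statistics.fmean(finals) - true_effect
print(round(bias, 3))
```

Here high first estimates get pulled back toward zero by an added study while low ones never do, leaving a negative net bias of roughly -0.2 in this setup.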
Su, Chun-Lung; Gardner, Ian A; Johnson, Wesley O
2004-07-30
The two-test two-population model, originally formulated by Hui and Walter, for estimation of test accuracy and prevalence estimation assumes conditionally independent tests, constant accuracy across populations and binomial sampling. The binomial assumption is incorrect if all individuals in a population e.g. child-care centre, village in Africa, or a cattle herd are sampled or if the sample size is large relative to population size. In this paper, we develop statistical methods for evaluating diagnostic test accuracy and prevalence estimation based on finite sample data in the absence of a gold standard. Moreover, two tests are often applied simultaneously for the purpose of obtaining a 'joint' testing strategy that has either higher overall sensitivity or specificity than either of the two tests considered singly. Sequential versions of such strategies are often applied in order to reduce the cost of testing. We thus discuss joint (simultaneous and sequential) testing strategies and inference for them. Using the developed methods, we analyse two real and one simulated data sets, and we compare 'hypergeometric' and 'binomial-based' inferences. Our findings indicate that the posterior standard deviations for prevalence (but not sensitivity and specificity) based on finite population sampling tend to be smaller than their counterparts for infinite population sampling. Finally, we make recommendations about how small the sample size should be relative to the population size to warrant use of the binomial model for prevalence estimation. Copyright 2004 John Wiley & Sons, Ltd.
Lin, Yu-Pin; Chu, Hone-Jay; Wang, Cheng-Long; Yu, Hsiao-Hsuan; Wang, Yung-Chieh
2009-01-01
This study applies variogram analyses of normalized difference vegetation index (NDVI) images derived from SPOT HRV images obtained before and after the ChiChi earthquake in the Chenyulan watershed, Taiwan, as well as images after four large typhoons, to delineate the spatial patterns, spatial structures and spatial variability of landscapes caused by these large disturbances. The conditional Latin hypercube sampling approach was applied to select samples from multiple NDVI images. Kriging and sequential Gaussian simulation with sufficient samples were then used to generate maps of NDVI images. The variography of NDVI image results demonstrate that spatial patterns of disturbed landscapes were successfully delineated by variogram analysis in study areas. The high-magnitude Chi-Chi earthquake created spatial landscape variations in the study area. After the earthquake, the cumulative impacts of typhoons on landscape patterns depended on the magnitudes and paths of typhoons, but were not always evident in the spatiotemporal variability of landscapes in the study area. The statistics and spatial structures of multiple NDVI images were captured by 3,000 samples from 62,500 grids in the NDVI images. Kriging and sequential Gaussian simulation with the 3,000 samples effectively reproduced spatial patterns of NDVI images. However, the proposed approach, which integrates the conditional Latin hypercube sampling approach, variogram, kriging and sequential Gaussian simulation in remotely sensed images, efficiently monitors, samples and maps the effects of large chronological disturbances on spatial characteristics of landscape changes including spatial variability and heterogeneity.
Songthong, Anussara P; Kannarunimit, Danita; Chakkabat, Chakkapong; Lertbutsayanukul, Chawalit
2015-08-08
To investigate acute and late toxicities comparing sequential (SEQ-IMRT) versus simultaneous integrated boost intensity modulated radiotherapy (SIB-IMRT) in nasopharyngeal carcinoma (NPC) patients. Newly diagnosed stage I-IVB NPC patients were randomized to receive SEQ-IMRT or SIB-IMRT, with or without chemotherapy. SEQ-IMRT consisted of two sequential radiation treatment plans: 2 Gy x 25 fractions to low-risk planning target volume (PTV-LR) followed by 2 Gy x 10 fractions to high-risk planning target volume (PTV-HR). In contrast, SIB-IMRT consisted of only one treatment plan: 2.12 Gy and 1.7 Gy x 33 fractions to PTV-HR and PTV-LR, respectively. Toxicities were evaluated according to CTCAE version 4.0. Between October 2010 and November 2013, 122 eligible patients were randomized between SEQ-IMRT (54 patients) and SIB-IMRT (68 patients). With median follow-up time of 16.8 months, there was no significant difference in toxicities between the two IMRT techniques. During chemoradiation, the most common grade 3-5 acute toxicities were mucositis (15.4% vs 13.6%, SEQ vs SIB, p = 0.788) followed by dysphagia (9.6% vs 9.1%, p = 1.000) and xerostomia (9.6% vs 7.6%, p = 0.748). During the adjuvant chemotherapy period, 25.6% and 32.7% experienced grade 3 weight loss in SEQ-IMRT and SIB-IMRT (p = 0.459). One-year overall survival (OS) and progression-free survival (PFS) were 95.8% and 95.5% in SEQ-IMRT and 98% and 90.2% in SIB-IMRT, respectively (p = 0.472 for OS and 0.069 for PFS). This randomized, phase II/III trial comparing SIB-IMRT versus SEQ-IMRT in NPC showed no statistically significant difference between both IMRT techniques in terms of acute adverse events. Short-term tumor control and survival outcome were promising.
Development of a syringe pump assisted dynamic headspace sampling technique for needle trap device.
Eom, In-Yong; Niri, Vadoud H; Pawliszyn, Janusz
2008-07-04
This paper describes a new approach that combines needle trap devices (NTDs) with a dynamic headspace sampling technique (purge and trap) using a bidirectional syringe pump. The needle trap device is a 22-G stainless steel needle 3.5-in. long packed with divinylbenzene sorbent particles. The same sized needle, without packing, was used for purging purposes. We chose an aqueous mixture of benzene, toluene, ethylbenzene, and p-xylene (BTEX) and developed a sequential purge and trap (SPNT) method, in which sampling (trapping) and purging cycles were performed sequentially by the use of syringe pump with different distribution channels. In this technique, a certain volume (1 mL) of headspace was sequentially sampled using the needle trap; afterwards, the same volume of air was purged into the solution at a high flow rate. The proposed technique showed an effective extraction compared to the continuous purge and trap technique, with a minimal dilution effect. Method evaluation was also performed by obtaining the calibration graphs for aqueous BTEX solutions in the concentration range of 1-250 ng/mL. The developed technique was compared to the headspace solid-phase microextraction method for the analysis of aqueous BTEX samples. Detection limits as low as 1 ng/mL were obtained for BTEX by NTD-SPNT.
Moehler, Tobias; Fiehler, Katja
2015-11-01
Saccade curvature represents a sensitive measure of oculomotor inhibition with saccades curving away from covertly attended locations. Here we investigated whether and how saccade curvature depends on movement preparation time when a perceptual task is performed during or before saccade preparation. Participants performed a dual-task including a visual discrimination task at a cued location and a saccade task to the same location (congruent) or to a different location (incongruent). Additionally, we varied saccade preparation time (time between saccade cue and Go-signal) and the occurrence of the discrimination task (during saccade preparation=simultaneous vs. before saccade preparation=sequential). We found deteriorated perceptual performance in incongruent trials during simultaneous task performance while perceptual performance was unaffected during sequential task performance. Saccade accuracy and precision were deteriorated in incongruent trials during simultaneous and, to a lesser extent, also during sequential task performance. Saccades consistently curved away from covertly attended non-saccade locations. Saccade curvature was unaffected by movement preparation time during simultaneous task performance but decreased and finally vanished with increasing movement preparation time during sequential task performance. Our results indicate that the competing saccade plan to the covertly attended non-saccade location is maintained during simultaneous task performance until the perceptual task is solved while in the sequential condition, in which the discrimination task is solved prior to the saccade task, oculomotor inhibition decays gradually with movement preparation time. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Economou, A.; Tzanavaras, P. D.; Themelis, D. G.
2005-01-01
The sequential-injection analysis (SIA) is an approach to sample handling that enables the automation of manual wet-chemistry procedures in a rapid, precise, and efficient manner. Experiments using SIA fit well into the course of Instrumental Chemical Analysis, especially in the section on Automatic Methods of Analysis provided by chemistry…

Propagating probability distributions of stand variables using sequential Monte Carlo methods
Jeffrey H. Gove
2009-01-01
A general probabilistic approach to stand yield estimation is developed based on sequential Monte Carlo filters, also known as particle filters. The essential steps in the development of the sampling importance resampling (SIR) particle filter are presented. The SIR filter is then applied to simulated and observed data showing how the 'predictor - corrector'...
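The SIR particle filter's 'predictor-corrector' cycle mentioned above can be sketched for a one-dimensional state as follows; the state model, noise levels, and particle count are placeholder assumptions, not the stand-variable setup of the paper.

```python
import math
import random

random.seed(0)

N = 500                                # number of particles
q, r = 0.5, 1.0                        # process / observation noise sd
particles = [random.gauss(0.0, 1.0) for _ in range(N)]

def sir_step(particles, y):
    # predictor: propagate each particle through the state model
    pred = [p + random.gauss(0.0, q) for p in particles]
    # corrector: weight each particle by the observation likelihood
    w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in pred]
    tot = sum(w)
    w = [wi / tot for wi in w]
    # resample in proportion to the weights (the "importance resampling" step)
    return random.choices(pred, weights=w, k=N)

truth = 0.0
for _ in range(30):                    # 30 noisy observations of a fixed state
    y = random.gauss(truth, r)
    particles = sir_step(particles, y)

est = sum(particles) / N               # posterior mean estimate
print(round(est, 2))
```

After a handful of steps the particle cloud concentrates near the true state; in the forestry application the same machinery propagates whole probability distributions of stand variables forward in time.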
ERIC Educational Resources Information Center
Kaufman, Alan S.; Kamphaus, Randy W.
1984-01-01
The construct validity of the Sequential Processing, Simultaneous Processing and Achievement scales of the Kaufman Assessment Battery for Children was supported by factor-analytic investigations of a representative national stratified sample of 2,000 children. Correlations provided insight into the relationship of sequential/simultaneous…
Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J
2009-01-01
Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037
Chen, Connie; Gribble, Matthew O; Bartroff, Jay; Bay, Steven M; Goldstein, Larry
2017-05-01
The United States' Clean Water Act stipulates in section 303(d) that states must identify impaired water bodies for which total maximum daily loads (TMDLs) of pollution inputs into water bodies are developed. Decision-making procedures about how to list, or delist, water bodies as impaired, or not, per Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to be measured to have comparable Type I and Type II error rates as the current fixed-sample binomial test. Policymakers might consider efficient alternatives such as SPRT to current procedure. Copyright © 2017 Elsevier Ltd. All rights reserved.
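A Wald-style binomial SPRT of the kind evaluated in this paper can be sketched as follows; the hypothesized exceedance rates p0 and p1 and the error rates are placeholder values, not the regulatory thresholds. Each sample is scored as exceeding the standard (1) or not (0), the log-likelihood ratio is accumulated, and testing stops as soon as it crosses either boundary.

```python
import math

def sprt_binomial(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald SPRT for an exceedance rate: H0: p = p0 vs H1: p = p1 (> p0).
    Returns (decision, number of samples used)."""
    a = math.log((1 - beta) / alpha)        # upper boundary: accept H1
    b = math.log(beta / (1 - alpha))        # lower boundary: accept H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # x = 1 if the sample exceeds the water-quality standard, else 0
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= a:
            return "accept H1", n
        if llr <= b:
            return "accept H0", n
    return "continue", len(samples)

# A run of clean samples reaches the lower boundary before all 20 are used,
# illustrating the sample-size savings over a fixed-sample binomial test.
decision, n = sprt_binomial([0] * 20, p0=0.10, p1=0.25)
print(decision, n)
```

With these boundaries each clean sample contributes log(0.75/0.9) ≈ -0.182 to the log-likelihood ratio, so H0 is accepted after 17 samples rather than waiting for a fixed sample of 20 or more.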
Sequential injection spectrophotometric determination of oxybenzone in lipsticks.
Salvador, A; Chisvert, A; Camarasa, A; Pascual-Martí, M C; March, J G
2001-08-01
A sequential injection (SI) procedure for the spectrophotometric determination of oxybenzone in lipsticks is reported. The colorimetric reaction between nickel and oxybenzone was used. SI parameters such as sample solution volume, reagent solution volume, propulsion flow rate and reaction coil length were studied. The limit of detection was 3 microg ml(-1). The sensitivity was 0.0108+/-0.0002 ml microg(-1). The relative standard deviations of the results were between 6 and 12%. The real concentrations of samples and the values obtained by HPLC were comparable. Microwave sample pre-treatment allowed the extraction of oxybenzone with ethanol, thus avoiding the use of toxic organic solvents. Ethanol was also used as carrier in the SI system. Seventy-two injections per hour can be performed, which means a sample frequency of 24 h(-1) if three replicates are measured for each sample.
Modeling sustainability in renewable energy supply chain systems
NASA Astrophysics Data System (ADS)
Xie, Fei
This dissertation aims at modeling the sustainability of renewable fuel supply chain systems against emerging challenges. In particular, it focuses on biofuel supply chain system design and develops advanced modeling frameworks and corresponding solution methods for tackling the challenges of sustaining biofuel supply chain systems. These challenges include: (1) integrating "environmental thinking" into long-term biofuel supply chain planning; (2) adopting multimodal transportation to mitigate seasonality in biofuel supply chain operations; (3) providing strategies for hedging against uncertainty in conversion technology; and (4) developing methodologies for long-term sequential planning of the biofuel supply chain under uncertainties. All models are mixed integer programs, involving multi-objective programming and two-stage/multistage stochastic programming methods. In particular, for long-term sequential planning under uncertainties, to reduce the computational challenge posed by the exponential expansion of the scenario tree, I also developed an efficient ND-Max method that outperforms CPLEX and the Nested Decomposition method. Through result analysis of four independent studies, it is found that the proposed modeling frameworks can effectively improve economic performance, enhance environmental benefits, and reduce risks due to system uncertainties for biofuel supply chain systems.
ERIC Educational Resources Information Center
O'Donnell, John F.
1968-01-01
Traditional English curriculums are giving way to new English programs built on the foundations of research and scholarship. The "new" English, being developed by the Project English Centers throughout the country, attempts to utilize the characteristic structure of the subject to plan sequential and spiral curriculums replacing outdated…
Ochiai, Nobuo; Tsunokawa, Jun; Sasamoto, Kikuo; Hoffmann, Andreas
2014-12-05
A novel multi-volatile method (MVM) using sequential dynamic headspace (DHS) sampling for analysis of aroma compounds in aqueous sample was developed. The MVM consists of three different DHS method parameters sets including choice of the replaceable adsorbent trap. The first DHS sampling at 25 °C using a carbon-based adsorbent trap targets very volatile solutes with high vapor pressure (>20 kPa). The second DHS sampling at 25 °C using the same type of carbon-based adsorbent trap targets volatile solutes with moderate vapor pressure (1-20 kPa). The third DHS sampling using a Tenax TA trap at 80 °C targets solutes with low vapor pressure (<1 kPa) and/or hydrophilic characteristics. After the 3 sequential DHS samplings using the same HS vial, the three traps are sequentially desorbed with thermal desorption in reverse order of the DHS sampling and the desorbed compounds are trapped and concentrated in a programmed temperature vaporizing (PTV) inlet and subsequently analyzed in a single GC-MS run. Recoveries of the 21 test aroma compounds for each DHS sampling and the combined MVM procedure were evaluated as a function of vapor pressure in the range of 0.000088-120 kPa. The MVM provided very good recoveries in the range of 91-111%. The method showed good linearity (r2>0.9910) and high sensitivity (limit of detection: 1.0-7.5 ng mL(-1)) even with MS scan mode. The feasibility and benefit of the method was demonstrated with analysis of a wide variety of aroma compounds in brewed coffee. Ten potent aroma compounds from top-note to base-note (acetaldehyde, 2,3-butanedione, 4-ethyl guaiacol, furaneol, guaiacol, 3-methyl butanal, 2,3-pentanedione, 2,3,5-trimethyl pyrazine, vanillin, and 4-vinyl guaiacol) could be identified together with an additional 72 aroma compounds. Thirty compounds including 9 potent aroma compounds were quantified in the range of 74-4300 ng mL(-1) (RSD<10%, n=5). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Sequential growth factor application in bone marrow stromal cell ligament engineering.
Moreau, Jodie E; Chen, Jingsong; Horan, Rebecca L; Kaplan, David L; Altman, Gregory H
2005-01-01
In vitro bone marrow stromal cell (BMSC) growth may be enhanced through culture medium supplementation, mimicking the biochemical environment in which cells optimally proliferate and differentiate. We hypothesize that the sequential administration of growth factors to first proliferate and then differentiate BMSCs cultured on silk fiber matrices will support the enhanced development of ligament tissue in vitro. Confluent second passage (P2) BMSCs obtained from purified bone marrow aspirates were seeded on RGD-modified silk matrices. Seeded matrices were divided into three groups for 5 days of static culture, with medium supplement of basic fibroblast growth factor (B) (1 ng/mL), epidermal growth factor (E; 1 ng/mL), or growth factor-free control (C). After day 5, medium supplementation was changed to transforming growth factor-beta1 (T; 5 ng/mL) or C for an additional 9 days of culture. Real-time RT-PCR, SEM, MTT, histology, and ELISA for collagen type I of all sample groups were performed. Results indicated that BT supported the greatest cell ingrowth after 14 days of culture in addition to the greatest cumulative collagen type I expression measured by ELISA. Sequential growth factor application promoted significant increases in collagen type I transcript expression from day 5 of culture to day 14, for five of six groups tested. All T-supplemented samples surpassed their respective control samples in both cell ingrowth and collagen deposition. All samples supported spindle-shaped, fibroblast cell morphology, aligning with the direction of silk fibers. These findings indicate significant in vitro ligament development after only 14 days of culture when using a sequential growth factor approach.
Colyar, Jessica M; Eggett, Dennis L; Steele, Frost M; Dunn, Michael L; Ogden, Lynn V
2009-09-01
The relative sensitivity of side-by-side and sequential monadic consumer liking protocols was compared. In the side-by-side evaluation, all samples were presented at once and evaluated together 1 characteristic at a time. In the sequential monadic evaluation, 1 sample was presented and evaluated on all characteristics, then returned before panelists received and evaluated another sample. Evaluations were conducted on orange juice, frankfurters, canned chili, potato chips, and applesauce. Five commercial brands, having a broad quality range, were selected as samples for each product category to assure a wide array of consumer liking scores. Without their knowledge, panelists rated the same 5 retail brands by 1 protocol and then 3 wk later by the other protocol. For 3 of the products, both protocols yielded the same order of overall liking. Slight differences in order of overall liking for the other 2 products were not significant. Of the 50 pairwise overall liking comparisons, 44 were in agreement. The different results obtained by the 2 protocols in order of liking and significance of paired comparisons were due to the experimental variation and differences in sensitivity. Hedonic liking scores were subjected to statistical power analyses and used to calculate minimum number of panelists required to achieve varying degrees of sensitivity when using side-by-side and sequential monadic protocols. In most cases, the side-by-side protocol was more sensitive, thus providing the same information with fewer panelists. Side-by-side protocol was less sensitive in cases where sensory fatigue was a factor.
Filleron, Thomas; Gal, Jocelyn; Kramar, Andrew
2012-10-01
A major and difficult task is the design of clinical trials with a time-to-event endpoint. In fact, it is necessary to compute the number of events and, in a second step, the required number of patients. Several commercial software packages are available for computing sample size in clinical trials with sequential designs and time-to-event endpoints, but few R functions are implemented. The purpose of this paper is to describe the features and use of the R function plansurvct.func, an add-on function to the package gsDesign that permits, in one run of the program, calculation of the number of events and required sample size, as well as the boundaries and corresponding p-values for a group sequential design. The use of plansurvct.func is illustrated by several examples and validated using East software. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Sequential solvent extraction for forms of antimony in five selected coals
Qi, C.; Liu, Gaisheng; Kong, Y.; Chou, C.-L.; Wang, R.
2008-01-01
Abundance of antimony in bulk samples has been determined in five selected coals: three coals from Huaibei Coalfield, Anhui, China, and two from the Illinois Basin in the United States. The Sb abundance in these samples is in the range of 0.11-0.43 μg/g. The forms of Sb in coals were studied by sequential solvent extraction. The six forms of Sb are water soluble, ion-exchangeable, organic matter bound, carbonate bound, silicate bound, and sulfide bound. Results of sequential extraction show that silicate-bound Sb is the most abundant form in these coals. Silicate- plus sulfide-bound Sb accounts for more than half of the total Sb in all coals. Bituminous coals are higher in organic matter-bound Sb than anthracite and natural coke, indicating that the Sb in the organic matter may be incorporated into silicate and sulfide minerals during metamorphism. © 2008 by The University of Chicago. All rights reserved.
von Gunten, Konstantin; Alam, Md Samrat; Hubmann, Magdalena; Ok, Yong Sik; Konhauser, Kurt O; Alessi, Daniel S
2017-07-01
A modified Community Bureau of Reference (CBR) sequential extraction method was tested to assess the composition of untreated pyrogenic carbon (biochar) and oil sands petroleum coke. Wood biochar samples were found to contain lower concentrations of metals, but had higher fractions of easily mobilized alkaline earth and transition metals. Sewage sludge biochar was determined to be less recalcitrant and had higher total metal concentrations, with most of the metals found in the more resilient extraction fractions (oxidizable, residual). Petroleum coke was the most stable material, with a similar metal distribution pattern as the sewage sludge biochar. The applied sequential extraction method represents a suitable technique to recover metals from these materials, and is a valuable tool in understanding the metal retaining and leaching capability of various biochar types and carbonaceous petroleum coke samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
del Río, Vanessa; Larrechi, M Soledad; Callao, M Pilar
2010-06-15
A new concept of flow titration is proposed and demonstrated for the determination of total acidity in plant oils and biodiesel. We use sequential injection analysis (SIA) with a diode array spectrophotometric detector linked to chemometric tools such as multivariate curve resolution-alternating least squares (MCR-ALS). This system is based on the evolution of the basic species of an acid-base indicator, alizarine, when it comes into contact with a sample that contains free fatty acids. The gradual pH change in the reactor coil due to diffusion and reaction phenomena allows the sequential appearance of both species of the indicator in the detector coil, recording a data matrix for each sample. The SIA-MCR-ALS method helps to reduce the amounts of sample, reagents and time consumed. Each determination consumes 0.413 ml of sample, 0.250 ml of indicator and 3 ml of carrier (ethanol) and generates 3.333 ml of waste. The analysis frequency is high (12 samples h(-1) including all steps, i.e., cleaning, preparing and analysing). The reagents are in common laboratory use, and it is not necessary for their concentrations to be known exactly. The method was applied to determine acidity in plant oil and biodiesel samples. Results obtained by the proposed method compare well with those obtained by the official European Community method, which is time consuming and uses large amounts of organic solvents.
SOS: A Time Management Framework.
ERIC Educational Resources Information Center
Rees, Ruth
1986-01-01
Proposes a time management framework for those working within social service institutions such as schools. Explains three sequential and interdependent groups of recommendations: (1) self-awareness and self-discipline; (2) organizational awareness and a synchronization of both the institution and the individual; and (3) planning, timetabling,…
Outdoor Education Guide-Handbook, Waukesha Public Schools.
ERIC Educational Resources Information Center
Vitale, Joseph A.
Designed by the Waukesha Public Schools (Wisconsin) specifically for an elementary level three-day camping trip at Camp Phantom Lake, this outdoor education guide presents some activities which suggest adaptation. Activity directions, plans, worksheets, evaluation sheets, and illustrations are presented in sequential order for the following…
Rosende, Maria; Savonina, Elena Yu; Fedotov, Petr S; Miró, Manuel; Cerdà, Víctor; Wennrich, Rainer
2009-09-15
Dynamic fractionation has been recognized as an appealing alternative to conventional equilibrium-based sequential extraction procedures (SEPs) for partitioning of trace elements (TE) in environmental solid samples. This paper reports the first attempt at harmonization of flow-through dynamic fractionation using two novel methods, the so-called sequential injection microcolumn (SIMC) extraction and rotating coiled column (RCC) extraction. In SIMC extraction, a column packed with the solid sample is clustered in a sequential injection system, while in RCC extraction, the particulate matter is retained under the action of centrifugal forces. In both methods, the leachants are continuously pumped through the solid substrates by the use of either peristaltic or syringe pumps. A five-step SEP was selected for partitioning of Cu, Pb and Zn into water soluble/exchangeable, acid-soluble, easily reducible, easily oxidizable and moderately reducible fractions from 0.2 to 0.5 g samples at an extractant flow rate of 1.0 mL min(-1) prior to leachate analysis by inductively coupled plasma-atomic emission spectrometry. Similarities and discrepancies between the two dynamic approaches were ascertained by fractionation of TE in certified reference materials, namely, SRM 2711 Montana Soil and GBW 07311 sediment, as well as two real soil samples. Notwithstanding the different extraction conditions set by the two methods, similar trends of metal distribution were generally found. The most critical parameters for reliable assessment of mobilizable pools of TE in worst-case scenarios are the size distribution of sample particles, the density of particles, the content of organic matter and the concentration of major elements. For reference materials and a soil rich in organic matter, extraction in RCC results in slightly higher recoveries of environmentally relevant fractions of TE, whereas SIMC leaching is more effective for calcareous soils.
ERIC Educational Resources Information Center
Chen, Chin-Chih; McComas, Jennifer J.; Hartman, Ellie; Symons, Frank J.
2011-01-01
Research Findings: In early childhood education, the social ecology of the child is considered critical for healthy behavioral development. There is, however, relatively little information based on directly observing what children do that describes the moment-by-moment (i.e., sequential) relation between physical aggression and peer rejection acts…
Ku, Yixuan; Zhao, Di; Hao, Ning; Hu, Yi; Bodner, Mark; Zhou, Yong-Di
2015-01-01
Both monkey neurophysiological and human EEG studies have shown that association cortices, as well as primary sensory cortical areas, play an essential role in sequential neural processes underlying cross-modal working memory. The present study aims to further examine causal and sequential roles of the primary sensory cortex and association cortex in cross-modal working memory. Individual MRI-based single-pulse transcranial magnetic stimulation (spTMS) was applied to bilateral primary somatosensory cortices (SI) and the contralateral posterior parietal cortex (PPC), while participants were performing a tactile-visual cross-modal delayed matching-to-sample task. Time points of spTMS were 300 ms, 600 ms, 900 ms after the onset of the tactile sample stimulus in the task. The accuracy of task performance and reaction time were significantly impaired when spTMS was applied to the contralateral SI at 300 ms. Significant impairment on performance accuracy was also observed when the contralateral PPC was stimulated at 600 ms. SI and PPC play sequential and distinct roles in neural processes of cross-modal associations and working memory. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tfaily, Malak M; Chu, Rosalie K; Toyoda, Jason; Tolić, Nikola; Robinson, Errol W; Paša-Tolić, Ljiljana; Hess, Nancy J
2017-06-15
A vast number of organic compounds are present in soil organic matter (SOM) and play an important role in the terrestrial carbon cycle, facilitate interactions between organisms, and represent a sink for atmospheric CO2. The diversity of different SOM compounds and their molecular characteristics is a function of the organic source material and biogeochemical history. By understanding how SOM composition changes with sources and the processes by which it is biogeochemically altered in different terrestrial ecosystems, it may be possible to predict nutrient and carbon cycling, response to system perturbations, and the impact climate change will have on SOM composition. In this study, a sequential chemical extraction procedure was developed to reveal the diversity of organic matter (OM) in different ecosystems and was compared to a previously published protocol using parallel solvent extraction (PSE). We compared six extraction methods using three sample types (peat soil, spruce forest soil and river sediment) so as to select the best method for extracting a representative fraction of organic matter from soils and sediments from a wide range of ecosystems. We estimated the extraction yield of dissolved organic carbon (DOC) by total organic carbon analysis, and measured the composition of extracted OM using high resolution mass spectrometry. This study showed that OM composition depends primarily on soil and sediment characteristics. Two sequential extraction protocols, progressing from polar to non-polar solvents, were found to provide the highest number and diversity of organic compounds extracted from the soil and sediments. Water (H2O) is the first solvent used for both protocols, followed by either co-extraction with a methanol-chloroform (MeOH-CHCl3) mixture, or acetonitrile (ACN) and CHCl3 sequentially.
The sequential extraction protocol developed in this study offers improved sensitivity and requires less sample compared to the PSE workflow, where a new sample is used for each solvent type. Furthermore, a comparison of SOM composition from the different sample types revealed that our sequential protocol allows for ecosystem comparisons based on the diversity of compounds present, which in turn could provide new insights about the source and processing of organic compounds in different soil and sediment types. Copyright © 2017 Elsevier B.V. All rights reserved.
Automation of a flocculation test for syphilis on Groupamatic equipment.
Garretta, M; Paris-Hamelin, A; Gener, J; Muller, A; Matte, C; Vaisman, A
1975-01-01
A flocculation reaction employing a cardiolipid antigen was used for syphilis screening on Groupamatic equipment in parallel with conventional screening reactions: Kolmer CF, RPCF, Kahn, Kline, and RPR. The positive samples were confirmed by FTA-200, FTA-ABS, TPI, and in some cases by TPHA. There were 5,212 known samples which had already been tested by all methods and of which 1,648 were positive, and 58,636 screened samples including 65 positives. Half of the samples in the first series were taken without anticoagulant; the remainder were collected in potassium EDTA. The percentage of false positives with the Groupamatic was about 1.4 per cent. The percentage of false negatives among positive (≥+) samples varied from 0.18 to 1.3 per cent; on the other hand, sensitivity was poorer for samples giving doubtful and/or dissociated reactions in conventional screening reactions. The specificity and sensitivity of this technique are acceptable for a blood transfusion centre. The reproducibility is excellent and the automatic reading of results accurate. Additional advantages are rapidity (340 samples processed per hour); simultaneous performance of eleven other immunohaematological reactions; no contamination between samples; automatic reading, interpretation, and print-out of results; and saving of time because samples are not filed sequentially and are automatically identified when the results are obtained. Although the importance of syphilis in blood transfusion seems small, estimates of the risk are difficult and further investigations are planned. PMID:1098731
Electronic and Vibrational Spectra of InP Quantum Dots Formed by Sequential Ion Implantation
NASA Technical Reports Server (NTRS)
Hall, C.; Mu, R.; Tung, Y. S.; Ueda, A.; Henderson, D. O.; White, C. W.
1997-01-01
We have performed sequential ion implantation of indium and phosphorus into silica, combined with controlled thermal annealing, to fabricate InP quantum dots in a dielectric host. Electronic and vibrational spectra were measured for the as-implanted and annealed samples. The annealed samples show a peak in the infrared spectra near 320 cm(-1), which is attributed to a surface phonon mode and is in good agreement with the value calculated from Fröhlich's theory of surface phonon polaritons. The electronic spectra show the development of a band near 390 nm that is attributed to quantum confined InP.
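The surface phonon frequency predicted by Fröhlich's theory for a small polar sphere embedded in a dielectric host can be written as follows. The abstract does not give the dielectric constants used, so the expression is left symbolic; here \(\epsilon_0\) and \(\epsilon_\infty\) are the static and high-frequency dielectric constants of InP, \(\omega_{TO}\) its transverse optical phonon frequency, and \(\epsilon_m\) the dielectric constant of the silica matrix.

```latex
% Fröhlich surface phonon mode of a small sphere in a host medium,
% from the resonance condition \epsilon(\omega_{SP}) = -2\epsilon_m:
\omega_{SP} = \omega_{TO}\,
  \sqrt{\frac{\epsilon_0 + 2\epsilon_m}{\epsilon_\infty + 2\epsilon_m}}
```

Because \(\epsilon_0 > \epsilon_\infty\), the mode always lies between \(\omega_{TO}\) and \(\omega_{LO}\), consistent with the observed peak near 320 cm(-1).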
Spatial-dependence recurrence sample entropy
NASA Astrophysics Data System (ADS)
Pham, Tuan D.; Yan, Hong
2018-03-01
Measuring complexity in terms of the predictability of time series is a major area of research in science and engineering, and its applications are spreading throughout many scientific disciplines, where the analysis of physiological signals is perhaps the most widely reported in the literature. Sample entropy is a popular measure for quantifying signal irregularity. However, sample entropy does not take sequential information, which is inherently useful, into account in its calculation of sample similarity. Here, we develop a method that is based on the mathematical principle of the sample entropy and enables the capture of sequential information of a time series in the context of spatial dependence provided by the binary-level co-occurrence matrix of a recurrence plot. Experimental results on time-series data of the Lorenz system, physiological signals of gait maturation in healthy children, and gait dynamics in Huntington's disease show the potential of the proposed method.
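The baseline measure the method builds on can be sketched in pure Python. This is standard SampEn template matching, not the authors' spatial-dependence extension; the defaults m = 2 and r = 0.2 are conventional choices.

```python
import math


def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r): negative log of the conditional probability that
    templates matching for m points (Chebyshev distance <= r * SD)
    also match for m + 1 points. Self-matches are excluded."""
    n = len(x)
    mean = sum(x) / n
    sd = (sum((v - mean) ** 2 for v in x) / n) ** 0.5
    tol = r * sd

    def matches(length):
        # Use the same n - m starting positions for both template lengths
        # so the m and m+1 counts are directly comparable.
        t = [x[i:i + length] for i in range(n - m)]
        return sum(
            max(abs(a - b) for a, b in zip(t[i], t[j])) <= tol
            for i in range(len(t)) for j in range(i + 1, len(t))
        )

    b, a = matches(m), matches(m + 1)
    return -math.log(a / b) if a and b else float("inf")


# A perfectly periodic series is maximally regular: SampEn = 0.
print(sample_entropy([1, 2] * 8))  # -> 0.0
```

The point the abstract makes is visible here: the Chebyshev comparison treats each template as an unordered bag of pointwise differences, so the order in which matches occur along the series (its sequential structure) never enters the count.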
Lawson, Chris A
2014-07-01
Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.
Two-step sequential pretreatment for the enhanced enzymatic hydrolysis of coffee spent waste.
Ravindran, Rajeev; Jaiswal, Swarna; Abu-Ghannam, Nissreen; Jaiswal, Amit K
2017-09-01
In the present study, eight different pretreatments of varying nature (physical, chemical and physico-chemical) followed by a sequential, combinatorial pretreatment strategy was applied to spent coffee waste to attain maximum sugar yield. Pretreated samples were analysed for total reducing sugar, individual sugars and generation of inhibitory compounds such as furfural and hydroxymethyl furfural (HMF) which can hinder microbial growth and enzyme activity. Native spent coffee waste was high in hemicellulose content. Galactose was found to be the predominant sugar in spent coffee waste. Results showed that sequential pretreatment yielded 350.12 mg of reducing sugar/g of substrate, which was 1.7-fold higher than in native spent coffee waste (203.4 mg/g of substrate). Furthermore, extensive delignification was achieved using the sequential pretreatment strategy. XRD, FTIR, and DSC profiles of the pretreated substrates were studied to analyse the various changes incurred in sequentially pretreated spent coffee waste as opposed to native spent coffee waste. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOT National Transportation Integrated Search
2011-01-01
Travel demand modeling plays a key role in the transportation system planning and evaluation process. The four-step sequential travel demand model is the most widely used technique in practice. Traffic assignment is the key step in the conventional f...
A Guide to Curriculum Planning in Foreign Language.
ERIC Educational Resources Information Center
Wisconsin State Dept. of Public Instruction, Madison.
A guide designed to help local curriculum planners develop and implement curriculums to provide all students with equal access to foreign languages provides an overview of current philosophies, objectives, methods, materials, and equipment and a guide to sequential program development, articulation, and evaluation. An introductory section…
A heuristic for landscape management
Martín Alfonso B. Mendoza; Jesús S. Zepeta; Juan José A. Fajardo
2006-01-01
The development of landscape ecology has stressed the importance of spatial and sequential relationships as explanations of forest stand dynamics and of other natural environments. This presentation offers a specific design that introduces spatial considerations into forest planning with the idea of regulating fragmentation and connectivity in commercial forest...
Cluster: Carpentry. Course: Carpentry. Research Project.
ERIC Educational Resources Information Center
Sanford - Lee County Schools, NC.
The course on carpentry is divided into 14 sequential units, with several task packages within each, covering the following topics: carpentry hand tools; portable power tools; working machine tools; lumber; fasteners and adhesives; plans, specifications, and codes for houses; footings and foundations for a house; household cabinets; floor framing…
Concept Marketing of Educational Products and Services--The RDAS Approach.
ERIC Educational Resources Information Center
Green, Robert G.
The Relating, Discovering, Advocating, and Supporting (RDAS) notion of concept selling is applied to the marketing of products and services emanating from educational research and development. The four RDAS categories are further divided into 12 sequential and interdependent tasks: Client Identification; Fact Finding; Planning; Establishing…
The Relationship between Digit Span and Cognitive Processing Across Ability Groups.
ERIC Educational Resources Information Center
Schofield, Neville J.; Ashman, Adrian F.
1986-01-01
The relationship between forward and backward digit span and basic cognitive processes was examined. Subjects were administered measures of sequential processing, simultaneous processing, and planning. Correlational analyses indicated the serial processing character of forward digit span, and the relationship between backward digit span and…
Retrospective robustness of the continual reassessment method.
O'Quigley, John; Zohar, Sarah
2010-09-01
We study model sensitivity of the continual reassessment method (CRM). The context is that of dose-finding designs where certain design parameters are fixed by the investigator. Although our focus is on the CRM (O'Quigley et al., 1990), the essential ideas can be applied to any sequential dose-finding method. It is expected that different choices of a model family and particular parameterizations will have an impact on performance. Assuming that the constraints outlined in Shen and O'Quigley (1996) are respected, large sample performance is unaffected. However small sample performance will be affected by these choices, which are to some degree arbitrary. This work focuses on the retrospective robustness of the CRM in practice. The question is not of a general theoretical nature where, in the background, we would want to consider large numbers of true potential situations. Instead, the question is raised in the specific context of any actual completed study and is the following: Would we have come to the same conclusion concerning the MTD had we worked with a design specified differently? The sequential nature of the CRM means that this question cannot be answered in any definitive way. We can, though, by appealing to the retrospective CRM (O'Quigley, 2005), provide consistent estimates of the relationships between the MTD and the chosen model. If these estimates suggest that changes in different family model parameters will be accompanied by changes in final recommendation, then we would not be confident in the reliability of the estimated MTD and more work would be needed. Also, of course, at the planning stage, prospective robustness could be studied by simulating trials using particular models and parameterizations.
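The kind of dose-recommendation step the CRM performs at each point in a trial can be sketched as follows. This is a minimal one-parameter power-model version with a grid-approximated posterior, not the authors' implementation; the skeleton, prior SD, and target toxicity rate are assumed for illustration, and are exactly the sort of "design parameters fixed by the investigator" whose influence the paper studies.

```python
import math


def crm_next_dose(skeleton, doses_given, tox_observed, target=0.25):
    """One CRM update. Power model: p_i(a) = skeleton[i] ** exp(a),
    with a normal(0, 1.34**2) prior on a (a common default choice).
    The posterior over a is approximated on a grid; the recommended
    dose is the one whose posterior-mean toxicity is closest to
    `target`. `doses_given` are dose indices; `tox_observed` are 0/1."""
    grid = [-3 + 6 * k / 400 for k in range(401)]
    sigma2 = 1.34 ** 2
    post = []
    for a in grid:
        ll = -a * a / (2 * sigma2)  # log prior, up to a constant
        for d, y in zip(doses_given, tox_observed):
            p = skeleton[d] ** math.exp(a)
            ll += math.log(p) if y else math.log(1 - p)
        post.append(math.exp(ll))
    z = sum(post)
    w = [v / z for v in post]  # normalized posterior weights
    est = [sum(wi * skeleton[i] ** math.exp(a) for wi, a in zip(w, grid))
           for i in range(len(skeleton))]
    return min(range(len(skeleton)), key=lambda i: abs(est[i] - target))


# Hypothetical skeleton of prior toxicity guesses for 5 dose levels.
skeleton = [0.05, 0.10, 0.20, 0.30, 0.50]
# Three toxicities at the lowest dose drive the recommendation down.
print(crm_next_dose(skeleton, [0, 0, 0], [1, 1, 1]))  # -> 0
```

Rerunning such an update with a different skeleton or prior SD, as the retrospective CRM does, shows directly whether the final MTD recommendation is sensitive to those arbitrary choices.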
The possibility of application of spiral brain computed tomography to traumatic brain injury.
Lim, Daesung; Lee, Soo Hoon; Kim, Dong Hoon; Choi, Dae Seub; Hong, Hoon Pyo; Kang, Changwoo; Jeong, Jin Hee; Kim, Seong Chun; Kang, Tae-Sin
2014-09-01
Spiral computed tomography (CT), with the advantages of low radiation dose, shorter scan time, and multidimensional reconstruction, is accepted as an essential diagnostic method for evaluating the degree of injury in severe trauma patients and establishing therapeutic plans. However, conventional sequential CT is preferred over spiral CT for the evaluation of traumatic brain injury (TBI) due to image noise and artifact. We aimed to compare the diagnostic power of spiral facial CT for TBI to that of conventional sequential brain CT. We retrospectively evaluated the images of 315 trauma patients who underwent both brain CT and facial CT simultaneously. Hemorrhagic traumatic brain injuries such as epidural hemorrhage, subdural hemorrhage, subarachnoid hemorrhage, and contusional hemorrhage were evaluated in both images. Statistics were performed using Cohen's κ to compare the agreement between the 2 imaging modalities, along with the sensitivity, specificity, positive predictive value, and negative predictive value of spiral facial CT relative to conventional sequential brain CT. Almost perfect agreement was noted for hemorrhagic traumatic brain injuries between spiral facial CT and conventional sequential brain CT (Cohen's κ coefficient, 0.912). Relative to conventional sequential brain CT, the sensitivity, specificity, positive predictive value, and negative predictive value of spiral facial CT were 92.2%, 98.1%, 95.9%, and 96.3%, respectively. In TBI, the diagnostic power of spiral facial CT was equal to that of conventional sequential brain CT. Therefore, expanded spiral facial CT covering the whole frontal lobe can be applied to evaluate TBI in the future. Copyright © 2014 Elsevier Inc. All rights reserved.
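The agreement statistics reported here can be computed from a 2x2 table as sketched below. The cell counts used in the example are hypothetical, chosen only to roughly reproduce the reported figures; the raw counts are not given in the abstract.

```python
def diagnostic_agreement(tp, fp, fn, tn):
    """Cohen's kappa plus sensitivity, specificity, PPV and NPV for a
    2x2 comparison of a test modality (rows) against a reference
    modality (columns): tp/fp on row 1, fn/tn on row 2."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                                # observed agreement
    pe = ((tp + fp) * (tp + fn)                       # chance agreement:
          + (fn + tn) * (fp + tn)) / n ** 2           # row x column margins
    return {
        "kappa": (po - pe) / (1 - pe),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }


# Hypothetical counts for 315 patients (not the study's actual table):
res = diagnostic_agreement(tp=95, fp=4, fn=8, tn=208)
```

With these assumed counts, kappa lands near 0.91 and sensitivity near 92%, in the range the abstract reports; the point of the sketch is only to make explicit how kappa corrects raw agreement for chance via the marginal totals.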
EVALUATION OF VAPOR EQUILIBRATION AND IMPACT OF PURGE VOLUME ON SOIL-GAS SAMPLING RESULTS
Sequential sampling was utilized at the Raymark Superfund site to evaluate attainment of vapor equilibration and the impact of purge volume on soil-gas sample results. A simple mass-balance equation indicates that removal of three to five internal volumes of a sample system shou...
Vichapong, Jitlada; Burakham, Rodjana; Srijaranai, Supalax; Grudpan, Kate
2011-07-01
A sequential injection-bead injection-lab-on-valve system was hyphenated to HPLC for online renewable micro-solid-phase extraction of carbamate insecticides. The carbamates studied were isoprocarb, methomyl, carbaryl, carbofuran, methiocarb, promecarb, and propoxur. LiChroprep(®) RP-18 beads (25-40 μm) were employed as renewable sorbent packing in a microcolumn situated inside the LOV platform mounted above the multiposition valve of the sequential injection system. The analytes sorbed by the microcolumn were eluted using 80% acetonitrile in 0.1% acetic acid before online introduction to the HPLC system. Separation was performed on an Atlantis C-18 column (4.6 × 150 mm, 5 μm) utilizing gradient elution with a flow rate of 1.0 mL/min and a detection wavelength at 270 nm. The sequential injection system offers the means of performing automated handling of sample preconcentration and matrix removal. The enrichment factors ranged between 20 and 125, leading to limits of detection (LODs) in the range of 1-20 μg/L. Good reproducibility was obtained with relative standard deviations of <0.7 and 5.4% for retention time and peak area, respectively. The developed method has been successfully applied to the determination of carbamate residues in fruit, vegetable, and water samples. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Gaudry, Adam J; Nai, Yi Heng; Guijt, Rosanne M; Breadmore, Michael C
2014-04-01
A dual-channel sequential injection microchip capillary electrophoresis system with pressure-driven injection is demonstrated for simultaneous separations of anions and cations from a single sample. The poly(methyl methacrylate) (PMMA) microchips feature integral in-plane contactless conductivity detection electrodes. A novel, hydrodynamic "split-injection" method utilizes background electrolyte (BGE) sheathing to gate the sample flows, while control over the injection volume is achieved by balancing hydrodynamic resistances using external hydrodynamic resistors. Injection is realized by a unique flow-through interface, allowing for automated, continuous sampling for sequential injection analysis by microchip electrophoresis. The developed system was very robust, with individual microchips used for up to 2000 analyses with lifetimes limited by irreversible blockages of the microchannels. The unique dual-channel geometry was demonstrated by the simultaneous separation of three cations and three anions in individual microchannels in under 40 s with limits of detection (LODs) ranging from 1.5 to 24 μM. From a series of 100 sequential injections the %RSDs were determined for every fifth run, resulting in %RSDs for migration times that ranged from 0.3 to 0.7 (n = 20) and 2.3 to 4.5 for peak area (n = 20). This system offers low LODs and a high degree of reproducibility and robustness while the hydrodynamic injection eliminates electrokinetic bias during injection, making it attractive for a wide range of rapid, sensitive, and quantitative online analytical applications.
A novel approach for small sample size family-based association studies: sequential tests.
Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan
2011-08-01
In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
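Wald's SPRT, on which this approach is based, can be sketched for a Bernoulli proportion as follows. This is the generic test, not the authors' TDT-based genetic implementation; p0, p1, and the error rates below are illustrative.

```python
import math


def sprt_binomial(p0, p1, alpha=0.05, beta=0.05):
    """Wald SPRT for Bernoulli data, H0: p = p0 vs H1: p = p1 (p1 > p0).
    Returns a decision function mapping (successes, trials) to
    'accept H1', 'accept H0', or 'continue' (i.e., keep sampling --
    the third group the abstract describes for SNPs)."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0

    def decide(successes, trials):
        # Cumulative log-likelihood ratio after `trials` observations.
        llr = (successes * math.log(p1 / p0)
               + (trials - successes) * math.log((1 - p1) / (1 - p0)))
        if llr >= upper:
            return "accept H1"
        if llr <= lower:
            return "accept H0"
        return "continue"

    return decide


decide = sprt_binomial(0.5, 0.8)
print(decide(10, 10))  # -> accept H1
print(decide(5, 8))    # -> continue
```

The appeal for small-sample association studies is visible in the three-way output: unlike a fixed-sample TDT, ambiguous SNPs are explicitly flagged as needing more data rather than being forced into one of two bins.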
Automatic sequential fluid handling with multilayer microfluidic sample isolated pumping
Liu, Jixiao; Fu, Hai; Yang, Tianhang; Li, Songjing
2015-01-01
To sequentially handle fluids is of great significance in quantitative biology, analytical chemistry, and bioassays. However, the technological options are limited when building such microfluidic sequential processing systems, and one of the encountered challenges is the need for reliable, efficient, and mass-production available microfluidic pumping methods. Herein, we present a bubble-free and pumping-control unified liquid handling method that is compatible with large-scale manufacture, termed multilayer microfluidic sample isolated pumping (mμSIP). The core part of the mμSIP is the selective permeable membrane that isolates the fluidic layer from the pneumatic layer. The air diffusion from the fluidic channel network into the degassing pneumatic channel network leads to fluidic channel pressure variation, which further results in consistent bubble-free liquid pumping into the channels and the dead-end chambers. We characterize the mμSIP by comparing the fluidic actuation processes with different parameters and a flow rate range of 0.013 μl/s to 0.097 μl/s is observed in the experiments. As the proof of concept, we demonstrate an automatic sequential fluid handling system aiming at digital assays and immunoassays, which further proves the unified pumping-control and suggests that the mμSIP is suitable for functional microfluidic assays with minimal operations. We believe that the mμSIP technology and demonstrated automatic sequential fluid handling system would enrich the microfluidic toolbox and benefit further inventions. PMID:26487904
Mining reflective continuing medical education data for family physician learning needs.
Lewis, Denice Colleen; Pluye, Pierre; Rodriguez, Charo; Grad, Roland
2016-04-06
A mixed methods study (sequential explanatory design) examined the potential of mining data from the consumers of continuing medical education (CME) programs for the developers of CME programs. The quantitative data generated by family physicians, through applying the information assessment method to CME content, were presented to key informants from the CME planning community in a qualitative description study. The data were found to have many potential applications, including supporting the creation of CME content, CME program planning, and personal learning portfolios.
Li, Bo; Li, Hao; Dong, Li; Huang, Guofu
2017-11-01
In this study, we sought to investigate the feasibility of fast carotid artery MR angiography (MRA) by combining three-dimensional time-of-flight (3D TOF) with a compressed sensing method (CS-3D TOF). A pseudo-sequential phase encoding order was developed for CS-3D TOF to generate hyper-intense vessel signal and suppress background tissues in under-sampled 3D k-space. Seven healthy volunteers and one patient with carotid artery stenosis were recruited for this study. Five sequential CS-3D TOF scans were implemented at 1, 2, 3, 4 and 5-fold acceleration factors for carotid artery MRA. Blood signal-to-tissue ratio (BTR) values for fully-sampled and under-sampled acquisitions were calculated and compared in seven subjects. Blood area (BA) was measured and compared between the fully-sampled acquisition and each under-sampled one. There were no significant differences in BTR between the fully-sampled dataset and any under-sampled dataset (P>0.05 for all comparisons). The carotid vessel BAs measured from the images of CS-3D TOF sequences at 2, 3, 4 and 5-fold acceleration were all highly correlated with those of the fully-sampled acquisition. The contrast between blood vessels and background tissues of the images at 2 to 5-fold acceleration is comparable to that of fully-sampled images, and the images at 2× to 5× exhibit lumen definition comparable to the corresponding images at 1×. By combining the pseudo-sequential phase encoding order, CS reconstruction, and the 3D TOF sequence, this technique provides excellent visualization of the carotid vessels and calcifications in a short scan time. It has the potential to be integrated into current multiple blood contrast imaging protocols. Copyright © 2017. Published by Elsevier Inc.
Structural drift: the population dynamics of sequential learning.
Crutchfield, James P; Whalen, Sean
2012-01-01
We introduce a theory of sequential causal inference in which learners in a chain estimate a structural model from their upstream "teacher" and then pass samples from the model to their downstream "student". It extends the population dynamics of genetic drift, recasting Kimura's selectively neutral theory as a special case of a generalized drift process using structured populations with memory. We examine the diffusion and fixation properties of several drift processes and propose applications to learning, inference, and evolution. We also demonstrate how the organization of drift process space controls fidelity, facilitates innovations, and leads to information loss in sequential learning with and without memory.
NASA Astrophysics Data System (ADS)
Oliveira, J. M.; Carvalho, F. P.
2006-01-01
A sequential extraction technique was developed and tested for common naturally-occurring radionuclides. This technique allows the extraction and purification of uranium, thorium, radium, lead, and polonium radionuclides from the same sample. Environmental materials such as water, soil, and biological samples can be analyzed for those radionuclides without matrix interferences in the quality of radioelement purification and in the radiochemical yield. The use of isotopic tracers (232U, 229Th, 224Ra, 209Po, and stable lead carrier) added to the sample in the beginning of the chemical procedure, enables an accurate control of the radiochemical yield for each radioelement. The ion extraction procedure, applied after either complete dissolution of the solid sample with mineral acids or co-precipitation of dissolved radionuclide with MnO2 for aqueous samples, includes the use of commercially available pre-packed columns from Eichrom® and ion exchange columns packed with Bio-Rad resins, in altogether three chromatography columns. All radioactive elements but one are purified and electroplated on stainless steel discs. Polonium is spontaneously plated on a silver disc. The discs are measured using high resolution silicon surface barrier detectors. 210Pb, a beta emitter, can be measured either through the beta emission of 210Bi, or stored for a few months and determined by alpha spectrometry through the in-growth of 210Po. This sequential extraction chromatography technique was tested and validated with the analysis of certified reference materials from the IAEA. Reproducibility was tested through repeated analysis of the same homogeneous material (water sample).
Mynatt, Robert; Hale, Shane A; Gill, Ruth M; Plontke, Stefan K; Salt, Alec N
2006-06-01
Local applications of drugs to the inner ear are increasingly being used to treat patients' inner ear disorders. Knowledge of the pharmacokinetics of drugs in the inner ear fluids is essential for a scientific basis for such treatments. When auditory function is of primary interest, the drug's kinetics in scala tympani (ST) must be established. Measurement of drug levels in ST is technically difficult because of the known contamination of perilymph samples taken from the basal cochlear turn with cerebrospinal fluid (CSF). Recently, we reported a technique in which perilymph was sampled from the cochlear apex to minimize the influence of CSF contamination (J. Neurosci. Methods, doi:10.1016/j.jneumeth.2005.10.008). This technique has now been extended by taking smaller fluid samples sequentially from the cochlear apex, which can be used to quantify drug gradients along ST. The sampling and analysis methods were evaluated using an ionic marker, trimethylphenylammonium (TMPA), that was applied to the round window membrane. After loading perilymph with TMPA, ten 1-μL samples were taken from the cochlear apex. The TMPA content of the samples was consistent with the first sample containing perilymph from apical regions and the fourth or fifth sample containing perilymph from the basal turn. TMPA concentration decreased in subsequent samples, as they increasingly contained CSF that had passed through ST. Sample concentration curves were interpreted quantitatively by simulation of the experiment with a finite element model and by an automated curve-fitting method by which the apical-basal gradient was estimated. The study demonstrates that sequential apical sampling provides drug gradient data for ST perilymph while avoiding the major distortions of sample composition associated with basal turn sampling.
The method can be used for any substance for which a sensitive assay is available and is therefore of high relevance for the development of preclinical and clinical strategies for local drug delivery to the inner ear.
How Many Children? Dilemmas of Family Planning.
ERIC Educational Resources Information Center
Avgar, Amy
As a follow-up to a conference on Jewish population growth, two focus groups of young couples explored personal factors that motivate childbearing decisions. Couples reported that their decisions about how many children to have evolved sequentially, and depended on specific experiences with the first child and each additional child. Couples…
ERIC Educational Resources Information Center
Ingham, Donald
1995-01-01
Describes a long-term scheme to develop a pond, nature trail, and tree-planting project (in Cornwall, England). The project was designed by teams of students. Plans included a large pond, meadow area, sequential cuttings of school fields to encourage insects, butterfly garden, extensive tree plantings (including a dwindling native species), and a…
Making Health Communication Programs Work. A Planner's Guide.
ERIC Educational Resources Information Center
Arkin, Elaine Bratic
This manual, designed to assist professionals in health and health-related agencies, offers guidance for planning a health communication program about cancer based on social marketing and other principles as well as the experiences of National Cancer Institute staff and other practitioners. The six chapters are arranged by sequentially ordered…
Sequential use of simulation and optimization in analysis and planning
Hans R. Zuuring; Jimmie D. Chew; J. Greg Jones
2000-01-01
Management activities are analyzed at landscape scales employing both simulation and optimization. SIMPPLLE, a stochastic simulation modeling system, is initially applied to assess the risks associated with a specific natural process occurring on the current landscape without management treatments, but with fire suppression. These simulation results are input into...
Sequence and Uniformity in the High School Literature Program.
ERIC Educational Resources Information Center
Sauer, Edwin H.
A good, sequential literature program for secondary school students should deal simultaneously with literary forms, with the chronological development of literature, and with broad themes of human experience. By employing the abundance of teaching aids, texts, and improved foreign translations available today, an imaginatively planned program can…
Simultaneous and Successive Processes and K-ABC.
ERIC Educational Resources Information Center
Das, J. P.
1984-01-01
The article mentions six basic statements about sequential and simultaneous processes which are derived from A. Luria's clinical research. The Kaufman Assessment Battery for Children is then judged in terms of these statements. Suggestions for constructing tests which will entail planning as well as simultaneous and successive measures are…
ISSUES RELATED TO SOLUTION CHEMISTRY IN MERCURY SAMPLING IMPINGERS
Analysis of mercury (Hg) speciation in combustion flue gases is often accomplished in standardized sampling trains in which the sample is passed sequentially through a series of aqueous solutions to capture and separate oxidized Hg (Hg²⁺) and elemental Hg (Hg⁰). Such methods incl...
Multilevel Mixture Kalman Filter
NASA Astrophysics Data System (ADS)
Guo, Dong; Wang, Xiaodong; Chen, Rong
2004-12-01
The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with the highest-level sampling space and then drawing samples from the associated subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with delayed estimation methods, such as the delayed-sample method, resulting in the delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.
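The two-level draw described above can be illustrated for a 16-symbol indicator space (as in 16-QAM) partitioned into four quadrants of four symbols each. This is a sketch under our own layout assumptions, not the paper's code: first a quadrant is drawn in proportion to its total importance weight, then a symbol within that quadrant.

```python
import random

def multilevel_sample(weights, rng=random):
    """Draw one indicator (symbol index 0..15) in two levels:
    first a quadrant, then a symbol within it, mirroring the
    multilevel sampling scheme described above (illustrative sketch).
    `weights` are 16 nonnegative importance weights, one per symbol."""
    quads = [weights[4 * q:4 * q + 4] for q in range(4)]
    quad_w = [sum(q) for q in quads]
    # level 1: pick a quadrant proportional to its total weight
    q = rng.choices(range(4), weights=quad_w, k=1)[0]
    # level 2: pick a symbol inside the chosen quadrant
    s = rng.choices(range(4), weights=quads[q], k=1)[0]
    return 4 * q + s
```

Because the product of the two stage probabilities equals the directly normalized symbol weight, the hierarchical draw targets the same distribution as sampling all 16 symbols at once, while only ever scoring small candidate sets at each level.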
Bayes factor design analysis: Planning for compelling evidence.
Schönbrodt, Felix D; Wagenmakers, Eric-Jan
2018-02-01
A sizeable literature exists on the use of frequentist power analysis in the null-hypothesis significance testing (NHST) paradigm to facilitate the design of informative experiments. In contrast, there is almost no literature that discusses the design of experiments when Bayes factors (BFs) are used as a measure of evidence. Here we explore Bayes Factor Design Analysis (BFDA) as a useful tool to design studies for maximum efficiency and informativeness. We elaborate on three possible BF designs: (a) a fixed-n design; (b) an open-ended Sequential Bayes Factor (SBF) design, where researchers can test after each participant and can stop data collection whenever there is strong evidence for either the null or the alternative hypothesis; and (c) a modified SBF design that defines a maximal sample size at which data collection is stopped regardless of the current state of evidence. We demonstrate how the properties of each design (i.e., expected strength of evidence, expected sample size, expected probability of misleading evidence, expected probability of weak evidence) can be evaluated using Monte Carlo simulations and equip researchers with the necessary information to compute their own Bayesian design analyses.
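A Monte Carlo evaluation of the open-ended SBF design can be sketched with a simple conjugate model in which the Bayes factor has a closed form. The normal model, the 1/10 and 10 stopping bounds, and the n_max cap below are our illustrative choices, not the authors' settings.

```python
import math
import random

def sbf_trial(delta, bf_bound=10.0, n_max=500, rng=random):
    """One open-ended Sequential Bayes Factor run (sketch).
    Model: x_i ~ N(delta, 1); H0: delta = 0; H1: delta ~ N(0, 1) prior,
    giving the closed form BF10 = exp(s^2 / (2(n+1))) / sqrt(n+1)
    for s = sum of the observations. Returns (decision, sample size)."""
    s, n = 0.0, 0
    while n < n_max:
        s += rng.gauss(delta, 1.0)
        n += 1
        bf10 = math.exp(s * s / (2 * (n + 1))) / math.sqrt(n + 1)
        if bf10 >= bf_bound:
            return "H1", n
        if bf10 <= 1.0 / bf_bound:
            return "H0", n
    return "inconclusive", n

# Design analysis under H0: rate of misleading evidence and typical N
rng = random.Random(1)
decisions = [sbf_trial(0.0, rng=rng)[0] for _ in range(100)]
false_pos = decisions.count("H1") / 100
```

Repeating such runs under different true effects gives exactly the design properties listed in the abstract: expected sample size, and the probabilities of misleading, weak, and strong evidence.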
Multi-arm group sequential designs with a simultaneous stopping rule.
Urach, S; Posch, M
2016-12-30
Multi-arm group sequential clinical trials are efficient designs to compare multiple treatments to a control. They allow one to test for treatment effects already in interim analyses and can have a lower average sample number than fixed sample designs. Their operating characteristics depend on the stopping rule: We consider simultaneous stopping, where the whole trial is stopped as soon as for any of the arms the null hypothesis of no treatment effect can be rejected, and separate stopping, where only recruitment to arms for which a significant treatment effect could be demonstrated is stopped, but the other arms are continued. For both stopping rules, the family-wise error rate can be controlled by the closed testing procedure applied to group sequential tests of intersection and elementary hypotheses. The group sequential boundaries for the separate stopping rule also control the family-wise error rate if the simultaneous stopping rule is applied. However, we show that for the simultaneous stopping rule, one can apply improved, less conservative stopping boundaries for local tests of elementary hypotheses. We derive corresponding improved Pocock and O'Brien type boundaries as well as optimized boundaries to maximize the power or average sample number and investigate the operating characteristics and small sample properties of the resulting designs. To control the power to reject at least one null hypothesis, the simultaneous stopping rule requires a lower average sample number than the separate stopping rule. This comes at the cost of a lower power to reject all null hypotheses. Some of this loss in power can be regained by applying the improved stopping boundaries for the simultaneous stopping rule. The procedures are illustrated with clinical trials in systemic sclerosis and narcolepsy. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Piatak, N.M.; Seal, R.R.; Sanzolone, R.F.; Lamothe, P.J.; Brown, Z.A.
2006-01-01
We report the preliminary results of sequential partial dissolutions used to characterize the geochemical distribution of selenium in stream sediments, mine wastes, and flotation-mill tailings. In general, extraction schemes are designed to extract metals associated with operationally defined solid phases. Total Se concentrations and the mineralogy of the samples are also presented. Samples were obtained from the Elizabeth, Ely, and Pike Hill mines in Vermont, the Callahan mine in Maine, and the Martha mine in New Zealand. These data are presented here with minimal interpretation or discussion. Further analysis of the data will be presented elsewhere.
Stefan-van Staden, Raluca-Ioana; Bokretsion, Rahel Girmai; van Staden, Jacobus F; Aboul-Enein, Hassan Y
2006-01-01
Carbon paste based biosensors for the determination of creatine and creatinine have been integrated into a sequential injection analysis (SIA) system. By applying the multi-enzyme sequence of creatininase (CA) and/or creatinase (CI) and sarcosine oxidase (SO), hydrogen peroxide is detected amperometrically. The linear concentration ranges are of pmol/L to nmol/L magnitude, with very low limits of detection. The proposed SIA system can be utilized reliably for the on-line simultaneous detection of creatine and creatinine in pharmaceutical products, as well as in serum samples, at a rate of 34 samples per hour and with RSD values better than 0.16% (n=10).
Sequential Sampling Models in Cognitive Neuroscience: Advantages, Applications, and Extensions.
Forstmann, B U; Ratcliff, R; Wagenmakers, E-J
2016-01-01
Sequential sampling models assume that people make speeded decisions by gradually accumulating noisy information until a threshold of evidence is reached. In cognitive science, one such model--the diffusion decision model--is now regularly used to decompose task performance into underlying processes such as the quality of information processing, response caution, and a priori bias. In the cognitive neurosciences, the diffusion decision model has recently been adopted as a quantitative tool to study the neural basis of decision making under time pressure. We present a selective overview of several recent applications and extensions of the diffusion decision model in the cognitive neurosciences.
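The gradual accumulation-to-threshold process at the heart of these models can be sketched with a simple Euler simulation of a diffusion decision trial. The parameter values below are arbitrary illustrations, not estimates from any study.

```python
import random

def ddm_trial(drift=0.5, bound=1.0, dt=0.001, noise=1.0, rng=random):
    """One diffusion-decision-model trial (illustrative sketch):
    evidence starts at 0 and accumulates with the given drift plus
    Gaussian noise until it reaches +bound ('upper' response) or
    -bound ('lower' response). Returns the choice and the decision
    time in seconds."""
    x, t = 0.0, 0.0
    step_sd = noise * dt ** 0.5   # per-step noise scales with sqrt(dt)
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return ("upper" if x >= bound else "lower"), t
```

With symmetric bounds and no bias, the probability of an upper response has the closed form 1 / (1 + exp(-2·drift·bound/noise²)), so simulated choice fractions can be checked against theory; fitting such a model to observed choices and response times is what "decomposing task performance" refers to in the abstract.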
Radiation detection method and system using the sequential probability ratio test
Nelson, Karl E [Livermore, CA; Valentine, John D [Redwood City, CA; Beauchamp, Brock R [San Ramon, CA
2007-07-17
A method and system using the Sequential Probability Ratio Test to enhance the detection of an elevated level of radiation, by determining whether a set of observations are consistent with a specified model within a given bounds of statistical significance. In particular, the SPRT is used in the present invention to maximize the range of detection, by providing processing mechanisms for estimating the dynamic background radiation, adjusting the models to reflect the amount of background knowledge at the current point in time, analyzing the current sample using the models to determine statistical significance, and determining when the sample has returned to the expected background conditions.
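The core of such a detector can be sketched as an SPRT on a stream of Poisson counts. This omits the patent's dynamic background estimation and model adjustment; the rates and error levels below are illustrative assumptions, not the patent's values.

```python
import math

def radiation_sprt(counts, bg_rate=10.0, elevated_rate=15.0,
                   alpha=0.01, beta=0.01):
    """SPRT on Poisson counts per time bin (illustrative sketch).
    H0: mean count = bg_rate (background); H1: mean = elevated_rate.
    Accumulates the log-likelihood ratio until one bound is crossed."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for k in counts:
        # Poisson log-likelihood ratio for one observed count k
        llr += k * math.log(elevated_rate / bg_rate) - (elevated_rate - bg_rate)
        if llr >= upper:
            return "alarm"          # consistent with elevated radiation
        if llr <= lower:
            return "background"     # returned to expected background
    return "keep_monitoring"
```

The "background" outcome corresponds to the patent's notion of determining when the sample has returned to expected background conditions, while the unterminated case keeps the detector sampling.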
Silverman, Rachel K; Ivanova, Anastasia
2017-01-01
Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and then, placebo non-responders are re-randomized in stage 2. Efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility to re-estimate the sample size and adjust the design parameters, allocation proportion to placebo in stage 1 of SPCD, and weight of stage 1 data in the overall efficacy test statistic during an interim analysis.
Che, W W; Frey, H Christopher; Lau, Alexis K H
2016-08-16
A sequential measurement method is demonstrated for quantifying the variability in exposure concentration during public transportation. This method was applied in Hong Kong by measuring PM2.5 and CO concentrations along a route connecting 13 transportation-related microenvironments within 3-4 h. The study design takes into account ventilation, proximity to local sources, area-wide air quality, and meteorological conditions. Portable instruments were compacted into a backpack to facilitate measurement under crowded transportation conditions and to quantify personal exposure by sampling at nose level. The route included stops next to three roadside monitors to enable comparison of fixed site and exposure concentrations. PM2.5 exposure concentrations were correlated with the roadside monitors, despite differences in averaging time, detection method, and sampling location. Although highly correlated in temporal trend, PM2.5 concentrations varied significantly among microenvironments, with mean concentration ratios versus roadside monitor ranging from 0.5 for MTR train to 1.3 for bus terminal. Measured inter-run variability provides insight regarding the sample size needed to discriminate between microenvironments with increased statistical significance. The study results illustrate the utility of sequential measurement of microenvironments and policy-relevant insights for exposure mitigation and management.
R. L. Czaplewski
2009-01-01
The minimum variance multivariate composite estimator is a relatively simple sequential estimator for complex sampling designs (Czaplewski 2009). Such designs combine a probability sample of expensive field data with multiple censuses and/or samples of relatively inexpensive multi-sensor, multi-resolution remotely sensed data. Unfortunately, the multivariate composite...
Sequential single shot X-ray photon correlation spectroscopy at the SACLA free electron laser
Lehmkühler, Felix; Kwaśniewski, Paweł; Roseker, Wojciech; ...
2015-11-27
In this study, hard X-ray free electron lasers allow, for the first time, access to the dynamics of condensed matter samples on time scales ranging from femtoseconds to several hundred seconds. In particular, the exceptionally large transverse coherence of the X-ray pulses and the high time-averaged flux promise to reach time and length scales that have not been accessible up to now with storage-ring-based sources. However, due to the fluctuations originating from the stochastic nature of the self-amplified spontaneous emission (SASE) process, the application of well-established techniques such as X-ray photon correlation spectroscopy (XPCS) is challenging. Here we demonstrate a single-shot-based sequential XPCS study on a colloidal suspension with a relaxation time comparable to the SACLA free-electron laser pulse repetition rate. High-quality correlation functions could be extracted without any indications of sample damage. This opens the way for systematic sequential XPCS experiments at FEL sources.
Win-Stay, Lose-Sample: a simple sequential algorithm for approximating Bayesian inference.
Bonawitz, Elizabeth; Denison, Stephanie; Gopnik, Alison; Griffiths, Thomas L
2014-11-01
People can behave in a way that is consistent with Bayesian models of cognition, despite the fact that performing exact Bayesian inference is computationally challenging. What algorithms could people be using to make this possible? We show that a simple sequential algorithm "Win-Stay, Lose-Sample", inspired by the Win-Stay, Lose-Shift (WSLS) principle, can be used to approximate Bayesian inference. We investigate the behavior of adults and preschoolers on two causal learning tasks to test whether people might use a similar algorithm. These studies use a "mini-microgenetic method", investigating how people sequentially update their beliefs as they encounter new evidence. Experiment 1 investigates a deterministic causal learning scenario and Experiments 2 and 3 examine how people make inferences in a stochastic scenario. The behavior of adults and preschoolers in these experiments is consistent with our Bayesian version of the WSLS principle. This algorithm provides both a practical method for performing Bayesian inference and a new way to understand people's judgments. Copyright © 2014 Elsevier Inc. All rights reserved.
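The Win-Stay, Lose-Sample idea can be sketched for a minimal two-hypothesis coin problem. The setup (a "fair" versus a "biased" coin) and the stay-with-probability-equal-to-likelihood rule are our illustrative reading of the algorithm, not the authors' exact task or code.

```python
import random

def wsls_learner(data, rng=random):
    """Win-Stay, Lose-Sample sketch for two coin hypotheses:
    'fair' (P(heads) = 0.5) vs 'biased' (P(heads) = 0.9).
    The learner holds one hypothesis at a time; after each datum it
    'stays' with probability equal to that hypothesis's likelihood
    for the datum, otherwise it 'loses' and resamples a hypothesis
    from the running posterior. Across many learners this behaves
    like sampling from the posterior."""
    p_heads = {"fair": 0.5, "biased": 0.9}
    post = {"fair": 1.0, "biased": 1.0}        # uniform prior weights
    current = rng.choice(["fair", "biased"])
    for d in data:                              # d is 1 (heads) or 0
        for h in post:                          # update posterior weights
            post[h] *= p_heads[h] if d else 1 - p_heads[h]
        lik = p_heads[current] if d else 1 - p_heads[current]
        if rng.random() >= lik:                 # "lose": resample
            hs = list(post)
            current = rng.choices(hs, weights=[post[h] for h in hs], k=1)[0]
    return current
```

The appeal, as the abstract notes, is that the learner only ever tracks a single hypothesis and occasionally resamples, yet the population of such learners approximates full Bayesian updating.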
Silva, Ivair R
2018-01-15
Type I error probability spending functions are commonly used for designing sequential analyses of binomial data in clinical trials, and their use is quickly emerging for near-continuous sequential analysis in post-market drug and vaccine safety surveillance. It is well known that, for clinical trials, it is still important to minimize the sample size when the null hypothesis is not rejected; in post-market drug and vaccine safety surveillance, that is not important. In post-market safety surveillance, especially when the surveillance involves identification of potential signals, the meaningful statistical performance measure to be minimized is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is better suited for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis. Copyright © 2017 John Wiley & Sons, Ltd.
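The convex-versus-concave contrast can be illustrated with the power family of spending functions, alpha·t^rho. The family and the look times below are our illustrative choices, not the paper's specific proposal.

```python
def alpha_spending(alpha, info_fracs, rho):
    """Power-family error-spending function alpha * t**rho (sketch).
    rho > 1 gives a convex shape that spends little Type I error at
    early looks (conventional in clinical trials); rho < 1 gives a
    concave shape that spends more error early, the behaviour favoured
    above for safety surveillance. Returns the per-look alpha
    increments at the given information fractions."""
    spent, increments = 0.0, []
    for t in info_fracs:
        cum = alpha * t ** rho            # cumulative error spent by time t
        increments.append(cum - spent)
        spent = cum
    return increments

looks = [0.25, 0.5, 0.75, 1.0]
convex = alpha_spending(0.05, looks, rho=3)     # back-loaded spending
concave = alpha_spending(0.05, looks, rho=0.5)  # front-loaded spending
```

Both schedules spend the same total alpha, but the concave one allocates far more of it to the first look, which is what allows earlier signal detection and a smaller expected sample size when the null is rejected.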
ERIC Educational Resources Information Center
Ronnlund, Michael; Nilsson, Lars-Goran
2008-01-01
To estimate Flynn effects (FEs) on forms of declarative memory (episodic, semantic) and visuospatial ability (Block Design), time-sequential analyses were conducted on data for Swedish adult samples (35-80 years) assessed on one of four occasions (1989, 1994, 1999, 2004; n = 2995). The results demonstrated cognitive gains across occasions,…
Algorithms for Large-Scale Astronomical Problems
2013-08-01
Implemented as a succession of Hadoop MapReduce jobs and sequential programs written in Java. The sampling and splitting stages are implemented as one MapReduce job; the partitioning and clustering phases make up another job. The merging stage is implemented as a stand-alone sequential Java program that reads the files with the shell information generated by the earlier jobs.
Random sequential adsorption of cubes
NASA Astrophysics Data System (ADS)
Cieśla, Michał; Kubala, Piotr
2018-01-01
Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
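The random sequential adsorption procedure can be sketched in a simplified 2D analogue: axis-aligned unit squares dropped into a square box (the paper's cubes additionally sample orientations, which is omitted here). Each candidate is placed uniformly at random and kept only if it overlaps no previously accepted shape.

```python
import random

def rsa_squares(box=10.0, side=1.0, attempts=20000, rng=random):
    """Random sequential adsorption of axis-aligned squares in a
    square box (simplified 2D sketch of the cube packings above).
    Candidates are dropped at uniformly random positions; a candidate
    is accepted only if it overlaps no accepted square. Returns the
    packing fraction after the given number of attempts."""
    placed = []
    for _ in range(attempts):
        x = rng.uniform(0, box - side)
        y = rng.uniform(0, box - side)
        # two axis-aligned squares overlap iff they overlap on both axes
        if all(abs(x - px) >= side or abs(y - py) >= side
               for px, py in placed):
            placed.append((x, y))
    return len(placed) * side * side / (box * box)
```

Tracking this fraction as a function of the number of attempts gives the kinetics of packing growth studied in the paper; for aligned squares the saturated fraction is known to be roughly 0.56, and convergence toward it is slow.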
What Do Lead and Copper Sampling Protocols Mean, and Which Is Right for You?
this presentation will provide a short review of the explicit and implicit concepts behind most of the currently-used regulatory and diagnostic sampling schemes for lead, such as: random daytime sampling; automated proportional sampler; 30 minute first draw stagnation; Sequential...
Li, Yongbao; Tian, Zhen; Song, Ting; Wu, Zhaoxia; Liu, Yaqiang; Jiang, Steve; Jia, Xun
2017-01-07
Monte Carlo (MC)-based spot dose calculation is highly desired for inverse treatment planning in proton therapy because of its accuracy. Recent studies on biological optimization have also indicated the use of MC methods to compute relevant quantities of interest, e.g. linear energy transfer. Although GPU-based MC engines have been developed to address inverse optimization problems, their efficiency still needs to be improved. Also, the use of a large number of GPUs in MC calculation is not favorable for clinical applications. The previously proposed adaptive particle sampling (APS) method can improve the efficiency of MC-based inverse optimization by using the computationally expensive MC simulation more effectively. This method is more efficient than the conventional approach that performs spot dose calculation and optimization in two sequential steps. In this paper, we propose a computational library to perform MC-based spot dose calculation on GPU with the APS scheme. The implemented APS method performs a non-uniform sampling of the particles from pencil beam spots during the optimization process, favoring those from the high intensity spots. The library also conducts two computationally intensive matrix-vector operations frequently used when solving an optimization problem. This library design allows a streamlined integration of the MC-based spot dose calculation into an existing proton therapy inverse planning process. We tested the developed library in a typical inverse optimization system with four patient cases. The library achieved the targeted functions by supporting inverse planning in various proton therapy schemes, e.g. single field uniform dose, 3D intensity modulated proton therapy, and distal edge tracking. The efficiency was 41.6 ± 15.3% higher than the use of a GPU-based MC package in a conventional calculation scheme. The total computation time ranged between 2 and 50 min on a single GPU card depending on the problem size.
Dosimetric effects of patient rotational setup errors on prostate IMRT treatments
NASA Astrophysics Data System (ADS)
Fu, Weihua; Yang, Yong; Li, Xiang; Heron, Dwight E.; Saiful Huq, M.; Yue, Ning J.
2006-10-01
The purpose of this work is to determine dose delivery errors that could result from systematic rotational setup errors (ΔΦ) for prostate cancer patients treated with three-phase sequential boost IMRT. In order to implement this, different rotational setup errors around three Cartesian axes were simulated for five prostate patients and dosimetric indices, such as dose-volume histogram (DVH), tumour control probability (TCP), normal tissue complication probability (NTCP) and equivalent uniform dose (EUD), were employed to evaluate the corresponding dosimetric influences. Rotational setup errors were simulated by adjusting the gantry, collimator and horizontal couch angles of treatment beams and the dosimetric effects were evaluated by recomputing the dose distributions in the treatment planning system. Our results indicated that, for prostate cancer treatment with the three-phase sequential boost IMRT technique, the rotational setup errors do not have significant dosimetric impacts on the cumulative plan. Even in the worst-case scenario with ΔΦ = 3°, the prostate EUD varied within 1.5% and TCP decreased about 1%. For seminal vesicle, slightly larger influences were observed. However, EUD and TCP changes were still within 2%. The influence on sensitive structures, such as rectum and bladder, is also negligible. This study demonstrates that the rotational setup error degrades the dosimetric coverage of target volume in prostate cancer treatment to a certain degree. However, the degradation was not significant for the three-phase sequential boost prostate IMRT technique and for the margin sizes used in our institution.
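The EUD index used to score these plans is commonly computed with Niemierko's generalized mean of the voxel doses. A minimal sketch of that standard formula (not the authors' planning-system code):

```python
import numpy as np

def eud(voxel_doses, a):
    """Generalized equivalent uniform dose: EUD = (mean(d_i^a))^(1/a).
    a < 0 for targets (cold spots dominate the score),
    a >> 1 for serial organs at risk (hot spots dominate)."""
    d = np.asarray(voxel_doses, dtype=float)
    return (np.mean(d ** a)) ** (1.0 / a)

# A perfectly uniform target dose returns itself as the EUD
uniform = eud([70.0, 70.0, 70.0], a=-10)
```

With a negative exponent, introducing a single cold voxel pulls the EUD below the prescription, which is why small rotational errors show up first in target EUD.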
Power Distribution System Planning with GIS Consideration
NASA Astrophysics Data System (ADS)
Wattanasophon, Sirichai; Eua-Arporn, Bundhit
This paper proposes a method for solving radial distribution system planning problems taking into account geographical information. The proposed method can automatically determine the appropriate location and size of a substation, the routing of feeders, and the sizes of conductors while satisfying all constraints, i.e. technical constraints (voltage drop and thermal limit) and geographical constraints (obstacles, existing infrastructure, and high-cost passages). Sequential quadratic programming (SQP) and a minimum path algorithm (MPA) are applied to solve the planning problem based on net present value (NPV) considerations. In addition, this method integrates the planner's experience with the optimization process to achieve an appropriate practical solution. The proposed method has been tested on an actual distribution system, and the results indicate that it can provide satisfactory plans.
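The NPV criterion that ranks candidate plans is the standard discounted cash-flow sum; a minimal sketch with illustrative numbers (the cash flows shown are assumptions, not values from the paper):

```python
def npv(cashflows, rate):
    """Net present value of a cash-flow sequence; cashflows[t] occurs at
    the end of year t (cashflows[0] is the immediate outlay, usually < 0)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

# A plan costing 100 now that saves 60/year for two years, at a 10% discount rate
value = npv([-100.0, 60.0, 60.0], 0.10)  # → about 4.13
```

A plan is worth pursuing under this criterion only when its NPV is positive at the chosen discount rate.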
Sequential responding and planning in capuchin monkeys (Cebus apella).
Beran, Michael J; Parrish, Audrey E
2012-11-01
Previous experiments have assessed planning during sequential responding to computer generated stimuli by Old World nonhuman primates including chimpanzees and rhesus macaques. However, no such assessment has been made with a New World primate species. Capuchin monkeys (Cebus apella) are an interesting test case for assessing the distribution of cognitive processes in the Order Primates because they sometimes show proficiency in tasks also mastered by apes and Old World monkeys, but in other cases fail to match the proficiency of those other species. In two experiments, eight capuchin monkeys selected five arbitrary stimuli in distinct locations on a computer monitor in a learned sequence. In Experiment 1, shift trials occurred in which the second and third stimuli were transposed when the first stimulus was selected by the animal. In Experiment 2, mask trials occurred in which all remaining stimuli were masked after the monkey selected the first stimulus. Monkeys made more mistakes on trials in which the locations of the second and third stimuli were interchanged than on trials in which locations were not interchanged, suggesting they had already planned to select a location that no longer contained the correct stimulus. When mask trials occurred, monkeys performed at levels significantly better than chance, but their performance exceeded chance levels only for the first and the second selections on a trial. These data indicate that capuchin monkeys performed very similarly to chimpanzees and rhesus monkeys and appeared to plan their selection sequences during the computerized task, but only to a limited degree.
Lee, M H; Ahn, H J; Park, J H; Park, Y J; Song, K
2011-02-01
This paper presents a quantitative and rapid method for the sequential separation of Pu, (90)Sr and (241)Am nuclides in environmental soil samples with an anion exchange resin and Sr Spec resin. After the sample solution was passed through an anion exchange column connected to a Sr Spec column, Pu isotopes were purified from the anion exchange column. Strontium-90 was separated from other interfering elements by the Sr Spec column. Americium-241 was purified from lanthanides by the anion exchange resin after oxalate co-precipitation. Measurement of Pu and Am isotopes was carried out using an α-spectrometer. Strontium-90 was measured by a low-level liquid scintillation counter. The radiochemical procedure for Pu, (90)Sr and (241)Am nuclides investigated in this study was validated by application to IAEA reference materials and environmental soil samples.
What the gastroenterologist does all day. A survey of a state society's practice.
Switz, D M
1976-06-01
Members of a state society of gastroenterologists collected information about their pattern of practice. Twenty-two of the 41 members voluntarily kept a list of 25 sequential new patients seen during the spring of 1973. Five hundred and forty-nine diagnoses were accumulated; 369 (67%) of these diagnoses were gastroenterological. The five most common gastroenterological diagnoses were: functional disorder, duodenal ulcer, hiatus hernia, biliary tract disease, and esophagitis. The five most common over-all diagnostic areas were: functional disorder, cardiovascular disease, "other" nongastroenterological diagnoses (including obesity), duodenal ulcer, and endocrine malfunction. Geographically dispersed gastroenterologists in Virginia make more than one-half of their primary diagnoses in the area of their subspecialty interest. The primary gastroenterological problems seen are "upper gut" lesions and biliary tract disease. These observations may be of value in planning education, training, or research activities, especially if verified by a broader sample of gastroenterological practitioners.
Davies, Jeff K; Hassan, Sandra; Sarker, Shah-Jalal; Besley, Caroline; Oakervee, Heather; Smith, Matthew; Taussig, David; Gribben, John G; Cavenagh, Jamie D
2018-02-01
Allogeneic haematopoietic stem-cell transplantation remains the only curative treatment for relapsed/refractory acute myeloid leukaemia (AML) and high-risk myelodysplasia but has previously been limited to patients who achieve remission before transplant. New sequential approaches employing T-cell depleted transplantation directly after chemotherapy show promise but are burdened by viral infection and require donor lymphocyte infusions (DLI) to augment donor chimerism and graft-versus-leukaemia effects. T-replete transplantation in sequential approaches could reduce both viral infection and DLI usage. We therefore performed a single-arm prospective Phase II clinical trial of sequential chemotherapy and T-replete transplantation using reduced-intensity conditioning without planned DLI. The primary endpoint was overall survival. Forty-seven patients with relapsed/refractory AML or high-risk myelodysplasia were enrolled; 43 proceeded to transplantation. High levels of donor chimerism were achieved spontaneously with no DLI. Overall survival of transplanted patients was 45% and 33% at 1 and 3 years. Only one patient developed cytomegalovirus disease. Cumulative incidences of treatment-related mortality and relapse were 35% and 20% at 1 year. Patients with relapsed AML and myelodysplasia had the most favourable outcomes. Late-onset graft-versus-host disease protected against relapse. In conclusion, a T-replete sequential transplantation using reduced-intensity conditioning is feasible for relapsed/refractory AML and myelodysplasia and can deliver graft-versus-leukaemia effects without DLI.
Pérez Cid, B; Fernández Alborés, A; Fernández Gómez, E; Falqué López, E
2001-08-01
The conventional three-stage BCR sequential extraction method was employed for the fractionation of heavy metals in sewage sludge samples from an urban wastewater treatment plant and from an olive oil factory. The results obtained for Cu, Cr, Ni, Pb and Zn in these samples were compared with those attained by a simplified extraction procedure based on microwave single extractions and using the same reagents as employed in each individual BCR fraction. The microwave operating conditions in the single extractions (heating time and power) were optimized for all the metals studied in order to achieve an extraction efficiency similar to that of the conventional BCR procedure. The measurement of metals in the extracts was carried out by flame atomic absorption spectrometry. The results obtained in the first and third fractions by the proposed procedure were, for all metals, in good agreement with those obtained using the BCR sequential method. Although in the reducible fraction the extraction efficiency of the accelerated procedure was inferior to that of the conventional method, the overall metals leached by both microwave single and sequential extractions were basically the same (recoveries between 90.09 and 103.7%), except for Zn in urban sewage sludges where an extraction efficiency of 87% was achieved. Chemometric analysis showed a good correlation between the results given by the two extraction methodologies compared. The application of the proposed approach to a certified reference material (CRM-601) also provided satisfactory results in the first and third fractions, as it was observed for the sludge samples analysed.
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the cell-only item selection method, which takes a statistical perspective (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
Numerical study on the sequential Bayesian approach for radioactive materials detection
NASA Astrophysics Data System (ADS)
Qingpei, Xiang; Dongfeng, Tian; Jianyu, Zhu; Fanhua, Hao; Ge, Ding; Jun, Zeng
2013-01-01
A new detection method, based on the sequential Bayesian approach proposed by Candy et al., offers new horizons for research on radioactive detection. Compared with commonly adopted detection methods grounded in classical statistical theory, the sequential Bayesian approach offers the advantage of shorter verification times when analysing spectra that contain low total counts, especially for complex radionuclide compositions. In this paper, a simulation experiment platform implementing the methodology of the sequential Bayesian approach was developed. Event sequences of γ-rays associated with the true parameters of a LaBr3(Ce) detector were obtained from an event-sequence generator based on Monte Carlo sampling theory to study the performance of the sequential Bayesian approach. The numerical experimental results are in accordance with those of Candy. Moreover, the relationship between the detection model and the event generator, represented by the expected detection rate (Am) and the tested detection rate (Gm) parameters, respectively, is investigated. To achieve optimal performance for this processor, the interval of the tested detection rate as a function of the expected detection rate is also presented.
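Candy's event-mode processor is considerably more elaborate, but the flavour of an event-by-event sequential decision can be sketched with Wald-style thresholds on photon inter-arrival times. This is a simplified stand-in, not the paper's implementation; the exponential arrival model, the rates, and the α/β thresholds below are assumptions:

```python
import math, random

def sequential_detect(interarrivals, rate_bkg, rate_src, alpha=0.05, beta=0.05):
    """Wald-style sequential test on photon inter-arrival times:
    H1 = source present (total rate rate_bkg + rate_src) vs
    H0 = background only (rate rate_bkg).
    A simplified stand-in for the sequential Bayesian processor of
    Candy et al.; thresholds follow Wald's approximations."""
    lam0, lam1 = rate_bkg, rate_bkg + rate_src
    upper = math.log((1 - beta) / alpha)   # cross upward -> declare source
    lower = math.log(beta / (1 - alpha))   # cross downward -> background
    llr = 0.0
    for n, dt in enumerate(interarrivals, 1):
        # log-likelihood ratio of one exponential inter-arrival time
        llr += math.log(lam1 / lam0) - (lam1 - lam0) * dt
        if llr >= upper:
            return "source", n
        if llr <= lower:
            return "background", n
    return "undecided", len(interarrivals)

random.seed(1)
# Simulated arrivals at the source-present rate (50/s total)
times = [random.expovariate(50.0) for _ in range(2000)]
decision, n_events = sequential_detect(times, rate_bkg=10.0, rate_src=40.0)
```

The appeal mirrored in the abstract: when the source is strong, the running log-likelihood ratio crosses its threshold after only a handful of events, so a decision is reached long before a fixed-count spectrum would be accumulated.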
Electromagnetic-induction logging to monitor changing chloride concentrations
Metzger, Loren F.; Izbicki, John A.
2013-01-01
Water from the San Joaquin Delta, having chloride concentrations up to 3590 mg/L, has intruded fresh water aquifers underlying Stockton, California. Changes in chloride concentrations at depth within these aquifers were evaluated using sequential electromagnetic (EM) induction logs collected during 2004 through 2007 at seven multiple-well sites as deep as 268 m. Sequential EM logging is useful for identifying changes in groundwater quality through polyvinyl chloride-cased wells in intervals not screened by wells. These unscreened intervals represent more than 90% of the aquifer at the sites studied. Sequential EM logging suggested degrading groundwater quality in numerous thin intervals, typically between 1 and 7 m in thickness, especially in the northern part of the study area. Some of these intervals were unscreened by wells, and would not have been identified by traditional groundwater sample collection. Sequential logging also identified intervals with improving water quality, possibly due to groundwater management practices that have limited pumping and promoted artificial recharge. EM resistivity was correlated with chloride concentrations in sampled wells and in water from core material. Natural gamma log data were used to account for the effect of aquifer lithology on EM resistivity. Results of this study show that sequential EM logging is useful for identifying and monitoring the movement of high-chloride water, having lower salinities and chloride concentrations than sea water, in aquifer intervals not screened by wells, and that increases in chloride in water from wells in the area are consistent with high-chloride water originating from the San Joaquin Delta rather than from the underlying saline aquifer.
Anxiety and Self-Efficacy as Sequential Mediators in US College Students' Career Preparation
ERIC Educational Resources Information Center
Deer, LillyBelle K.; Gohn, Kelsey; Kanaya, Tomoe
2018-01-01
Purpose: Current college students in the USA are reporting higher levels of anxiety over career planning than previous generations, placing pressure on colleges to provide effective career development opportunities for their students. Research has consistently found that increasing career-related self-efficacy is particularly effective at…
Metropolitan open-space protection with uncertain site availability
Robert G. Haight; Stephanie A. Snyder; Charles S. Revelle
2005-01-01
Urban planners acquire open space to protect natural areas and provide public access to recreation opportunities. Because of limited budgets and dynamic land markets, acquisitions take place sequentially depending on available funds and sites. To address these planning features, we formulated a two-period site selection model with two objectives: maximize the...
Health Curriculum Guide. Grade K. Bulletin 1988, No. 48.
ERIC Educational Resources Information Center
Alabama State Dept. of Education, Montgomery.
This curriculum guide supplements the Alabama "Health Education Course of Study," which offers a comprehensive planned sequential curriculum for grades K-12. The largest section of the guide consists of classroom activities which are tied to specific student outcomes. A list of materials needed to carry out the activities is provided.…
A Model for Retraining/Training of Business and Industry Employees.
ERIC Educational Resources Information Center
Portland Community Coll., OR.
This model was developed to assist Oregon community colleges in making a planned response to the needs of business and industry for retraining/training of their employees. The model offers a streamlined process for making needs assessments in business and industry through sequential steps for converting needs data into instructional programs. The…
Speaker-dependent Multipitch Tracking Using Deep Neural Networks
2015-01-01
connections through time. Studies have shown that RNNs are good at modeling sequential data like handwriting [12] and speech [26]. We plan to explore RNNs in…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-01
... following methods: A. http://www.regulations.gov . Follow the on-line instructions for submitting comments... the equipment is in good working order, if necessary as part of the inspection; (6) idling of a... off school property during queuing for the sequential discharge or pickup of students where the...
PROCEDURES FOR THE ESTABLISHMENT OF PUBLIC 2-YEAR COLLEGES.
ERIC Educational Resources Information Center
MORRISON, D.G.; WITHERSPOON, CLINETTE F.
A survey of existing legislation, plans, and procedures for establishing junior colleges led to the preparation of a set of six suggested guidelines--(1) need for enabling legislation, criteria for establishment, and procedures for establishment, (2) 15 sequential steps, not all of which may be needed in every situation, (3) development by the…
26 CFR 1.401(a)(26)-5 - Employees who benefit under a plan.
Code of Federal Regulations, 2010 CFR
2010-04-01
... excess of such employee's benefit under one or more formulas in effect for prior years that are based... would have accrued a benefit if the offset or reduction portion of the benefit formula were disregarded... offset or reduction portion of the benefit formula were disregarded. (ii) Offset by sequential or...
Health Curriculum Guide. Grade 5. Bulletin 1988, No. 53.
ERIC Educational Resources Information Center
Alabama State Dept. of Education, Montgomery.
This curriculum guide supplements the Alabama "Health Education Course of Study," which offers a comprehensive planned sequential curriculum for grades K-12. The largest section of the guide consists of classroom activities which are tied to specific student outcomes. A list of materials needed to carry out the activities is provided.…
Foreign Languages Course of Study, Junior & Senior High Schools. Draft.
ERIC Educational Resources Information Center
Dade County Public Schools, Miami, FL. Div. of Elementary and Secondary Instruction.
The study guide outlining the modern foreign language courses for English speakers in Dade County's secondary schools establishes a uniform sequential program for instruction in French, German, Hebrew, Italian, and Spanish. Program expectancies are described for each level and type of course, to serve as a basis for planning appropriate…
Alabama Course of Study: Health Education. Bulletin 1988, No. 25.
ERIC Educational Resources Information Center
Alabama State Dept. of Education, Montgomery.
This comprehensive school health education program offers a planned sequential curriculum for grades K-12 based on student needs, current and emerging health concepts, and social issues. It integrates the physical, mental, emotional, and social dimensions of health. The 10 areas covered are: (1) consumer health; (2) dental health; (3) disease…
Landscape analysis software tools
Don Vandendriesche
2008-01-01
Recently, several new computer programs have been developed to assist in landscape analysis. The "Sequential Processing Routine for Arraying Yields" (SPRAY) program was designed to run a group of stands with particular treatment activities to produce vegetation yield profiles for forest planning. SPRAY uses existing Forest Vegetation Simulator (FVS) software coupled...
Neural Bases of Sequence Processing in Action and Language
ERIC Educational Resources Information Center
Carota, Francesca; Sirigu, Angela
2008-01-01
Real-time estimation of what we will do next is a crucial prerequisite of purposive behavior. During the planning of goal-oriented actions, for instance, the temporal and causal organization of upcoming subsequent moves needs to be predicted based on our knowledge of events. A forward computation of sequential structure is also essential for…
Spatial expression of Hox cluster genes in the ontogeny of a sea urchin
NASA Technical Reports Server (NTRS)
Arenas-Mena, C.; Cameron, A. R.; Davidson, E. H.
2000-01-01
The Hox cluster of the sea urchin Strongylocentrotus purpuratus contains ten genes in a 500 kb span of the genome. Only two of these genes are expressed during embryogenesis, while all of eight genes tested are expressed during development of the adult body plan in the larval stage. We report the spatial expression during larval development of the five 'posterior' genes of the cluster: SpHox7, SpHox8, SpHox9/10, SpHox11/13a and SpHox11/13b. The five genes exhibit a dynamic, largely mesodermal program of expression. Only SpHox7 displays extensive expression within the pentameral rudiment itself. A spatially sequential and colinear arrangement of expression domains is found in the somatocoels, the paired posterior mesodermal structures that will become the adult perivisceral coeloms. No such sequential expression pattern is observed in endodermal, epidermal or neural tissues of either the larva or the presumptive juvenile sea urchin. The spatial expression patterns of the Hox genes illuminate the evolutionary process by which the pentameral echinoderm body plan emerged from a bilateral ancestor.
ERIC Educational Resources Information Center
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
Technical Reports Prepared Under Contract N00014-76-C-0475.
1987-05-29
264 Approximations to Densities in Geometric Probability (H. Solomon, M.A. Stephens, 10/27/78)
Technical Report No. / Title / Author / Date
265 Sequential …
… Certain Multivariate Normal Probabilities (S. Iyengar, 8/12/82)
323 EDF Statistics for Testing for the Gamma Distribution with … (M.A. Stephens, 8/13/82)
… Nets (… 20-85)
360 Random Sequential Coding by Hamming Distance (Yoshiaki Itoh, Herbert Solomon, 07-11-85)
361 Transforming Censored Samples and Testing Fit
Amonette, James E.; Autrey, S. Thomas; Foster-Mills, Nancy S.
2006-02-14
Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. Particularly, a photoacoustic spectroscopy sample array vessel including a vessel body having multiple sample cells connected thereto is disclosed. At least one acoustic detector is acoustically positioned near the sample cells. Methods for analyzing the multiple samples in the sample array vessels using photoacoustic spectroscopy are provided.
Depicting surgical anatomy of the porta hepatis in living donor liver transplantation.
Kelly, Paul; Fung, Albert; Qu, Joy; Greig, Paul; Tait, Gordon; Jenkinson, Jodie; McGilvray, Ian; Agur, Anne
2017-01-01
Visualizing the complex anatomy of vascular and biliary structures of the liver on a case-by-case basis has been challenging. A living donor liver transplant (LDLT) right hepatectomy case, with focus on the porta hepatis, was used to demonstrate an innovative method to visualize anatomy with the purpose of refining preoperative planning and teaching of complex surgical procedures. The production of an animation-enhanced video consisted of many stages including the integration of pre-surgical planning; case-specific footage and 3D models of the liver and associated vasculature, reconstructed from contrast-enhanced CTs. Reconstructions of the biliary system were modeled from intraoperative cholangiograms. The distribution of the donor portal veins, hepatic arteries and bile ducts was defined from the porta hepatis intrahepatically to the point of surgical division. Each step of the surgery was enhanced with 3D animation to provide sequential and seamless visualization from pre-surgical planning to outcome. Use of visualization techniques such as transparency and overlays allows viewers not only to see the operative field, but also the origin and course of segmental branches and their spatial relationships. This novel educational approach enables integrating case-based operative footage with advanced editing techniques for visualizing not only the surgical procedure, but also complex anatomy such as vascular and biliary structures. The surgical team has found this approach to be beneficial for preoperative planning and clinical teaching, especially for complex cases. Each animation-enhanced video case is posted to the open-access Toronto Video Atlas of Surgery (TVASurg), an education resource with a global clinical and patient user base. The novel educational system described in this paper enables integrating operative footage with 3D animation and cinematic editing techniques for seamless sequential organization from pre-surgical planning to outcome.
Mketo, Nomvano; Nomngongo, Philiswa N; Ngila, J Catherine
2018-05-15
A rapid three-step sequential extraction method was developed under microwave radiation, followed by inductively coupled plasma-optical emission spectroscopic (ICP-OES) and ion-chromatographic (IC) analysis, for the determination of sulphur forms in coal samples. The experimental conditions of the proposed microwave-assisted sequential extraction (MW-ASE) procedure were optimized using multivariate mathematical tools. Pareto charts generated from a 2³ full factorial design showed that extraction time had an insignificant effect on the extraction of sulphur species; therefore, all the sequential extraction steps were performed for 5 min. The optimum values according to the central composite designs and contour plots of the response surface methodology were 200 °C (microwave temperature) and 0.1 g (coal amount) for all the investigated extracting reagents (H₂O, HCl and HNO₃). When the optimum conditions of the proposed MW-ASE procedure were applied to coal CRMs, SARM 18 showed more organic sulphur (72%), and the other two coal CRMs (SARMs 19 and 20) were dominated by sulphide sulphur species (52-58%). The sum of the sulphur forms from the sequential extraction steps showed consistent agreement (95-96%) with the certified total sulphur values on the coal CRM certificates. This correlation, in addition to the good precision (1.7%) achieved by the proposed procedure, suggests that the sequential extraction method is reliable, accurate and reproducible. To guard against the destruction of pyritic and organic sulphur forms in extraction step 1, water was used instead of HCl. Additionally, the notoriously harsh acidic mixture (HCl/HNO₃/HF) was replaced by a greener reagent (H₂O₂) in the last extraction step. Therefore, the proposed MW-ASE method can be applied in routine laboratories for the determination of sulphur forms in coal and coal-related matrices. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Sandhu, Amit
A sequential quadratic programming method is proposed for solving nonlinear optimal control problems subject to general path constraints, including mixed state-control and state-only constraints. The proposed algorithm builds on the approach proposed in [1], with the objective of eliminating the need for a high number of time intervals to arrive at an optimal solution. This is done by introducing an adaptive time discretization that allows a desirable control profile to form without using many intervals. The use of fewer time intervals reduces the computation time considerably. This algorithm is further used in this thesis to solve a trajectory planning problem for higher-elevation Mars landing.
Dark sequential Z' portal: Collider and direct detection experiments
NASA Astrophysics Data System (ADS)
Arcadi, Giorgio; Campos, Miguel D.; Lindner, Manfred; Masiero, Antonio; Queiroz, Farinaldo S.
2018-02-01
We revisit the status of a Majorana fermion as a dark matter candidate when a sequential Z' gauge boson dictates the dark matter phenomenology. Direct dark matter detection signatures arise from dark matter-nucleus scatterings at bubble chamber and liquid xenon detectors, and from the flux of neutrinos from the Sun measured by the IceCube experiment, which is governed by the spin-dependent dark matter-nucleus scattering. On the collider side, LHC searches for dilepton and monojet + missing energy signals play an important role. The relic density and perturbativity requirements are also addressed. By exploiting this dark matter complementarity, we outline the region of parameter space where one can successfully have a Majorana dark matter particle in light of current and planned experimental sensitivities.
Dynamic resource allocation in conservation planning
Golovin, D.; Krause, A.; Gardner, B.; Converse, S.J.; Morey, S.
2011-01-01
Consider the problem of protecting endangered species by selecting patches of land to be used for conservation purposes. Typically, the availability of patches changes over time, and recommendations must be made dynamically. This is a challenging prototypical example of a sequential optimization problem under uncertainty in computational sustainability. Existing techniques do not scale to problems of realistic size. In this paper, we develop an efficient algorithm for adaptively making recommendations for dynamic conservation planning, and prove that it obtains near-optimal performance. We further evaluate our approach on a detailed reserve design case study of conservation planning for three rare species in the Pacific Northwest of the United States. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
Veiga, Helena Perrut; Bianchini, Esther Mandelbaum Gonçalves
2012-01-01
To perform an integrative review of studies on sequential swallowing of liquids, characterizing the methodology of the studies and the most important findings in young and elderly adults. The literature in English and Portuguese from the past twenty years was reviewed in the PubMed, LILACS, SciELO and MEDLINE databases, restricted to articles available in full, using the following terms in various combinations: sequential swallowing, swallowing, dysphagia, cup, straw. Research articles with a methodological approach to characterizing sequential liquid swallowing by young and/or elderly adults were included, regardless of health condition, excluding studies involving only the esophageal phase. The following research indicators were applied: objectives; number and gender of participants; age group; amount of liquid offered; intake instruction; utensil used; methods; and main findings. Eighteen studies met the established criteria. The articles were categorized according to sample characterization and the methodology regarding volume intake, utensil used and types of exams. Most studies investigated only healthy individuals with no swallowing complaints. Subjects were given different instructions as to the intake of the full volume: in the usual manner, continually, or as rapidly as possible. The findings on the characterization of sequential swallowing were varied and were described in accordance with the objectives of each study. Great variability was found in the methodology employed to characterize sequential swallowing. Some findings are not comparable, and sequential swallowing is not examined in most swallowing protocols, with no consensus on the influence of the utensil used.
Terrill, Thomas H; Wolfe, Richard M; Muir, James P
2010-12-01
Browse species containing condensed tannins (CTs) are an important source of nutrition for grazing/browsing livestock and wildlife in many parts of the world, but information on fiber concentration and CT-fiber interactions for these plants is lacking. Ten forage or browse species with a range of CT concentrations were oven dried and freeze dried and then analyzed for ash-corrected neutral detergent fiber (NDFom) and corrected acid detergent fiber (ADFom) using separate samples (ADFSEP) and sequential NDF-ADF analysis (ADFSEQ) with the ANKOM™ fiber analysis system. The ADFSEP and ADFSEQ residues were then analyzed for nitrogen (N) concentration. Oven drying increased (P < 0.05) fiber concentrations with some species, but not with others. For high-CT forage and browse species, ADFSEP concentrations were greater (P < 0.05) than NDFom values and approximately double the ADFSEQ values. Nitrogen concentration was greater (P < 0.05) in ADFSEP than ADFSEQ residues, likely due to precipitation with CTs. Sequential NDF-ADF analysis gave more realistic values and appeared to remove most of the fiber residue contaminants in CT forage samples. Freeze drying samples with sequential NDF-ADF analysis is recommended in the ANKOM™ fiber analysis system with CT-containing forage and browse species. Copyright © 2010 Society of Chemical Industry.
Sample extraction is one of the most important steps in arsenic speciation analysis of solid dietary samples. One of the problem areas in this analysis is the partial extraction of arsenicals from seafood samples. The partial extraction allows the toxicity of the extracted arse...
Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure
Salehi, M.; Smith, D.R.
2005-01-01
Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. ?? 2005 American Statistical Association and the International Biometric Society.
Manganese speciation of laboratory-generated welding fumes
Andrews, Ronnee N.; Keane, Michael; Hanley, Kevin W.; Feng, H. Amy; Ashley, Kevin
2015-01-01
The objective of this laboratory study was to identify and measure manganese (Mn) fractions in chamber-generated welding fumes (WF) and to evaluate and compare the results from a sequential extraction procedure for Mn fractions with that of an acid digestion procedure for measurement of total, elemental Mn. To prepare Mn-containing particulate matter from representative welding processes, a welding system was operated in short circuit gas metal arc welding (GMAW) mode using both stainless steel (SS) and mild carbon steel (MCS) and also with flux cored arc welding (FCAW) and shielded metal arc welding (SMAW) using MCS. Generated WF samples were collected onto polycarbonate filters before homogenization, weighing and storage in scintillation vials. The extraction procedure consisted of four sequential steps to measure various Mn fractions based upon selective solubility: (1) soluble Mn dissolved in 0.01 M ammonium acetate; (2) Mn (0,II) dissolved in 25 % (v/v) acetic acid; (3) Mn (III,IV) dissolved in 0.5% (w/v) hydroxylamine hydrochloride in 25% (v/v) acetic acid; and (4) insoluble Mn extracted with concentrated hydrochloric and nitric acids. After sample treatment, the four fractions were analyzed for Mn by inductively coupled plasma-atomic emission spectroscopy (ICP-AES). WF from GMAW and FCAW showed similar distributions of Mn species, with the largest concentrations of Mn detected in the Mn (0,II) and insoluble Mn fractions. On the other hand, the majority of the Mn content of SMAW fume was detected as Mn (III,IV). Although the concentration of Mn measured from summation of the four sequential steps was statistically significantly different from that measured from the hot block dissolution method for total Mn, the difference is small enough to be of no practical importance for industrial hygiene air samples, and either method may be used for Mn measurement. 
The sequential extraction method provides valuable information about the oxidation state of Mn in samples and allows for comparison to results from previous work and from total Mn dissolution methods. PMID:26345630
Does solar radiation affect the growth of tomato seeds relative to their environment?
NASA Technical Reports Server (NTRS)
Holzer, Kristi
1995-01-01
The purpose of this experiment is to sequentially study and analyze the data collected from the germination and growth of irradiated Rutgers Supreme tomato seeds to adult producing plants. This experiment will not use irradiated seeds as a control, as I plan to note growth in artificial versus natural environments as the basic experiment.
26 CFR 1.401(a)(26)-5 - Employees who benefit under a plan.
Code of Federal Regulations, 2013 CFR
2013-04-01
... portion, a benefit in excess of such employee's benefit under one or more formulas in effect for prior... employee would have accrued a benefit if the offset or reduction portion of the benefit formula were... if the offset or reduction portion of the benefit formula were disregarded. (ii) Offset by sequential...
DSN telemetry system performance with convolutionally coded data
NASA Technical Reports Server (NTRS)
Mulhall, B. D. L.; Benjauthrit, B.; Greenhall, C. A.; Kuma, D. M.; Lam, J. K.; Wong, J. S.; Urech, J.; Vit, L. D.
1975-01-01
The results obtained to date and the plans for future experiments for the DSN telemetry system were presented. The performance of the DSN telemetry system in decoding convolutionally coded data by both sequential and maximum likelihood techniques is being determined by testing at various deep space stations. The evaluation of performance models is also an objective of this activity.
Amorim, Fábio A C; Ferreira, Sérgio L C
2005-02-28
In the present paper, a simultaneous pre-concentration procedure for the sequential determination of cadmium and lead in table salt samples using flame atomic absorption spectrometry is proposed. This method is based on the liquid-liquid extraction of cadmium(II) and lead(II) ions as dithizone complexes and direct aspiration of the organic phase into the spectrometer. The sequential determination of cadmium and lead is possible using a computer program. The optimization step was performed by a two-level fractional factorial design involving the variables pH, dithizone mass, shaking time after addition of dithizone and shaking time after addition of solvent. Within the studied levels, these variables were not significant. The experimental conditions established call for a sample volume of 250 mL and extraction using 4.0 mL of methyl isobutyl ketone. In this way, the procedure allows the determination of cadmium and lead in table salt samples with a pre-concentration factor higher than 80, and detection limits of 0.3 ng g(-1) for cadmium and 4.2 ng g(-1) for lead. The precision, expressed as relative standard deviation (n = 10), was 5.6 and 2.6% for cadmium concentrations of 2 and 20 ng g(-1), respectively, and 3.2 and 1.1% for lead concentrations of 20 and 200 ng g(-1), respectively. Recoveries of cadmium and lead in several samples, measured by the standard addition technique, also proved that this procedure is not affected by the matrix and can be applied satisfactorily to the determination of cadmium and lead in saline samples. The method was applied to the evaluation of the concentrations of cadmium and lead in table salt samples consumed in Salvador City, Bahia, Brazil.
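The two figures of merit reported above can be sketched in a few lines. The replicate readings below are hypothetical placeholders, not the paper's data; the nominal pre-concentration factor is simply the sample-to-organic-phase volume ratio (250 mL / 4.0 mL), and the reported factor above 80 presumably reflects additional sensitivity enhancement from aspirating the organic solvent.

```python
# Illustrative sketch (hypothetical replicate data): relative standard
# deviation (RSD) and nominal pre-concentration factor for a liquid-liquid
# extraction of the kind described above.
import statistics

def rsd_percent(replicates):
    """RSD (%) = 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical absorbance readings for n = 10 replicates of one Cd standard.
cd_replicates = [0.101, 0.104, 0.098, 0.103, 0.100,
                 0.099, 0.105, 0.102, 0.097, 0.101]

# Nominal factor: aqueous sample volume / organic phase volume.
factor = 250.0 / 4.0   # 250 mL sample extracted into 4.0 mL MIBK -> 62.5

print(f"RSD = {rsd_percent(cd_replicates):.1f}%")
print(f"Nominal pre-concentration factor = {factor:.1f}")
```

The nominal volumetric ratio (62.5) is a lower bound; the >80 value reported in the abstract includes instrument-side effects that this sketch does not model.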
Shakeri Yekta, Sepehr; Gustavsson, Jenny; Svensson, Bo H; Skyllberg, Ulf
2012-01-30
The effect of sequential extraction of trace metals on sulfur (S) speciation in anoxic sludge samples from two lab-scale biogas reactors augmented with Fe was investigated. Analyses of sulfur K-edge X-ray absorption near edge structure (S XANES) spectroscopy and acid volatile sulfide (AVS) were conducted on the residues from each step of the sequential extraction. The S speciation in sludge samples after AVS analysis was also determined by S XANES. Sulfur was mainly present as FeS (≈ 60% of total S) and reduced organic S (≈ 30% of total S), such as organic sulfide and thiol groups, in the anoxic solid phase. Sulfur XANES and AVS analyses showed that during the first step of the extraction procedure (the removal of exchangeable cations), a part of the FeS fraction corresponding to 20% of total S was transformed to zero-valent S, whereas Fe was not released into the solution during this transformation. After the last extraction step (organic/sulfide fraction), a secondary Fe phase was formed. The change in chemical speciation of S and Fe occurring during the sequential extraction procedure suggests indirect effects on trace metals associated with the FeS fraction that may lead to incorrect results. Furthermore, S XANES verified that the AVS analysis effectively removed the FeS fraction. The present results identify critical limitations for the application of sequential extraction for trace metal speciation analysis outside the framework for which the methods were developed. Copyright © 2011 Elsevier B.V. All rights reserved.
A comparison of sequential and spiral scanning techniques in brain CT.
Pace, Ivana; Zarb, Francis
2015-01-01
To evaluate and compare image quality and radiation dose of sequential computed tomography (CT) examinations of the brain and spiral CT examinations of the brain imaged on a GE HiSpeed NX/I Dual Slice 2CT scanner. A random sample of 40 patients referred for CT examination of the brain was selected and divided into 2 groups. Half of the patients were scanned using the sequential technique; the other half were scanned using the spiral technique. Radiation dose data—both the computed tomography dose index (CTDI) and the dose length product (DLP)—were recorded on a checklist at the end of each examination. Using the European Guidelines on Quality Criteria for Computed Tomography, 4 radiologists conducted a visual grading analysis and rated the level of visibility of 6 anatomical structures considered necessary to produce images of high quality. The mean CTDI(vol) and DLP values were statistically significantly higher (P <.05) with the sequential scans (CTDI(vol): 22.06 mGy; DLP: 304.60 mGy • cm) than with the spiral scans (CTDI(vol): 14.94 mGy; DLP: 229.10 mGy • cm). The mean image quality rating scores for all criteria of the sequential scanning technique were statistically significantly higher (P <.05) in the visual grading analysis than those of the spiral scanning technique. In this local study, the sequential technique was preferred over the spiral technique for both overall image quality and differentiation between gray and white matter in brain CT scans. Other similar studies counter this finding. The radiation dose seen with the sequential CT scanning technique was significantly higher than that seen with the spiral CT scanning technique. However, image quality with the sequential technique was statistically significantly superior (P <.05).
NASA Astrophysics Data System (ADS)
Nystrøm, G. M.; Ottosen, L. M.; Villumsen, A.
2003-05-01
In this work, sequential extraction is performed on harbour sediment in order to evaluate the electrodialytic remediation potential of harbour sediments. Sequential extraction was performed on a sample of Norwegian harbour sediment, both on the original sediment and after the sediment was treated with acid. The results from the sequential extraction show that 75% of the Zn and Pb and about 50% of the Cu are found in the most mobile phases in the original sediment, and more than 90% of the Zn and Pb and 75% of the Cu are found in the most mobile phase in the sediment treated with acid. Electrodialytic remediation experiments were made. The method uses a low direct current as the cleaning agent, removing the heavy metals towards the anode and cathode according to the charge of the heavy metals in the electric field. The electrodialytic experiments show that up to 50% Cu, 85% Zn and 60% Pb can be removed after 20 days. Thus, there is still potential for a higher removal, with some changes in the experimental set-up and a longer remediation time. The experiments show that sequential extraction can be used to predict the electrodialytic remediation potential of harbour sediments.
How, Jonathan; Minden, Mark D.; Brian, Leber; Chen, Eric X.; Brandwein, Joseph; Schuh, Andre C.; Schimmer, Aaron D.; Gupta, Vikas; Webster, Sheila; Degelder, Tammy; Haines, Patricia; Stayner, Lee-Anne; McGill, Shauna; Wang, Lisa; Piekarz, Richard; Wong, Tracy; Siu, Lillian L.; Espinoza-Delgado, Igor; Holleran, Julianne L.; Egorin, Merrill J.; Yee, Karen W. L.
2015-01-01
This phase I trial evaluated two schedules of escalating vorinostat in combination with decitabine every 28 days: (i) sequential or (ii) concurrent. There were three dose-limiting toxicities: grade 3 fatigue and generalized muscle weakness on the sequential schedule (n = 1) and grade 3 fatigue on the concurrent schedule (n = 2). The maximum tolerated dose was not reached on either planned schedule. The overall response rate (ORR) was 23% (three complete response [CR], two CR with incomplete blood count recovery [CRi], one partial response [PR] and two morphological leukemia-free state [MLFS]). The ORR for all and previously untreated patients in the sequential arm was 13% (one CRi; one MLFS) and 0%, compared to 30% (three CR; one CRi; one PR; one MLFS) and 36% in the concurrent arm (p = 0.26 for both), respectively. Decitabine plus vorinostat was safe and has clinical activity in patients with previously untreated acute myeloid leukemia. Responses appear higher with the concurrent dose schedule. Cumulative toxicities may limit long-term usage on the current dose/schedules. PMID:25682963
Intra-storm variability and soluble fractionation was explored for summer-time rain events in Steubenville, Ohio to evaluate the physical processes controlling mercury (Hg) in wet deposition in this industrialized region. Comprehensive precipitation sample collection was conducte...
NASA Astrophysics Data System (ADS)
Peña-Vázquez, E.; Barciela-Alonso, M. C.; Pita-Calvo, C.; Domínguez-González, R.; Bermejo-Barrera, P.
2015-09-01
The objective of this work is to develop a method for the determination of metals in saline matrices using high-resolution continuum source flame atomic absorption spectrometry (HR-CS FAAS). The SFS 6 module for sample injection was used in the manual mode, and flame operating conditions were selected. The main absorption lines were used for all the elements, and the number of selected analytical pixels was 5 (CP±2) for Cd, Cu, Fe, Ni, Pb and Zn, and 3 pixels (CP±1) for Mn. Samples were acidified (0.5% (v/v) nitric acid), and the standard addition method was used for the sequential determination of the analytes in diluted samples (1:2). The method showed good precision (RSD(%) < 4%, except for Pb (6.5%)) and good recoveries. Accuracy was checked by the analysis of an SPS-WW2 wastewater reference material diluted with synthetic seawater (dilution 1:2), showing good agreement between certified and experimental results.
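The standard-addition calculation behind such determinations can be sketched as follows. The signal values are hypothetical (not from the paper), and the sketch assumes the 1:2 dilution corresponds to a factor of two; the unknown concentration is recovered from the magnitude of the x-axis intercept of the addition line, C = intercept / slope.

```python
# Minimal sketch of the standard-addition method (hypothetical absorbances):
# fit signal vs. added-standard concentration, then extrapolate to zero signal.
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return m, my - m * mx

added = [0.0, 1.0, 2.0, 3.0]            # added standard (e.g. ug/L), hypothetical
signal = [0.120, 0.180, 0.240, 0.300]   # measured absorbance, hypothetical

slope, intercept = linear_fit(added, signal)
conc_in_diluted = intercept / slope      # concentration in the measured dilution
conc_in_sample = conc_in_diluted * 2     # undo the assumed 1:2 (factor 2) dilution
print(conc_in_sample)
```

On this noise-free synthetic line (signal = 0.12 + 0.06·added), the recovered sample concentration is 4.0 in the same units as the additions.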
Dong, Yuwen; Deshpande, Sunil; Rivera, Daniel E; Downs, Danielle S; Savage, Jennifer S
2014-06-01
Control engineering offers a systematic and efficient method to optimize the effectiveness of individually tailored treatment and prevention policies known as adaptive or "just-in-time" behavioral interventions. The nature of these interventions requires assigning dosages at categorical levels, which has been addressed in prior work using Mixed Logical Dynamical (MLD)-based hybrid model predictive control (HMPC) schemes. However, certain requirements of adaptive behavioral interventions that involve sequential decision making have not been comprehensively explored in the literature. This paper presents an extension of the traditional MLD framework for HMPC by representing the requirements of sequential decision policies as mixed-integer linear constraints. This is accomplished with user-specified dosage sequence tables, manipulation of one input at a time, and a switching time strategy for assigning dosages at time intervals less frequent than the measurement sampling interval. A model developed for a gestational weight gain (GWG) intervention is used to illustrate the generation of these sequential decision policies and their effectiveness for implementing adaptive behavioral interventions involving multiple components.
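One of the sequential-decision requirements named above (dosage sequences restricted by a user-specified table, with gradual level changes) can be illustrated as a plain constraint check. This is a hypothetical sketch, not the paper's mixed-integer HMPC formulation: the levels and the one-level-per-step rule are illustrative assumptions.

```python
# Hypothetical sketch: validate that a categorical dosage sequence respects a
# "move at most one level per decision point" rule, the kind of requirement
# encoded above as mixed-integer linear constraints in the HMPC formulation.
LEVELS = [0, 1, 2, 3]  # categorical dosage levels (illustrative)

def valid_sequence(doses, max_step=1):
    """True iff all dosages are admissible levels and every transition
    moves at most max_step levels."""
    return all(d in LEVELS for d in doses) and \
           all(abs(b - a) <= max_step for a, b in zip(doses, doses[1:]))

print(valid_sequence([0, 1, 1, 2, 3]))  # True: one level at a time
print(valid_sequence([0, 2, 3]))        # False: jumps two levels at once
```

In the MLD/HMPC setting this rule becomes a set of linear inequalities on binary dosage indicators; the check above is only the feasibility test those constraints enforce.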
Structural characterization of polysaccharides from bamboo
NASA Astrophysics Data System (ADS)
Kamil, Ruzaimah Nik Mohamad; Yusuf, Nur'aini Raman; Yunus, Normawati M.; Yusup, Suzana
2014-10-01
The alkaline- and water-soluble polysaccharides were isolated by sequential extractions with distilled water and 60% ethanol containing 1%, 5% and 8% NaOH. The samples were prepared at 60 °C for 3 h from local bamboo. The functional groups of the samples were examined using FTIR analysis. The largest amount of precipitate was obtained using 60% ethanol containing 8% NaOH, with a yield of 2.6%. The first three residues, isolated by sequential extractions with distilled water and 60% ethanol containing 1% and 5% NaOH, were barely visible after filtering with cellulose filter paper. The FTIR results showed that the water-soluble polysaccharides consisted mainly of OH group, C
Liu, Xiaoxia; Tian, Miaomiao; Camara, Mohamed Amara; Guo, Liping; Yang, Li
2015-10-01
We present sequential CE analysis of amino acids and the L-asparaginase-catalyzed enzyme reaction, combining on-line derivatization, optically gated (OG) injection and commercially available UV-Vis detection. Various experimental conditions for sequential OG-UV/vis CE analysis were investigated and optimized by analyzing a standard mixture of amino acids. High reproducibility of the sequential CE analysis was demonstrated with RSD values (n = 20) of 2.23, 2.57, and 0.70% for peak heights, peak areas, and migration times, respectively, and LODs of 5.0 μM (for asparagine) and 2.0 μM (for aspartic acid) were obtained. With the application of the OG-UV/vis CE analysis, a sequential online CE enzyme assay of the L-asparaginase-catalyzed enzyme reaction was carried out by automatically and continuously monitoring the substrate consumption and the product formation every 12 s from the beginning to the end of the reaction. The Michaelis constants for the reaction were obtained and were found to be in good agreement with the results of traditional off-line enzyme assays. The study demonstrates the feasibility and reliability of integrating OG injection with UV/vis detection for sequential online CE analysis, which could be of potential value for online monitoring of various chemical reactions and bioprocesses. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
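Estimating a Michaelis constant from initial-rate data of the kind the online assay produces can be sketched with the classical Lineweaver-Burk linearization, 1/v = (Km/Vmax)(1/[S]) + 1/Vmax. The rates below are synthetic (generated from assumed Km and Vmax values, not the paper's results), so the fit recovers them exactly.

```python
# Sketch: recover Km and Vmax from (substrate, rate) pairs via the
# Lineweaver-Burk double-reciprocal plot. Data are synthetic/noise-free.
def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)
    return m, my - m * mx

S = [0.05, 0.1, 0.2, 0.5, 1.0]               # substrate (mM), hypothetical
Vmax_true, Km_true = 2.0, 0.25               # assumed "true" parameters
v = [Vmax_true * s / (Km_true + s) for s in S]  # Michaelis-Menten rates

slope, intercept = linear_fit([1 / s for s in S], [1 / r for r in v])
Vmax, Km = 1 / intercept, slope / intercept
print(Vmax, Km)   # recovers 2.0 and 0.25 on this noise-free data
```

With real, noisy rate measurements a direct nonlinear fit of the Michaelis-Menten equation is generally preferred over the double-reciprocal transform, which amplifies error at low substrate concentrations.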
Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity
Gordiz, Kiarash; Singh, David J.; Henry, Asegun
2015-01-29
In this report we compare time sampling and ensemble averaging as two different methods available for phase space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures, using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach, and show that both can reduce the total simulation time as compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially and therefore phase space averaging is achieved through sequential operations. On the other hand, with ensemble averaging, phase space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times and exhibits similar overall computational effort.
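The report's central point, that a phase-space average can be accumulated either as one long sequential time average or as many short independent ensemble averages, can be illustrated with a toy correlated signal. This is not the paper's Green-Kubo calculation: an AR(1) process stands in for a slowly decorrelating MD observable.

```python
# Toy illustration: time averaging vs. ensemble averaging of a correlated
# observable. Both converge to the same stationary mean (0 here), but the
# ensemble members are independent and can be run in parallel.
import random

random.seed(0)

def ar1_series(n, phi=0.9):
    """Correlated toy signal, a stand-in for a decorrelating MD observable."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

# One long trajectory: sequential time averaging.
time_avg = sum(ar1_series(100_000)) / 100_000

# Many short independent trajectories: ensemble averaging (parallelizable,
# since no member needs data from any other).
members = [ar1_series(1_000) for _ in range(100)]
ens_avg = sum(sum(m) for m in members) / 100_000

print(time_avg, ens_avg)
```

Both estimates agree within statistical error for the same total number of samples, which is the equivalence the ensemble-generation schemes in the report exploit.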
Some sequential, distribution-free pattern classification procedures with applications
NASA Technical Reports Server (NTRS)
Poage, J. L.
1971-01-01
Some sequential, distribution-free pattern classification techniques are presented. The decision problem to which the proposed classification methods are applied is that of discriminating between two kinds of electroencephalogram responses recorded from a human subject: spontaneous EEG and EEG driven by a stroboscopic light stimulus at the alpha frequency. The classification procedures proposed make use of the theory of order statistics. Estimates of the probabilities of misclassification are given. The procedures were tested on Gaussian samples and the EEG responses.
Simplified pupal surveys of Aedes aegypti (L.) for entomologic surveillance and dengue control.
Barrera, Roberto
2009-07-01
Pupal surveys of Aedes aegypti (L.) are useful indicators of risk for dengue transmission, although sample sizes for reliable estimations can be large. This study explores two methods for making pupal surveys more practical yet reliable and used data from 10 pupal surveys conducted in Puerto Rico during 2004-2008. The number of pupae per person for each sampling followed a negative binomial distribution, thus showing aggregation. One method found a common aggregation parameter (k) for the negative binomial distribution, a finding that enabled the application of a sequential sampling method requiring few samples to determine whether the number of pupae/person was above a vector density threshold for dengue transmission. A second approach used the finding that the mean number of pupae/person is correlated with the proportion of pupa-infested households and calculated equivalent threshold proportions of pupa-positive households. A sequential sampling program was also developed for this method to determine whether observed proportions of infested households were above threshold levels. These methods can be used to validate entomological thresholds for dengue transmission.
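The first method above (a sequential test on negative binomial counts with a known common aggregation parameter k) is in the spirit of Wald's sequential probability ratio test. The sketch below is a generic SPRT for negative binomial data, not the published plan; the threshold means, k, error rates and counts are all hypothetical.

```python
# Sketch of a Wald SPRT deciding whether a negative binomial mean is below m0
# or above m1, with known aggregation parameter k (all numbers hypothetical).
import math

def nb_sprt(observations, m0, m1, k, alpha=0.05, beta=0.05):
    """Return ('below', n), ('above', n), or ('continue', n) for m0 < m1."""
    A = math.log((1 - beta) / alpha)   # decide 'above' when LLR >= A
    B = math.log(beta / (1 - alpha))   # decide 'below' when LLR <= B
    # Per-observation log-likelihood ratio for NB with mean m, shape k:
    # LLR(x) = d*x + c, from the NB pmf ratio at means m1 vs m0.
    d = math.log(m1 * (m0 + k) / (m0 * (m1 + k)))  # coefficient of the count
    c = k * math.log((m0 + k) / (m1 + k))          # constant per sample
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += d * x + c
        if llr >= A:
            return "above", n
        if llr <= B:
            return "below", n
    return "continue", len(observations)

# Hypothetical per-household counts:
print(nb_sprt([0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], m0=0.5, m1=2.0, k=0.8))
print(nb_sprt([3, 5, 2, 4, 6, 3], m0=0.5, m1=2.0, k=0.8))
```

Sparse counts drive the cumulative log-likelihood ratio down toward the "below threshold" boundary, while a few large counts push it quickly over the "above threshold" boundary, which is why such plans need few samples when densities are clearly on one side.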
Kawai, Akira; Umeda, Toru; Wada, Takuro; Ihara, Koichiro; Isu, Kazuo; Abe, Satoshi; Ishii, Takeshi; Sugiura, Hideshi; Araki, Nobuhito; Ozaki, Toshifumi; Yabe, Hiroo; Hasegawa, Tadashi; Tsugane, Shoichiro; Beppu, Yasuo
2005-05-01
Doxorubicin and ifosfamide are the two most active agents used to treat soft tissue sarcomas. However, because of their overlapping side effects, concurrent administration to achieve optimal doses of each agent is difficult. We therefore conducted a Phase II trial to investigate the efficacy and feasibility of a novel alternating sequential chemotherapy regimen consisting of high-dose ifosfamide and doxorubicin/cyclophosphamide in advanced adult non-small round cell soft tissue sarcomas. Adult patients with non-small round cell soft tissue sarcomas were enrolled. The treatment consisted of four sequential courses of chemotherapy, planned for every 3 weeks. Cycles 1 and 3 consisted of ifosfamide (14 g/m²), and cycles 2 and 4 consisted of doxorubicin (60 mg/m²) and cyclophosphamide (1200 mg/m²). Forty-two patients (median age 47 years) were enrolled. Of the 36 assessable patients, 1 complete response and 16 partial responses were observed, for a response rate of 47.2%. Responses were observed in 57% of patients who had received no previous chemotherapy and 13% of those who had previously undergone chemotherapy. Grade 3-4 neutropenia was observed during 70% of all cycles. Sequential administration of high-dose ifosfamide and doxorubicin/cyclophosphamide has promising activity with manageable side effects in patients with advanced adult non-small round cell soft tissue sarcomas.
Parallelization of sequential Gaussian, indicator and direct simulation algorithms
NASA Astrophysics Data System (ADS)
Nunes, Ruben; Almeida, José A.
2010-08-01
Improving the performance and robustness of algorithms on new high-performance parallel computing architectures is a key issue in efficiently performing 2D and 3D studies with large amounts of data. In geostatistics, sequential simulation algorithms are good candidates for parallelization. When compared with other computational applications in geosciences (such as fluid flow simulators), sequential simulation software is not extremely computationally intensive, but parallelization can make it more efficient and creates alternatives for its integration in inverse modelling approaches. This paper describes the implementation and benchmarking of a parallel version of the three classic sequential simulation algorithms: direct sequential simulation (DSS), sequential indicator simulation (SIS) and sequential Gaussian simulation (SGS). For this purpose, the source used was GSLIB, but the entire code was extensively modified to take into account the parallelization approach and was also rewritten in the C programming language. The paper also explains in detail the parallelization strategy and the main modifications. Regarding the integration of secondary information, the DSS algorithm is able to perform simple kriging with local means, kriging with an external drift and collocated cokriging with both local and global correlations. SIS includes a local correction of probabilities. Finally, a brief comparison is presented of simulation results using one, two and four processors. All performance tests were carried out on 2D soil data samples. The source code is completely open source and easy to read. It should be noted that the code is only fully compatible with Microsoft Visual C and should be adapted for other systems/compilers.
Towards efficient multi-scale methods for monitoring sugarcane aphid infestations in sorghum
USDA-ARS?s Scientific Manuscript database
We discuss approaches and issues involved with developing optimal monitoring methods for sugarcane aphid infestations (SCA) in grain sorghum. We discuss development of sequential sampling methods that allow for estimation of the number of aphids per sample unit, and statistical decision making rela...
Precipitation as a chemical and meteorological phenomenon
Francis J. Berlandi; Donald G. Muldoon; Harvey S. Rosenblum; Lloyd L. Schulman
1976-01-01
Sequential rain and snow sampling has been performed at Burlington and Concord, Massachusetts. The samples were collected during 1974 and 1975 in one-quarter inch and one inch rain equivalents, and chemical analyses were performed on the aliquots. Meteorological data were documented at the time of collection.
Radio Frequency Ablation Registration, Segmentation, and Fusion Tool
McCreedy, Evan S.; Cheng, Ruida; Hemler, Paul F.; Viswanathan, Anand; Wood, Bradford J.; McAuliffe, Matthew J.
2008-01-01
The Radio Frequency Ablation Segmentation Tool (RFAST) is a software application developed using NIH's Medical Image Processing Analysis and Visualization (MIPAV) API for the specific purpose of assisting physicians in the planning of radio frequency ablation (RFA) procedures. The RFAST application sequentially leads the physician through the steps necessary to register, fuse, segment, visualize and plan the RFA treatment. Three-dimensional volume visualization of the CT dataset with segmented 3D surface models enables the physician to interactively position the ablation probe to simulate burns and to semi-manually simulate sphere packing in an attempt to optimize probe placement. PMID:16871716
Statistical process control: a practical application for hospitals.
VanderVeen, L M
1992-01-01
A six-step plan based on using statistics was designed to improve quality in the central processing and distribution department of a 223-bed hospital in Oakland, CA. This article describes how the plan was implemented sequentially, starting with the crucial first step of obtaining administrative support. The QI project succeeded in overcoming beginners' fear of statistics and in training both managers and staff to use inspection checklists, Pareto charts, cause-and-effect diagrams, and control charts. The best outcome of the program was the increased commitment to quality improvement by the members of the department.
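The control-chart step of such a plan is straightforward to compute. A hedged sketch of three-sigma X-bar limits follows; the subgroup data, standard deviation, and subgroup size are invented for illustration:

```python
import statistics

def xbar_limits(subgroup_means, subgroup_sd, n):
    """Three-sigma control limits for an X-bar chart.

    subgroup_means: mean of each rational subgroup
    subgroup_sd: estimated process standard deviation
    n: subgroup size
    Returns (lower control limit, center line, upper control limit).
    """
    center = statistics.mean(subgroup_means)
    margin = 3 * subgroup_sd / n ** 0.5
    return center - margin, center, center + margin

# Illustrative data: daily defect counts per 25 trays inspected
means = [1.2, 0.8, 1.0, 1.4, 0.9, 1.1]
lcl, cl, ucl = xbar_limits(means, subgroup_sd=0.5, n=25)
print(round(lcl, 2), round(cl, 2), round(ucl, 2))  # → 0.77 1.07 1.37
```

Points falling outside the limits, or systematic runs inside them, would prompt the cause-and-effect investigation the article describes.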
Weibel, Daniel; Schelling, Esther; Bonfoh, Bassirou; Utzinger, Jürg; Hattendorf, Jan; Abdoulaye, Mahamat; Madjiade, Toguina; Zinsstag, Jakob
2008-11-01
There is a pressing need for baseline demographic and health-related data to plan, implement and evaluate health interventions in developing countries, and to monitor progress towards international development goals. However, mobile pastoralists, i.e. people who depend on a livestock production system and follow their herds as they move, remain marginalized from rural development plans and interventions. The fact that mobile people are hard to reach and stay in contact with is a plausible reason why they are underrepresented in national censuses and/or alternative sequential sample survey systems. We present a proof-of-concept of monitoring highly mobile, pastoral people by recording demographic and health-related data from 933 women and 2020 children and establishing a biometric identification system (BIS) based on the registration and identification of digital fingerprints. Although only 22 women, representing 2.4% of the total registered women, were encountered twice in the four survey rounds, the approach implemented is shown to be feasible. The BIS described here is linked to a geographical information system to facilitate the creation of the first health and demographic surveillance system in a mobile, pastoralist setting. Our ultimate goal is to implement and monitor interventions with the "one health" concept, thus integrating and improving human, animal and ecosystem health.
Adaptive graph-based multiple testing procedures
Klinglmueller, Florian; Posch, Martin; Koenig, Franz
2016-01-01
Multiple testing procedures defined by directed, weighted graphs have recently been proposed as an intuitive visual tool for constructing multiple testing strategies that reflect the often complex contextual relations between hypotheses in clinical trials. Many well-known sequentially rejective tests, such as (parallel) gatekeeping tests or hierarchical testing procedures, are special cases of the graph-based tests. We generalize these graph-based multiple testing procedures to adaptive trial designs with an interim analysis. These designs permit mid-trial design modifications based on unblinded interim data as well as external information, while providing strong familywise error rate control. To maintain the familywise error rate, the adaptation rule does not need to be prespecified in detail. Because the adaptive test does not require knowledge of the multivariate distribution of test statistics, it is applicable in a wide range of scenarios including trials with multiple treatment comparisons, endpoints or subgroups, or combinations thereof. Examples of adaptations are dropping of treatment arms, selection of subpopulations, and sample size reassessment. If, in the interim analysis, it is decided to continue the trial as planned, the adaptive test reduces to the originally planned multiple testing procedure. Only if adaptations are actually implemented does an adjusted test need to be applied. The procedure is illustrated with a case study and its operating characteristics are investigated by simulations. PMID:25319733
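The fixed-sample building block being generalized here is the sequentially rejective graphical procedure with its standard weight-propagation update rules (in the style of Bretz et al.). A minimal sketch, using a two-hypothesis Holm graph purely as an illustrative example:

```python
def graph_test(pvals, weights, G, alpha=0.025):
    """Sequentially rejective graphical multiple testing procedure.

    pvals: p-value for each hypothesis
    weights: initial fractions of alpha assigned to each hypothesis
    G[i][j]: fraction of i's weight passed to j when i is rejected
    Returns the set of indices of rejected hypotheses.
    """
    active = set(range(len(pvals)))
    w = list(weights)
    G = [row[:] for row in G]
    rejected = set()
    while True:
        # find an active hypothesis significant at its current local level
        j = next((i for i in active if pvals[i] <= w[i] * alpha), None)
        if j is None:
            return rejected
        rejected.add(j)
        active.discard(j)
        # pass the rejected node's weight along its outgoing edges and
        # rewire the remaining graph (standard update rules)
        for i in active:
            w[i] += w[j] * G[j][i]
            for k in active:
                if k == i:
                    continue
                denom = 1.0 - G[i][j] * G[j][i]
                G[i][k] = (G[i][k] + G[i][j] * G[j][k]) / denom if denom > 0 else 0.0

# Holm's procedure expressed as a graph: equal weights, all weight
# passed to the other hypothesis on rejection
print(graph_test([0.01, 0.04], [0.5, 0.5], [[0, 1], [1, 0]], alpha=0.05))  # → {0, 1}
```

After the first rejection the full level alpha is recycled to the remaining hypothesis, which is why the second p-value of 0.04 is also rejected at overall level 0.05.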
Fish Processed Production Planning Using Integer Stochastic Programming Model
NASA Astrophysics Data System (ADS)
Firmansyah, Mawengkang, Herman
2011-06-01
Fish and its processed products are the most affordable source of animal protein in the diet of most people in Indonesia. The goal in production planning is to meet customer demand over a fixed time horizon divided into planning periods by optimizing the trade-off between economic objectives such as production cost and customer satisfaction level. The major decisions are production and inventory levels for each product and the size of the workforce in each planning period. In this paper we consider the management of a small-scale traditional business in North Sumatera Province which processes fish into several local seafood products. The inherent uncertainty of data (e.g. demand, fish availability), together with the sequential evolution of data over time, leads the production planning problem to a nonlinear mixed-integer stochastic programming model. We use a scenario-generation-based approach and feasible neighborhood search for solving the model. The results, which show the amount of each processed fish product and the workforce needed in each planning horizon, are presented.
ERIC Educational Resources Information Center
Murray, Laura Carolyn
2017-01-01
This dissertation study employs an exploratory sequential mixed methods design to investigate how emerging adults with psychiatric disabilities plan for and transition to and through college. Special attention is paid to how disclosure of disability status in educational contexts can influence both educational and recovery outcomes. Though more…
ERIC Educational Resources Information Center
Sadaf, Ayesha; Newby, Timothy J.; Ertmer, Peggy A.
2016-01-01
The purpose of the study was to investigate factors that predict preservice teachers' intentions and actual uses of Web 2.0 tools in their classrooms. A two-phase, mixed method, sequential explanatory design was used. The first phase explored factors, based on the decomposed theory of planned behavior, that predict preservice teachers' intentions…
ERIC Educational Resources Information Center
Viederman, Stephen
Population education is a planned, integrated, and sequential approach to population learning. It is defined as the process by which the student investigates the nature and meaning of population processes and characteristics, the causes of population changes, and the consequences of these for himself, his family, his society, and the world. Its…
A Sequential Approach to Implant-Supported Overdentures.
Kosinski, Timothy
2016-03-01
Fabrication of implant-supported maxillary or mandibular overdentures can seem a difficult procedure. Many things can go wrong and/or go unnoticed until fabrication has been completed. Implants must be correctly surgically placed in viable bone at the proper angulation and spacing within an arch. The type of attachment must be considered, and future treatment of the appliance should be simple and efficient. The appliance must function not only initially but also for many years to come. The author has found the GPS attachment to be an ideal tool to achieve the goals of retention and stability. Careful planning is the most important part of this process, and the dentist should thoroughly understand the benefits and risks of creating overdentures. By sequentially planning and treating these types of cases, the patient is able to function reasonably during the stages of implant healing. The final prosthesis is created, and the remaining teeth that held the transitional appliance in place are removed on the day of final seating. This is an excellent, simplified retentive system option for patients who are anxious about losing their teeth, even teeth that are diseased and unesthetic.
NASA Astrophysics Data System (ADS)
Izzati, Munifatul; Haryanti, Sri; Parman, Sarjana
2018-05-01
Gracilaria is widely known as a source of essential trace elements. However, this red seaweed also has great potential to be developed into commercial products. This study examined the sequential pattern of essential trace element composition in fresh Gracilaria verrucosa and a selection of its derived products, namely extracted agar, Gracilaria salt and Gracilaria residue. The sample was collected from a brackish water pond located in the northern part of Semarang, Central Java. The collected sample was dried under the sun and subsequently processed into the aforementioned products. The Gracilaria salt was obtained by soaking the sun-dried Gracilaria overnight in fresh water; the resulting salt solution was then boiled down, leaving crystalline salt. Extracted agar was obtained with an alkali agar extraction method, and the remaining material was considered Gracilaria residue. The entire process was repeated three times. The composition of trace elements was examined using ICP-MS spectrometry, and the collected data were analyzed by single-factor ANOVA. The resulting sequential patterns of essential trace element composition were compared, with regular table salt used as a control. Results from this study revealed that Gracilaria verrucosa and all of its derived products share a similar pattern of essential trace element composition, where Mn>Zn>Cu>Mo. This pattern is also similar across subspecies of Gracilaria from different locations and seasons. However, Gracilaria salt has a distinctly different sequential pattern of essential trace element composition compared to table salt.
Patterns and Prevalence of Core Profile Types in the WPPSI Standardization Sample.
ERIC Educational Resources Information Center
Glutting, Joseph J.; McDermott, Paul A.
1990-01-01
Found most representative subtest profiles for 1,200 children comprising standardization sample of Wechsler Preschool and Primary Scale of Intelligence (WPPSI). Grouped scaled scores from WPPSI subtests according to similar level and shape using sequential minimum-variance cluster analysis with independent replications. Obtained final solution of…
Depression and Delinquency Covariation in an Accelerated Longitudinal Sample of Adolescents
ERIC Educational Resources Information Center
Kofler, Michael J.; McCart, Michael R.; Zajac, Kristyn; Ruggiero, Kenneth J.; Saunders, Benjamin E.; Kilpatrick, Dean G.
2011-01-01
Objectives: The current study tested opposing predictions stemming from the failure and acting out theories of depression-delinquency covariation. Method: Participants included a nationwide longitudinal sample of adolescents (N = 3,604) ages 12 to 17. Competing models were tested with cohort-sequential latent growth curve modeling to determine…
USDA-ARS?s Scientific Manuscript database
This research presents a sensitive and confirmatory multi-residue method for mequindox (MEQ), quinocetone (QCT), and their 11 metabolites in chicken and pork samples. After extracted with acetonitrile-ethyl acetate, acidulated, and extracted again with ethyl acetate sequentially, each sample was pu...
Thermogravimetric and differential thermal analysis of potassium bicarbonate contaminated cellulose
A. Broido
1966-01-01
When samples undergo a complicated set of simultaneous and sequential reactions, as cellulose does on heating, results of thermogravimetric and differential thermal analyses are difficult to interpret. Nevertheless, careful comparison of pure and contaminated samples, pyrolyzed under identical conditions, can yield useful information. In these experiments TGA and DTA...
ENVIRONMENTAL TECHNOLOGY VERIFICATION REPORT, PCB DETECTION TECHNOLOGY, HYBRIZYME DELFIA TM ASSAY
The DELFIA PCB Assay is a solid-phase time-resolved fluoroimmunoassay based on the sequential addition of sample extract and europium-labeled PCB tracer to a monoclonal antibody reagent specific for PCBs. In this assay, the antibody reagent and sample extract are added to a strip...
Astolfi, Maria Luisa; Di Filippo, Patrizia; Gentili, Alessandra; Canepari, Silvia
2017-11-01
We describe the optimization and validation of a sequential extractive method for the determination of the polycyclic aromatic hydrocarbons (PAHs) and elements (Al, As, Cd, Cr, Cu, Fe, Mn, Ni, Pb, Se, V and Zn) that are chemically fractionated into bio-accessible and mineralized residual fractions on a single particulate matter filter. The extraction is performed by automatic accelerated solvent extraction (ASE); samples are sequentially treated with dichloromethane/acetone (4:1) for PAHs extraction and acetate buffer (0.01 M; pH 4.5) for elements extraction (bio-accessible fraction). The remaining solid sample is then collected and subjected to acid digestion with HNO3:H2O2 (2:1) to determine the mineralized residual element fraction. We also describe a homemade ASE cell that reduces the blank values for most elements; in this cell, the steel frit was replaced by a Teflon pierced disk and a Teflon cylinder was used as the filler. The performance of the proposed method was evaluated in terms of recovery from standard reference material (SRM 1648 and SRM 1649a) and repeatability. The equivalence between the new ASE method and conventional methods was verified for PAHs and for bio-accessible and mineralized residual fractions of elements on PM10 twin filters. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Scherer, Aaron M.; Windschitl, Paul D.; O'Rourke, Jillian; Smith, Andrew R.
2012-01-01
People must often engage in sequential sampling in order to make predictions about the relative quantities of two options. We investigated how directional motives influence sampling selections and resulting predictions in such cases. We used a paradigm in which participants had limited time to sample items and make predictions about which side of…
ERIC Educational Resources Information Center
Cizdziel, James V.
2011-01-01
In this laboratory experiment, students quantitatively determine the concentration of an element (mercury) in an environmental or biological sample while comparing and contrasting the fundamental techniques of atomic absorption spectrometry (AAS) and atomic fluorescence spectrometry (AFS). A mercury analyzer based on sample combustion,…
NASA Astrophysics Data System (ADS)
Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli
2018-01-01
Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is varied through a polynomial response surface function to capture the characteristics of a high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on a sequential sampling method is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient for an aircraft are provided to demonstrate the approximation capability of the proposed approach, as well as three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
Identifying High-Rate Flows Based on Sequential Sampling
NASA Astrophysics Data System (ADS)
Zhang, Yu; Fang, Binxing; Luo, Hao
We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement and network security such as detection of distributed denial of service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possible millions of flows needs correspondingly large high speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost and processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on fixed sample size test (FSST) which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore the second novel method based on truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove the low-rate flows and identify the high-rate flows at the early stage which can reduce the memory cost and identification time respectively. According to the way to determine the parameters in TSPRT, two versions of TSPRT are proposed: TSPRT-M which is suitable when low memory cost is preferred and TSPRT-T which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement as compared to previously proposed methods.
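Under 1-out-of-k packet sampling, whether each sampled packet belongs to a given flow is approximately a Bernoulli trial on the flow's rate share, so identification reduces to a Bernoulli SPRT with truncation. A minimal sketch follows; the thresholds, error rates, and traffic pattern are illustrative and not the paper's TSPRT-M/TSPRT-T parameterizations:

```python
import math

def truncated_sprt(observations, p0, p1, alpha=0.01, beta=0.01, max_n=1000):
    """Bernoulli SPRT with truncation.

    observations: iterable of 0/1 (1 = sampled packet belongs to the flow)
    p0: rate share just below the high-rate threshold; p1: just above it.
    Returns 'high', 'low', or 'undecided' (stopped without crossing).
    """
    lo = math.log(beta / (1 - alpha))
    hi = math.log((1 - beta) / alpha)
    step1 = math.log(p1 / p0)              # llr increment for a hit
    step0 = math.log((1 - p1) / (1 - p0))  # llr increment for a miss
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += step1 if x else step0
        if llr >= hi:
            return "high"
        if llr <= lo:
            return "low"
        if n >= max_n:
            break
    return "undecided"

# A flow holding ~10% of sampled packets, tested against a 4-6% band
obs = ([1] + [0] * 9) * 30  # deterministic 10% hit pattern for illustration
print(truncated_sprt(obs, p0=0.04, p1=0.06))  # → high
```

Low-rate flows drift toward the lower boundary and are discarded early, which is exactly the memory- and time-saving behaviour the sequential approach exploits.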
Avery, Taliser R; Kulldorff, Martin; Vilk, Yury; Li, Lingling; Cheetham, T Craig; Dublin, Sascha; Davis, Robert L; Liu, Liyan; Herrinton, Lisa; Brown, Jeffrey S
2013-05-01
This study describes practical considerations for implementation of near real-time medical product safety surveillance in a distributed health data network. We conducted pilot active safety surveillance comparing generic divalproex sodium to historical branded product at four health plans from April to October 2009. Outcomes reported are all-cause emergency room visits and fractures. One retrospective data extract was completed (January 2002-June 2008), followed by seven prospective monthly extracts (January 2008-November 2009). To evaluate delays in claims processing, we used three analytic approaches: near real-time sequential analysis, sequential analysis with 1.5 month delay, and nonsequential (using final retrospective data). Sequential analyses used the maximized sequential probability ratio test. Procedural and logistical barriers to active surveillance were documented. We identified 6586 new users of generic divalproex sodium and 43,960 new users of the branded product. Quality control methods identified 16 extract errors, which were corrected. Near real-time extracts captured 87.5% of emergency room visits and 50.0% of fractures, which improved to 98.3% and 68.7% respectively with 1.5 month delay. We did not identify signals for either outcome regardless of extract timeframe, and slight differences in the test statistic and relative risk estimates were found. Near real-time sequential safety surveillance is feasible, but several barriers warrant attention. Data quality review of each data extract was necessary. Although signal detection was not affected by delay in analysis, when using a historical control group differential accrual between exposure and outcomes may theoretically bias near real-time risk estimates towards the null, causing failure to detect a signal. Copyright © 2013 John Wiley & Sons, Ltd.
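The analyses here use the maximized sequential probability ratio test (maxSPRT), whose log-likelihood ratio has a simple closed form in the Poisson case. The sketch below is illustrative only: the counts, expected values, and the critical value 2.85 are made up, not taken from this surveillance study (real critical values come from exact calculations for a given alpha and surveillance length):

```python
import math

def poisson_maxsprt_llr(c, mu):
    """Log-likelihood ratio of the Poisson maxSPRT at one look.

    c: cumulative observed events in the exposed group
    mu: cumulative expected events under the null (e.g. historical rates)
    The relative-risk MLE is constrained to RR >= 1 (one-sided test),
    so the statistic is zero whenever c <= mu.
    """
    if c <= mu:
        return 0.0
    return mu - c + c * math.log(c / mu)

CRITICAL = 2.85  # illustrative placeholder for the exact critical value
for c, mu in [(3, 2.0), (7, 3.5), (12, 5.0)]:
    llr = poisson_maxsprt_llr(c, mu)
    print(c, mu, round(llr, 3), llr >= CRITICAL)
```

The statistic is evaluated at every data refresh; a signal is declared the first time it exceeds the critical value, which is what makes near real-time monitoring sensitive to the data lags discussed above.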
Maurer, Willi; Jones, Byron; Chen, Ying
2018-05-10
In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided test (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size reestimation after the first stage, using an interim estimate of the within-subject variability. One method and 3 variations of doing this were proposed by Potvin et al. Using simulation, the operating characteristics, including the empirical type I error rate, of the 4 variations (called Methods A, B, C, and D) were assessed by Potvin et al and Methods B and C were recommended. However, none of these 4 variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size reestimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size reestimation step is based on significance levels and power requirements that are conditional on the first-stage results. 
This necessitates a discussion and exploitation of the peculiar properties of the power curve of the TOST testing procedure. We illustrate our approach with an example based on a real ABE study and compare the operating characteristics of our proposed method with those of Method B of Potvin et al. Copyright © 2018 John Wiley & Sons, Ltd.
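The fixed-sample TOST procedure referred to above can be illustrated via the equivalent 90% confidence interval criterion (the 0.80-1.25 limits are the conventional ABE bounds). This sketch simplifies to a paired design on log-transformed endpoints rather than a full 2×2 crossover analysis, and the data and t quantile are invented for illustration:

```python
import math
from statistics import mean, stdev

def tost_abe(log_ratios, t_crit):
    """TOST for average bioequivalence on within-subject log(T/R) values.

    log_ratios: per-subject log(test/reference) PK values (paired setup)
    t_crit: one-sided t quantile at level 0.05 with len(log_ratios)-1 df
            (e.g. 1.753 for 15 df; supplied directly to keep this
            sketch dependency-free)
    ABE is concluded when the 90% CI for the geometric mean ratio lies
    inside [0.80, 1.25] -- equivalent to two one-sided tests at 5% each.
    """
    n = len(log_ratios)
    m = mean(log_ratios)
    se = stdev(log_ratios) / math.sqrt(n)
    lo, hi = m - t_crit * se, m + t_crit * se
    ci = (math.exp(lo), math.exp(hi))
    return ci, 0.80 <= ci[0] and ci[1] <= 1.25

# Illustrative data: 16 subjects with small log(T/R) differences
data = [0.05, -0.02, 0.10, 0.03, -0.05, 0.08, 0.01, 0.04,
        -0.03, 0.06, 0.02, 0.00, 0.07, -0.01, 0.05, 0.02]
ci, bioequivalent = tost_abe(data, t_crit=1.753)
print(tuple(round(x, 3) for x in ci), bioequivalent)  # → (1.008, 1.046) True
```

Because power depends on the within-subject variability through the standard error, an interim variability estimate that is too optimistic shrinks the CI too little, which is what motivates the sample size reestimation schemes discussed above.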
Chi, Chih-Lin; Zeng, Wenjun; Oh, Wonsuk; Borson, Soo; Lenskaia, Tatiana; Shen, Xinpeng; Tonellato, Peter J
2017-12-01
Prediction of onset and progression of cognitive decline and dementia is important both for understanding the underlying disease processes and for planning health care for populations at risk. Predictors identified in research studies are typically accessed at one point in time. In this manuscript, we argue that an accurate model for predicting cognitive status over relatively long periods requires inclusion of time-varying components that are sequentially assessed at multiple time points (e.g., in multiple follow-up visits). We developed a pilot model to test the feasibility of using either estimated or observed risk factors to predict cognitive status. We developed two models, the first using a sequential estimation of risk factors originally obtained from 8 years prior, then improved by optimization. This model can predict how cognition will change over relatively long time periods. The second model uses observed rather than estimated time-varying risk factors and, as expected, results in better prediction. This model can predict when newly observed data are acquired in a follow-up visit. The performance of both models, evaluated in 10-fold cross-validation and in various patient subgroups, provides supporting evidence for these pilot models. Each model consists of multiple base prediction units (BPUs), which were trained using the same set of data. The difference in usage and function between the two models is the source of input data: either estimated or observed data. In the next step of model refinement, we plan to integrate the two types of data together to flexibly predict dementia status and changes over time, when some time-varying predictors are measured only once and others are measured repeatedly. Computationally, both data provide upper and lower bounds for predictive performance. Copyright © 2017 Elsevier Inc. All rights reserved.
MacIsaac, Rachael L; Khatri, Pooja; Bendszus, Martin; Bracard, Serge; Broderick, Joseph; Campbell, Bruce; Ciccone, Alfonso; Dávalos, Antoni; Davis, Stephen M; Demchuk, Andrew; Diener, Hans-Christoph; Dippel, Diederik; Donnan, Geoffrey A; Fiehler, Jens; Fiorella, David; Goyal, Mayank; Hacke, Werner; Hill, Michael D; Jahan, Reza; Jauch, Edward; Jovin, Tudor; Kidwell, Chelsea S; Liebeskind, David; Majoie, Charles B; Martins, Sheila Cristina Ouriques; Mitchell, Peter; Mocco, J; Muir, Keith W; Nogueira, Raul; Saver, Jeffrey L; Schonewille, Wouter J; Siddiqui, Adnan H; Thomalla, Götz; Tomsick, Thomas A; Turk, Aquilla S; White, Philip; Zaidat, Osama; Lees, Kennedy R
2015-10-01
Endovascular treatment has been shown to restore blood flow effectively. Second-generation medical devices such as stent retrievers are now showing overwhelming efficacy in clinical trials, particularly in conjunction with intravenous recombinant tissue plasminogen activator. This statistical analysis plan utilizing a novel, sequential approach describes a prospective, individual patient data analysis of endovascular therapy in conjunction with intravenous recombinant tissue plasminogen activator agreed upon by the Thrombectomy and Tissue Plasminogen Activator Collaborative Group. This protocol will specify the primary outcome for efficacy, as 'favorable' outcome defined by the ordinal distribution of the modified Rankin Scale measured at three-months poststroke, but with modified Rankin Scales 5 and 6 collapsed into a single category. The primary analysis will aim to answer the questions: 'what is the treatment effect of endovascular therapy with intravenous recombinant tissue plasminogen activator compared to intravenous tissue plasminogen activator alone on full scale modified Rankin Scale at 3 months?' and 'to what extent do key patient characteristics influence the treatment effect of endovascular therapy?'. Key secondary outcomes include effect of endovascular therapy on death within 90 days; analyses of modified Rankin Scale using dichotomized methods; and effects of endovascular therapy on symptomatic intracranial hemorrhage. Several secondary analyses will be considered as well as expanding patient cohorts to intravenous recombinant tissue plasminogen activator-ineligible patients, should data allow. This collaborative meta-analysis of individual participant data from randomized trials of endovascular therapy vs. control in conjunction with intravenous thrombolysis will demonstrate the efficacy and generalizability of endovascular therapy with intravenous thrombolysis as a concomitant medication. © 2015 World Stroke Organization.
Simultaneous sequential monitoring of efficacy and safety led to masking of effects.
van Eekelen, Rik; de Hoop, Esther; van der Tweel, Ingeborg
2016-08-01
Usually, sequential designs for clinical trials are applied to the primary (i.e., efficacy) outcome. In practice, other outcomes (e.g., safety) will also be monitored and influence the decision whether to stop a trial early. The implications of simultaneous monitoring for trial decision making are as yet unclear. This study examines what happens to the type I error, power, and required sample sizes when one efficacy outcome and one correlated safety outcome are monitored simultaneously using sequential designs. We conducted a simulation study in the framework of a two-arm parallel clinical trial. Interim analyses on two outcomes were performed independently and simultaneously on the same data sets using four sequential monitoring designs, including O'Brien-Fleming and Triangular Test boundaries. Simulations differed in values for correlations and true effect sizes. When an effect was present in both outcomes, competition was introduced, which decreased power (e.g., from 80% to 60%). Futility boundaries for the efficacy outcome reduced overall type I errors as well as power for the safety outcome. Monitoring two correlated outcomes, given that both are essential for early trial termination, leads to masking of true effects. Scenarios must be considered carefully when designing sequential trials, and simulation results can help guide trial design. Copyright © 2016 Elsevier Inc. All rights reserved.
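The competition effect described above can be reproduced in a small simulation. The following is a minimal sketch, not the authors' code: the outcome correlation, effect sizes, boundary constant, and the O'Brien-Fleming-style stopping rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(eff_delta, saf_delta, rho, n_per_look=50, looks=4):
    """Simulate one two-arm trial monitored at `looks` interim analyses.

    Efficacy and safety outcomes are correlated bivariate normals.
    Returns which outcome (if any) first crossed its boundary.
    """
    cov = [[1.0, rho], [rho, 1.0]]
    c = 2.024  # illustrative O'Brien-Fleming-type constant for 4 looks
    ctrl = np.empty((0, 2))
    trt = np.empty((0, 2))
    for k in range(1, looks + 1):
        ctrl = np.vstack([ctrl, rng.multivariate_normal([0, 0], cov, n_per_look)])
        trt = np.vstack([trt, rng.multivariate_normal([eff_delta, saf_delta], cov, n_per_look)])
        n = ctrl.shape[0]
        # z statistics for the two outcomes at this interim look
        z = (trt.mean(axis=0) - ctrl.mean(axis=0)) / np.sqrt(2.0 / n)
        bound = c / np.sqrt(k / looks)  # boundary shrinks as information accrues
        if abs(z[0]) >= bound:
            return "efficacy"
        if abs(z[1]) >= bound:
            return "safety"
    return "none"

# With a true effect on both outcomes, the efficacy stop often pre-empts
# ("masks") detection of the safety signal:
results = [simulate_trial(0.3, 0.3, rho=0.5) for _ in range(500)]
print({r: results.count(r) for r in set(results)})
```

The same harness can be rerun with futility boundaries or different correlations to reproduce the qualitative patterns reported above.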
NASA Astrophysics Data System (ADS)
Guo, Yiqing; Jia, Xiuping; Paull, David
2018-06-01
The explosive availability of remote sensing images has challenged supervised classification algorithms such as Support Vector Machines (SVM), as training samples tend to be highly limited due to the expensive and laborious task of ground truthing. The temporal correlation and spectral similarity between multitemporal images have opened up an opportunity to alleviate this problem. In this study, an SVM-based Sequential Classifier Training (SCT-SVM) approach is proposed for multitemporal remote sensing image classification. The approach leverages the classifiers of previous images to reduce the number of training samples required for classifier training on an incoming image. For each incoming image, a rough classifier is first predicted based on the temporal trend of a set of previous classifiers. The predicted classifier is then fine-tuned into a more accurate position with current training samples. This approach can be applied progressively to sequential image data, with only a small number of training samples being required from each image. Experiments were conducted with Sentinel-2A multitemporal data over an agricultural area in Australia. Results showed that the proposed SCT-SVM achieved better classification accuracies compared with two state-of-the-art model transfer algorithms. When training data were insufficient, the overall classification accuracy of the incoming image was improved from 76.18% to 94.02% with the proposed SCT-SVM, compared with that obtained without the assistance of previous images. These results demonstrate that leveraging a priori information from previous images can provide advantageous assistance for later images in multitemporal image classification.
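The predict-then-fine-tune idea can be sketched with linear classifiers. This is not the authors' SCT-SVM implementation; the drifting synthetic data, the linear extrapolation of coefficients, and the use of scikit-learn's SGDClassifier (whose fit method accepts coef_init/intercept_init) are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def make_data(shift, n):
    """Two synthetic spectral classes whose means drift over time by `shift`."""
    X0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 5))
    X1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 5))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

# Train classifiers on two earlier "images", then extrapolate their
# parameters linearly in time to predict a classifier for the next image.
coefs, intercepts = [], []
for shift in (0.0, 0.3):
    X, y = make_data(shift, 200)
    clf = SGDClassifier(loss="hinge", max_iter=1000, tol=1e-3, random_state=0).fit(X, y)
    coefs.append(clf.coef_.copy())
    intercepts.append(clf.intercept_.copy())

coef_pred = coefs[1] + (coefs[1] - coefs[0])            # temporal trend
intercept_pred = intercepts[1] + (intercepts[1] - intercepts[0])

# Fine-tune the predicted classifier with only a few labelled samples
# from the incoming image.
X_new, y_new = make_data(0.6, 10)
clf_new = SGDClassifier(loss="hinge", max_iter=200, tol=None, random_state=0)
clf_new.fit(X_new, y_new, coef_init=coef_pred, intercept_init=intercept_pred)

X_test, y_test = make_data(0.6, 500)
print("accuracy:", clf_new.score(X_test, y_test))
```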
Su, Cheng-Kuan; Tseng, Po-Jen; Chiu, Hsien-Ting; Del Vall, Andrea; Huang, Yu-Fen; Sun, Yuh-Chang
2017-03-01
Probing tumor extracellular metabolites is a vitally important issue in current cancer biology. In this study an analytical system was constructed for the in vivo monitoring of mouse tumor extracellular hydrogen peroxide (H2O2), lactate, and glucose by means of microdialysis (MD) sampling and fluorescence determination in conjunction with a smart sequential enzymatic derivatization scheme, involving a loading sequence of fluorogenic reagent/horseradish peroxidase, microdialysate, lactate oxidase, pyruvate, and glucose oxidase, for step-by-step determination of sampled H2O2, lactate, and glucose in mouse tumor microdialysate. After optimization of the overall experimental parameters, the system's detection limit reached as low as 0.002 mM for H2O2, 0.058 mM for lactate, and 0.055 mM for glucose, based on 3 μL of microdialysate, suggesting great potential for determining tumor extracellular concentrations of lactate and glucose. Spike analyses of offline-collected mouse tumor microdialysate and monitoring of the basal concentrations of mouse tumor extracellular H2O2, lactate, and glucose, as well as those after imparting metabolic disturbance through intra-tumor administration of a glucose solution through a prior-implanted cannula, were conducted to demonstrate the system's applicability. Our results evidently indicate that hyphenation of an MD sampling device with an optimized sequential enzymatic derivatization scheme and a fluorescence spectrometer can be used successfully for multi-analyte monitoring of tumor extracellular metabolites in living animals. Copyright © 2016 Elsevier B.V. All rights reserved.
1984-09-01
...and these accompanied the sample residue through sieving to avoid sample mix-up. B. Field data sheets required the logger's initials on each page to ensure data completeness. C. Metal trays were placed to catch residue spillage during residue transfer from sieves to sample bottles. D. Sample bottles... Methodologies were comparable for all sample types and consisted of four sequential components: extraction, clean-up, gas chromatographic (GC) analysis, and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarepisheh, M; Li, R; Xing, L
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, and gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm yet exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent, and fast optimization algorithm by integrating three advanced optimization techniques: column generation, a gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. We then apply the gradient method to iteratively improve the current solution by reshaping the apertures and updating the beam angles along the gradient. The algorithm continues with a pattern search to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique was applied to a series of patient cases and significantly improved plan quality. In a head-and-neck case, for example, the left parotid gland mean dose, brainstem max dose, spinal cord max dose, and mandible mean dose were reduced by 10%, 7%, 24%, and 12%, respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage.
Conclusion: Combined use of column generation, gradient search, and pattern search algorithms provides an effective way to simultaneously optimize the large collection of station parameters and significantly improves the quality of resultant treatment plans compared with conventional VMAT or IMRT treatments.
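Of the three techniques combined above, pattern search is the simplest to illustrate in isolation. The following is a generic coordinate pattern search on a toy non-smooth objective, not the treatment-planning code; the polling scheme and step-halving rule are standard textbook choices.

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimal coordinate pattern search: poll +/- step along each axis,
    move to any improving point, otherwise halve the step size."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        if step < tol:
            break
        improved = False
        for i in range(x.size):
            for sign in (+1.0, -1.0):
                cand = x.copy()
                cand[i] += sign * step
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step *= 0.5
    return x, fx

# Non-smooth objective where a plain gradient method can stall at the kink:
f = lambda x: abs(x[0] - 1.0) + (x[1] + 2.0) ** 2
x_opt, f_opt = pattern_search(f, [5.0, 5.0])
print(x_opt, f_opt)
```

Because polling needs no derivatives, such a search can probe regions where the gradient step of the hybrid algorithm above makes no progress.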
Sequential analysis in neonatal research-systematic review.
Lava, Sebastiano A G; Elie, Valéry; Ha, Phuong Thi Viet; Jacqz-Aigrain, Evelyne
2018-05-01
As more new drugs are discovered, traditional trial designs are reaching their limits. Ten years after the adoption of the European Paediatric Regulation, we performed a systematic review on the US National Library of Medicine and Excerpta Medica databases of sequential trials involving newborns. Out of 326 identified scientific reports, 21 trials were included. They enrolled 2832 patients, of whom 2099 were analyzed: the median number of neonates included per trial was 48 (IQR 22-87), and the median gestational age was 28.7 (IQR 27.9-30.9) weeks. Eighteen trials used sequential techniques to determine sample size, while 3 used continual reassessment methods for dose-finding. In 16 studies reporting sufficient data, the sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674) with respect to a traditional trial. When the number of neonates finally included in the analysis was considered, the difference became significant: 35 (57%) patients (IQR 10 to 136.5, p = 0.0033). Sequential trial designs have not been frequently used in neonatology. They might potentially reduce the number of patients in drug trials, although this is not always the case. What is known: • In evaluating rare diseases in fragile populations, traditional designs are reaching their limits. About 20% of pediatric trials are discontinued, mainly because of recruitment problems. What is new: • Sequential trials involving newborns have been used infrequently, and only a few (n = 21) are available for analysis. • The sequential design allowed a non-significant reduction in the number of enrolled neonates by a median of 24 (31%) patients (IQR -4.75 to 136.5, p = 0.0674).
Movement of particles using sequentially activated dielectrophoretic particle trapping
Miles, Robin R.
2004-02-03
Manipulation of DNA and cells/spores using dielectrophoretic (DEP) forces to perform sample preparation protocols for polymerase chain reaction (PCR)-based assays for various applications. This is accomplished by movement of particles using sequentially activated dielectrophoretic particle trapping. DEP forces induce a dipole in particles, and these particles can be trapped in non-uniform fields. The particles can be trapped in the high-field-strength region of one set of electrodes. By switching off this field and switching on an adjacent electrode, particles can be moved down a channel with little or no flow.
O'Brien, Kathryn; Stanton, Naomi; Edwards, Adrian; Hood, Kerenza; Butler, Christopher C
2011-03-01
Due to the non-specific nature of symptoms of UTI in children and low levels of urine sampling, the prevalence of UTI amongst acutely ill children in primary care is unknown. To undertake an exploratory study of acutely ill children consulting in primary care, determine the feasibility of obtaining urine samples, and describe presenting symptoms and signs, and the proportion with UTI. Exploratory, observational study. Four general practices in South Wales. A total of 99 sequential attendees with acute illness aged less than five years. UTI defined by >10^5 organisms/ml on laboratory culture of urine. Urine samples were obtained in 75 (76%) children. Three (4%) met microbiological criteria for UTI. GPs indicated they would not normally have obtained urine samples in any of these three children. However, all had received antibiotics for suspected alternative infections. Urine sample collection is feasible from the majority of acutely ill children in primary care, including infants. Some cases of UTI may be missed if children thought to have an alternative site of infection are excluded from urine sampling. A larger study is needed to more accurately determine the prevalence of UTI in children consulting with acute illness in primary care, and to explore which symptoms and signs might help clinicians effectively target urine sampling.
Method and apparatus for telemetry adaptive bandwidth compression
NASA Technical Reports Server (NTRS)
Graham, Olin L.
1987-01-01
Methods and apparatus are provided for automatic and/or manual adaptive bandwidth compression of telemetry. An adaptive sampler samples a video signal from a scanning sensor and generates a sequence of sampled fields. Each field, together with range rate information from the sensor, is then sequentially transmitted to and stored in a multiple and adaptive field storage means. The field storage means then, in response to an automatic or manual control signal, transfers the stored sampled field signals to a video monitor in a form for sequential or simultaneous display of a desired number of stored signal fields. The sampling ratio of the adaptive sampler, the relative proportion of available communication bandwidth allocated respectively to transmitted data and video information, and the number of fields simultaneously displayed are manually or automatically selectively adjustable in functional relationship to each other and the detected range rate. In one embodiment, when relatively little or no scene motion is detected, the control signal maximizes the sampling ratio and causes simultaneous display of all stored fields, thus maximizing resolution and the bandwidth available for data transmission. When increased scene motion is detected, the control signal is adjusted accordingly to cause display of fewer fields. If greater resolution is desired, the control signal is adjusted to increase the sampling ratio.
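The control logic described, trading resolution against update rate as a function of detected motion, might be sketched as follows. The thresholds and the ratio/field values are invented for illustration and do not come from the patent.

```python
def adapt(range_rate, max_ratio=8, max_fields=4):
    """Map detected scene motion (range rate) to a sampling ratio and the
    number of stored fields to display, trading resolution for update rate.

    Thresholds and return values are hypothetical, chosen only to show the
    shape of the control policy described in the abstract."""
    if range_rate < 0.1:          # essentially static scene: maximize resolution
        return max_ratio, max_fields
    if range_rate < 1.0:          # moderate motion: intermediate settings
        return max_ratio // 2, max_fields // 2
    return 1, 1                   # fast motion: full-rate sampling, single field

# Static scenes get high ratio and all fields; fast motion gets neither.
print(adapt(0.0), adapt(0.5), adapt(5.0))
```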
NASA Technical Reports Server (NTRS)
Howard, R. A.; North, D. W.; Pezier, J. P.
1975-01-01
A new methodology is proposed for integrating planetary quarantine objectives into space exploration planning. This methodology is designed to remedy the major weaknesses inherent in the current formulation of planetary quarantine requirements. Application of the methodology is illustrated by a tutorial analysis of a proposed Jupiter Orbiter mission. The proposed methodology reformulates planetary quarantine planning as a sequential decision problem. Rather than concentrating on a nominal plan, all decision alternatives and possible consequences are laid out in a decision tree. Probabilities and values are associated with the outcomes, including the outcome of contamination. The process of allocating probabilities, which could not be made perfectly unambiguous and systematic, is replaced by decomposition and optimization techniques based on principles of dynamic programming. Thus, the new methodology provides logical integration of all available information and allows selection of the best strategy consistent with quarantine and other space exploration goals.
Button, Le; Peter, Beate; Stoel-Gammon, Carol; Raskind, Wendy H
2013-03-01
The purpose of this study was to address the hypothesis that childhood apraxia of speech (CAS) is influenced by an underlying deficit in sequential processing that is also expressed in other modalities. In a sample of 21 adults from five multigenerational families, 11 with histories of various familial speech sound disorders, 3 biologically related adults from a family with familial CAS showed motor sequencing deficits in an alternating motor speech task. Compared with the other adults, these three participants showed deficits in tasks requiring high loads of sequential processing, including nonword imitation, nonword reading and spelling. Qualitative error analyses in real word and nonword imitations revealed group differences in phoneme sequencing errors. Motor sequencing ability was correlated with phoneme sequencing errors during real word and nonword imitation, reading and spelling. Correlations were characterized by extremely high scores in one family and extremely low scores in another. Results are consistent with a central deficit in sequential processing in CAS of familial origin.
Devaluation and sequential decisions: linking goal-directed and model-based behavior
Friedel, Eva; Koch, Stefan P.; Wendt, Jean; Heinz, Andreas; Deserno, Lorenz; Schlagenhauf, Florian
2014-01-01
In experimental psychology, different experiments have been developed to assess goal-directed as compared to habitual control over instrumental decisions. As in animal studies, selective devaluation procedures have been used. More recently, sequential decision-making tasks have been designed to assess the degree of goal-directed vs. habitual choice behavior in terms of an influential computational theory of model-based compared to model-free behavioral control. As recently suggested, these different measurements are thought to reflect the same construct. Yet, there has been no attempt to directly assess the construct validity of these different measurements. In the present study, we used a devaluation paradigm and a sequential decision-making task to address this question of construct validity in a sample of 18 healthy male human participants. Correlational analysis revealed a positive association between model-based choices during sequential decisions and goal-directed behavior after devaluation, suggesting a single framework underlying both operationalizations and speaking in favor of the construct validity of both measurement approaches. Up to now, this has merely been assumed but never directly tested in humans. PMID:25136310
The parallel-sequential field subtraction techniques for nonlinear ultrasonic imaging
NASA Astrophysics Data System (ADS)
Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.
2018-04-01
Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage and are particularly sensitive to closed defects. This study utilizes two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing, a high-intensity ultrasonic beam is formed in the specimen at the focal point. In sequential focusing, however, only low-intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded; under elastic assumptions, the parallel and sequential images are expected to be identical. Here we measure the difference between these images formed from the coherent component of the field and use this to characterize the nonlinearity of closed fatigue cracks. In particular, we monitor the reduction in amplitude at the fundamental frequency at each focal point and use this metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g., the back wall or large scatterers) and allow damage to be detected at an early stage.
Judgments relative to patterns: how temporal sequence patterns affect judgments and memory.
Kusev, Petko; Ayton, Peter; van Schaik, Paul; Tsaneva-Atanasova, Krasimira; Stewart, Neil; Chater, Nick
2011-12-01
Six experiments studied relative frequency judgment and recall of sequentially presented items drawn from 2 distinct categories (i.e., city and animal). The experiments show that judged frequencies of categories of sequentially encountered stimuli are affected by certain properties of the sequence configuration. We found (a) a first-run effect whereby people overestimated the frequency of a given category when that category was the first repeated category to occur in the sequence and (b) a dissociation between judgments and recall; respondents may judge 1 event more likely than the other and yet recall more instances of the latter. Specifically, the distribution of recalled items does not correspond to the frequency estimates for the event categories, indicating that participants do not make frequency judgments by sampling their memory for individual items as implied by other accounts such as the availability heuristic (Tversky & Kahneman, 1973) and the availability process model (Hastie & Park, 1986). We interpret these findings as reflecting the operation of a judgment heuristic sensitive to sequential patterns and offer an account for the relationship between memory and judged frequencies of sequentially encountered stimuli.
Progress research of non-Cz silicon material
NASA Technical Reports Server (NTRS)
Campbell, R. B.
1983-01-01
The simultaneous diffusion of liquid boron and liquid phosphorus dopants into N-type dendritic silicon web for solar cells was investigated. It is planned that the diffusion parameters required to achieve the desired P(+)NN(+) cell structure be determined and the resultant cell properties be compared to cells produced in a sequential diffusion process. A cost analysis of the simultaneous junction formation process is proposed.
The Domino Way to Heterocycles
Padwa, Albert; Bur, Scott K.
2007-01-01
Sequential transformations enable the facile synthesis of complex target molecules from simple building blocks in a single preparative step. Their value is amplified if they also create multiple stereogenic centers. In the ongoing search for new domino processes, emphasis is usually placed on sequential reactions which occur cleanly and without forming by-products. As a prerequisite for an ideally proceeding one-pot sequential transformation, the reactivity pattern of all participating components has to be such that each building block gets involved in a reaction only when it is supposed to do so. The development of sequences that combine transformations of fundamentally different mechanisms broadens the scope of such procedures in synthetic chemistry. This mini review contains a representative sampling from the last 15 years on the kinds of reactions that have been sequenced into cascades to produce heterocyclic molecules. PMID:17940591
Some controversial multiple testing problems in regulatory applications.
Hung, H M James; Wang, Sue-Jane
2009-01-01
Multiple testing problems in regulatory applications are often more challenging than the problems of handling a set of mathematical symbols representing multiple null hypotheses under testing. In the union-intersection setting, it is important to define a family of null hypotheses relevant to the clinical questions at issue. The distinction between primary endpoint and secondary endpoint needs to be considered properly in different clinical applications. Without proper consideration, the widely used sequential gatekeeping strategies often impose too many logical restrictions to make sense, particularly for the problem of testing multiple doses and multiple endpoints, the problem of testing a composite endpoint and its component endpoints, and the problem of testing superiority and noninferiority in the presence of multiple endpoints. Partitioning the null hypotheses involved in closed testing into clinically relevant orderings or sets can be a viable alternative for resolving these illogical problems; it requires more attention from clinical trialists in defining the clinical hypotheses or clinical question(s) at the design stage. In the intersection-union setting there is little room for alleviating the stringency of the requirement that each endpoint must meet the same intended alpha level, unless the parameter space under the null hypothesis can be substantially restricted. Such restriction often requires insurmountable justification and usually cannot be supported by the internal data. Thus, a possible remedial approach to alleviate the possible conservatism resulting from this requirement is a group-sequential design strategy that starts with a conservative sample size and then utilizes an alpha spending function to possibly reach the conclusion early.
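The logical restriction imposed by sequential gatekeeping is easy to state in code. The following is a generic textbook fixed-sequence procedure, offered as a sketch rather than any specific regulatory method:

```python
def fixed_sequence_test(p_values, alpha=0.05):
    """Fixed-sequence (sequential gatekeeping) procedure: test hypotheses in
    a pre-specified order, each at the full alpha level, and stop rejecting
    at the first failure. Returns the list of rejection decisions."""
    decisions = []
    for p in p_values:
        # A hypothesis can only be rejected if it meets alpha AND every
        # earlier hypothesis in the sequence was rejected.
        if p <= alpha and (not decisions or decisions[-1]):
            decisions.append(True)
        else:
            decisions.append(False)
    return decisions

# Once a hypothesis in the sequence fails, later ones cannot be rejected,
# no matter how small their p-values: this is the logical restriction at issue.
print(fixed_sequence_test([0.01, 0.20, 0.001]))
```

Here the third hypothesis (p = 0.001) is not rejected solely because the second one failed, which illustrates why the ordering chosen at the design stage matters so much.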
Chao, Pei‐Ju; Ting, Hui‐Min; Lo, Su‐Hua; Wang, Yu‐Wen; Tuan, Chiu‐Ching; Fang, Fu‐Min
2011-01-01
The purpose of this study was to evaluate and quantify the planning performance of SmartArc‐based volumetric‐modulated arc radiotherapy (VMAT) versus fixed‐beam intensity‐modulated radiotherapy (IMRT) for nasopharyngeal carcinoma (NPC) using a sequential mode treatment plan. The plan quality and performance of dual arc‐VMAT (DA‐VMAT) using the Pinnacle3 Smart‐Arc system (clinical version 9.0; Philips, Fitchburg, WI, USA) were evaluated and compared with those of seven‐field (7F)‐IMRT in 18 consecutive NPC patients. Analysis parameters included the conformity index (CI) and homogeneity index (HI) for the planning target volume (PTV), maximum and mean dose, normal tissue complication probability (NTCP) for the specified organs at risk (OARs), and comprehensive quality index (CQI) for an overall evaluation in the 11 OARs. Treatment delivery time, monitor units per fraction (MU/fr), and gamma (Γ3mm,3%) evaluations were also analyzed. DA‐VMAT achieved similar target coverage and slightly better homogeneity than conventional 7F‐IMRT with a similar CI and HI. NTCP values were only significantly lower in the left parotid gland (for xerostomia) for DA‐VMAT plans. The mean value of CQI at 0.98±0.02 indicated a 2% benefit in sparing OARs by DA‐VMAT. The MU/fr used and average delivery times appeared to show improved efficiencies in DA‐VMAT. Each technique demonstrated high accuracy in dose delivery in terms of a high‐quality assurance (QA) passing rate (>98%) of the (Γ3mm,3%) criterion. The major difference between DA‐VMAT and 7F‐IMRT using a sequential mode for treating NPC cases appears to be improved efficiency, resulting in a faster delivery time and the use of fewer MU/fr. PACS number: 87.53.Tf, 87.55.x, 87.55.D, 87.55.dk PMID:22089015
Rapid Decisions From Experience
Zeigenfuse, Matthew D.; Pleskac, Timothy J.; Liu, Taosheng
2014-01-01
In many everyday decisions, people quickly integrate noisy samples of information to form a preference among alternatives that offer uncertain rewards. Here, we investigated this decision process using the Flash Gambling Task (FGT), in which participants made a series of choices between a certain payoff and an uncertain alternative that produced a normal distribution of payoffs. For each choice, participants experienced the distribution of payoffs via rapid samples updated every 50 ms. We show that people can make these rapid decisions from experience and that the decision process is consistent with a sequential sampling process. Results also reveal a dissociation between these preferential decisions and equivalent perceptual decisions where participants had to determine which alternatives contained more dots on average. To account for this dissociation, we developed a sequential sampling rank-dependent utility model, which showed that participants in the FGT attended more to larger potential payoffs than participants in the perceptual task despite being given equivalent information. We discuss the implications of these findings in terms of computational models of preferential choice and a more complete understanding of experience-based decision making. PMID:24549141
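A sequential sampling account of such choices can be sketched as a noisy random walk between two decision thresholds. The drift, noise, and threshold values below are arbitrary illustrations, not parameters fitted to the FGT data:

```python
import numpy as np

rng = np.random.default_rng(2)

def sequential_choice(certain=0.0, mu=0.2, sigma=1.0, threshold=3.0, max_samples=1000):
    """Accumulate noisy evidence for the uncertain option relative to the
    certain payoff; choose when the running sum crosses +/- threshold.

    Each loop iteration stands for one rapid payoff sample (e.g., one
    50-ms update in the task described above)."""
    evidence = 0.0
    for t in range(1, max_samples + 1):
        evidence += rng.normal(mu - certain, sigma)
        if evidence >= threshold:
            return "uncertain", t
        if evidence <= -threshold:
            return "certain", t
    return "no decision", max_samples

choices = [sequential_choice()[0] for _ in range(1000)]
print("P(choose uncertain) ≈", choices.count("uncertain") / 1000)
```

With a positive mean advantage for the uncertain option, the walk drifts toward the upper threshold, so the uncertain option is chosen on most but not all trials, and decision time varies from trial to trial as in the empirical data.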
Co-expression of HoxA9 and bcr-abl genes in chronic myeloid leukemia.
Tedeschi, Fabián A; Cardozo, Maria A; Valentini, Rosanna; Zalazar, Fabián E
2010-05-01
We have analyzed the co-expression of the bcr-abl and HoxA9 genes in the follow-up of patients with chronic myeloid leukemia (CML). In the present work we measured the HoxA9 and bcr-abl gene expression in sequential samples. In all patients, bcr-abl and HoxA9 were expressed at detectable levels in every sample. When the results were expressed in relation to abl, two different situations were found: (a) patients clinically stable at second sampling, with low relative risk at diagnosis (low Sokal's score), did not show significant differences in both bcr-abl and HoxA9 levels in the sequential samples analyzed, and (b) patients with poor prognosis (showing intermediate or high Sokal's score at diagnosis) had increased expression of bcr-abl as well as HoxA9 genes (p < 0.05). Since HoxA9 gene expression remains at relatively constant levels throughout adult life, our results could reflect actual changes in the expression rate of this gene associated with bcr-abl during the progression of CML.
NASA Astrophysics Data System (ADS)
Pragourpun, Kraivinee; Sakee, Uthai; Fernandez, Carlos; Kruanetr, Senee
2015-05-01
We present for the first time the use of deferiprone as a non-toxic complexing agent for the determination of iron by sequential injection analysis in pharmaceuticals and food samples. The method was based on the reaction of Fe(III) and deferiprone in phosphate buffer at pH 7.5 to give a Fe(III)-deferiprone complex, which showed a maximum absorption at 460 nm. Under the optimum conditions, the linearity range for iron determination was found over the range of 0.05-3.0 μg mL⁻¹ with a correlation coefficient (r²) of 0.9993. The limit of detection and limit of quantitation were 0.032 μg mL⁻¹ and 0.055 μg mL⁻¹, respectively. The relative standard deviation (%RSD) of the method was less than 5.0% (n = 11), and the percentage recovery was found in the range of 96.0-104.0%. The proposed method was satisfactorily applied for the determination of Fe(III) in pharmaceuticals, water and food samples with a sampling rate of 60 h⁻¹.
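Figures of merit like r², LOD, and LOQ follow from ordinary least-squares calibration statistics. The sketch below uses invented calibration points (not the paper's data) together with the common 3.3s/slope and 10s/slope conventions:

```python
import numpy as np

# Hypothetical calibration data: absorbance at 460 nm vs Fe(III) concentration
conc = np.array([0.05, 0.5, 1.0, 1.5, 2.0, 3.0])            # μg/mL
absorb = np.array([0.012, 0.110, 0.215, 0.330, 0.432, 0.655])

slope, intercept = np.polyfit(conc, absorb, 1)               # linear calibration
residuals = absorb - (slope * conc + intercept)
s = residuals.std(ddof=2)                                    # residual std dev
r2 = np.corrcoef(conc, absorb)[0, 1] ** 2                    # coefficient of determination

lod = 3.3 * s / slope                                        # limit of detection
loq = 10.0 * s / slope                                       # limit of quantitation
print(f"r2={r2:.4f}  LOD={lod:.3f}  LOQ={loq:.3f} μg/mL")
```

Swapping in real calibration measurements would reproduce the kind of linearity and detection-limit figures quoted in the abstract.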
Brain mechanisms controlling decision making and motor planning.
Ramakrishnan, Arjun; Murthy, Aditya
2013-01-01
Accumulator models of decision making provide a unified framework to understand decision making and motor planning. In these models, the evolution of a decision is reflected in the accumulation of sensory information into a motor plan that reaches a threshold, leading to choice behavior. While these models provide an elegant framework to understand performance and reaction times, their ability to explain complex behaviors such as decision making and motor control of sequential movements in dynamic environments is unclear. To examine and probe the limits of online modification of decision making and motor planning, an oculomotor "redirect" task was used. Here, subjects were expected to change their eye movement plan when a new saccade target appeared. Based on task performance, saccade reaction time distributions, computational models of behavior, and intracortical microstimulation of monkey frontal eye fields, we show how accumulator models can be tested and extended to study dynamic aspects of decision making and motor control. Copyright © 2013 Elsevier B.V. All rights reserved.
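The redirect scenario can be illustrated with a minimal race between two stochastic accumulators, one for the initial saccade plan and one for the new target. All rates, noise levels, and thresholds are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def redirect_trial(delay, rate1=0.15, rate2=0.25, sigma=0.3,
                   threshold=10.0, max_t=500):
    """Race between the initial saccade plan (a1) and the new plan (a2),
    which only starts accumulating after the target step at `delay`.
    Returns which plan reached threshold first."""
    a1 = a2 = 0.0
    for t in range(max_t):
        a1 = max(0.0, a1 + rate1 + rng.normal(0, sigma))
        if t >= delay:  # new target appears, second accumulator starts
            a2 = max(0.0, a2 + rate2 + rng.normal(0, sigma))
        if a1 >= threshold:
            return "initial"
        if a2 >= threshold:
            return "redirected"
    return "none"

# Longer target-step delays give the initial plan a bigger head start,
# so redirecting the movement succeeds less often:
for d in (0, 20, 40):
    trials = [redirect_trial(d) for _ in range(500)]
    print(d, trials.count("redirected") / 500)
```

The decreasing redirect probability with delay mirrors the basic behavioral signature that such race-architecture accumulator models are used to capture.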
Leslie, Laurel K; Maciolek, Susan; Biebel, Kathleen; Debordes-Jackson, Gifty; Nicholson, Joanne
2014-11-01
This case study explored core components of knowledge exchange among researchers, policymakers, and practitioners within the context of the Rosie D. versus Romney class action lawsuit in Massachusetts and the development and implementation of its remedial plan. We identified three distinct, sequential knowledge exchange episodes with different purposes, stakeholders, and knowledge exchanged, as decision-making moved from Federal Medicaid policy to state Medicaid program standards and to community-level practice. The knowledge exchanged included research regarding Wraparound, a key component of the remedial plan, as well as contextual information critical for implementation (e.g., Federal Medicaid policy, managed care requirements, community organizations' characteristics).
Verification of hypergraph states
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki; Takeuchi, Yuki; Hayashi, Masahito
2017-12-01
Hypergraph states are generalizations of graph states where controlled-Z gates on edges are replaced with generalized controlled-Z gates on hyperedges. Hypergraph states have several advantages over graph states. For example, certain hypergraph states, such as the Union Jack states, are universal resource states for measurement-based quantum computing with only Pauli measurements, while graph state measurement-based quantum computing needs non-Clifford basis measurements. Furthermore, it is impossible to classically efficiently sample measurement results on hypergraph states unless the polynomial hierarchy collapses to the third level. Although several protocols have been proposed to verify graph states with only sequential single-qubit Pauli measurements, there was no verification method for hypergraph states. In this paper, we propose a method for verifying a certain class of hypergraph states with only sequential single-qubit Pauli measurements. Importantly, no i.i.d. property of samples is assumed in our protocol: any artificial entanglement among samples cannot fool the verifier. As applications of our protocol, we consider verified blind quantum computing with hypergraph states, and quantum computational supremacy demonstrations with hypergraph states.
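The definition in the abstract can be made concrete with a toy state-vector construction: start from |+>^n and, for each hyperedge, flip the sign of every computational-basis amplitude whose hyperedge qubits are all 1 (the generalized controlled-Z). This sketch only illustrates the state definition, not the verification protocol.

```python
import itertools

def hypergraph_state(n, hyperedges):
    """Amplitude vector of an n-qubit hypergraph state: |+>^n followed by
    a generalized controlled-Z on each hyperedge, i.e. a sign flip on
    every basis state in which all qubits of that hyperedge are 1."""
    amp = 1.0 / (2 ** (n / 2))
    state = []
    for bits in itertools.product([0, 1], repeat=n):
        sign = 1
        for edge in hyperedges:
            if all(bits[q] == 1 for q in edge):
                sign *= -1
        state.append(sign * amp)
    return state

# One hyperedge over all three qubits: a CCZ applied to |+++>.
psi = hypergraph_state(3, [(0, 1, 2)])
```

With a single two-qubit edge this reduces to an ordinary graph-state CZ, which is a quick way to check the construction.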
de Oliveira, Fabio Santos; Korn, Mauro
2006-01-15
A sensitive SIA method was developed for sulphate determination in automotive fuel ethanol. The method was based on the reaction of sulphate with barium-dimethylsulphonazo(III), which leads to a decrease in the magnitude of the analytical signal monitored at 665 nm. Alcohol fuel samples were combusted beforehand to avoid matrix effects in the sulphate determination. Binary sampling and stop-flow strategies were used to increase the sensitivity of the method. Optimization of the analytical parameters was performed by the response surface method using Box-Behnken and central composite designs. The proposed sequential flow procedure can determine up to 10.0 mg SO(4)(2-) l(-1) with R.S.D. < 2.5% and a limit of detection of 0.27 mg l(-1). The method was successfully applied to sulphate determination in automotive fuel alcohol, and the results agreed with the reference volumetric method. Under the optimized conditions the SIA system processed 27 samples per hour.
Alonso, E; Aparicio, I; Santos, J L; Villar, P; Santos, A
2009-01-01
The content of heavy metals is the major limitation to the application of sewage sludge to soil. However, assessment of pollution by total metal determination alone does not reveal the true environmental impact; sequential extraction techniques are needed to obtain suitable information about bioavailability and toxicity. In this paper, sequential extraction of metals from sludge before and after aerobic digestion was applied to sludge from five WWTPs in southern Spain to assess the influence of the digestion treatment on metal concentrations. The percentage of each metal present in residual, oxidizable, reducible and exchangeable form was calculated. For this purpose, sludge samples were collected from two different points of the plants, namely the tank mixing primary and secondary sludge (mixed sludge, MS) and the digested-dewatered sludge (final sludge, FS). The heavy metals Al, Cd, Co, Cr, Cu, Fe, Hg, Mn, Mo, Ni, Pb, Ti and Zn were extracted following the sequential extraction scheme proposed by the Standards, Measurements and Testing Programme of the European Commission and determined by inductively coupled plasma atomic emission spectrometry. The total concentrations of heavy metals in the measured sludge samples did not exceed the limits set out by European legislation and were mainly associated with the two less-available fractions (27-28% as oxidizable metal and 44-50% as residual metal). However, metals such as Co (64% in MS and 52% in FS samples), Mn (82% in MS and 79% in FS), Ni (32% in MS and 26% in FS) and Zn (79% in MS and 62% in FS) were present at substantial percentages in available forms. In addition, the results showed a clear increase, after sludge treatment, in the proportion of metals in the two less-available fractions (oxidizable and residual metal).
Multilevel sequential Monte Carlo: Mean square error bounds under verifiable conditions
Del Moral, Pierre; Jasra, Ajay; Law, Kody J. H.
2017-01-09
We consider the multilevel sequential Monte Carlo (MLSMC) method of Beskos et al. (Stoch. Proc. Appl. [to appear]). This technique is designed to approximate expectations w.r.t. probability laws associated to a discretization, for instance in the context of inverse problems, where one discretizes the solution of a partial differential equation. The MLSMC approach is especially useful when independent, coupled sampling is not possible. Beskos et al. show that for MLSMC the computational effort to achieve a given error can be less than that of independent sampling. In this article we significantly weaken the assumptions of Beskos et al., extending the proofs to non-compact state-spaces. The assumptions are based upon multiplicative drift conditions as in Kontoyiannis and Meyn (Electron. J. Probab. 10 [2005]: 61-123). The assumptions are verified for an example.
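The multilevel idea that MLSMC builds on can be illustrated with a plain multilevel Monte Carlo estimator, i.e. the setting where independent, coupled sampling *is* possible (MLSMC replaces these independent coupled draws with sequential Monte Carlo when it is not). The discretized function f_l below is an illustrative stand-in for a level-l PDE solve, not anything from the paper.

```python
import random

def f_level(x, l):
    """Level-l discretization of f(x) = x**2: x is rounded down to a grid
    of spacing 2**-l before squaring (a stand-in for a solver whose bias
    shrinks as the mesh is refined)."""
    h = 2.0 ** (-l)
    return (h * int(x / h)) ** 2

def mlmc_estimate(L, n_per_level, rng):
    """Multilevel telescoping estimator of E[f_L(X)], X ~ U(0,1):
    E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}], with each increment
    estimated from coupled samples (the same X at both levels)."""
    total = sum(f_level(rng.random(), 0) for _ in range(n_per_level)) / n_per_level
    for l in range(1, L + 1):
        inc = 0.0
        for _ in range(n_per_level):
            x = rng.random()  # coupling: one draw feeds both levels
            inc += f_level(x, l) - f_level(x, l - 1)
        total += inc / n_per_level
    return total

rng = random.Random(0)
est = mlmc_estimate(L=6, n_per_level=4000, rng=rng)  # targets E[X^2] = 1/3
```

Because the coupled increments f_l - f_{l-1} have small variance, most samples can be spent on cheap coarse levels, which is the source of the cost savings the paper analyzes.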
Nyman, U; Lundberg, I; Hedfors, E; Wahren, M; Pettersson, I
1992-01-01
Sequentially obtained serum samples from 30 patients with connective tissue disease positive for antibody to ribonucleoprotein (RNP) were examined by immunoblotting of nuclear extracts to determine the specificities of IgG and IgM antibodies to snRNP during the disease course. The antibody patterns were correlated with disease activity. The snRNP antibody patterns of individual patients were largely stable during the study, but changes in antibody levels were seen corresponding to changes in clinical activity. These results indicate that increased reactivity of serum IgM antibodies against the B/B' proteins seems to precede a clinically evident exacerbation of disease, whereas IgG antibody reactivity to the 70K protein peaks at the time of a disease flare. PMID:1485812
Alhusban, Ala A; Gaudry, Adam J; Breadmore, Michael C; Gueven, Nuri; Guijt, Rosanne M
2014-01-03
Cell culture has replaced many in vivo studies because of ethical and regulatory measures as well as the possibility of increased throughput. Analytical assays to determine (bio)chemical changes are often based on end-point measurements rather than on a series of sequential determinations. The purpose of this work is to develop an analytical system for monitoring cell culture based on sequential injection-capillary electrophoresis (SI-CE) with capacitively coupled contactless conductivity detection (C(4)D). The system was applied to monitoring lactate production, an important metabolic indicator, during mammalian cell culture. Using a background electrolyte consisting of 25 mM tris(hydroxymethyl)aminomethane and 35 mM cyclohexyl-2-aminoethanesulfonic acid with 0.02% poly(ethyleneimine) (PEI) at pH 8.65 and a multilayer polymer-coated capillary, lactate could be resolved from other compounds present in the media, with a relative standard deviation of 0.07% for intraday electrophoretic mobility and an analysis time of less than 10 min. Using the human embryonic kidney cell line HEK293, lactate concentrations in the cell culture medium were measured every 20 min over 3 days, requiring only 8.73 μL of sample per run. Combining simplicity, portability, automation, high sample throughput, low limits of detection, low sample consumption and the ability to up- and outscale, this new methodology represents a promising technique for near real-time monitoring of chemical changes in diverse cell culture applications. Copyright © 2013 Elsevier B.V. All rights reserved.
Khongpet, Wanpen; Pencharee, Somkid; Puangpila, Chanida; Kradtap Hartwell, Supaporn; Lapanantnoppakhun, Somchai; Jakmunee, Jaroon
2018-01-15
A microfluidic hydrodynamic sequential injection (μHSI) spectrophotometric system was designed and fabricated. The system was built by laser-engraving a manifold pattern on an acrylic block and sealing it with another flat acrylic plate to form a microfluidic channel platform. The platform was fitted with small solenoid valves to obtain a portable setup for programmable control of the liquid flow into the channel according to the HSI principle. The system was demonstrated for the determination of phosphate using a molybdenum blue method. Ascorbic acid, standard or sample, and acidic molybdate solutions were sequentially aspirated to fill the channel, forming a stacked zone before flowing to the detector. Under the optimum conditions, a linear calibration graph in the range of 0.1-6 mg P L(-1) was obtained. The detection limit was 0.1 mg L(-1). The system is compact (5.0 mm thick, 80 mm wide × 140 mm long), durable, portable and cost-effective, and consumes only small amounts of chemicals (83 μL each of molybdate and ascorbic acid, 133 μL of sample solution and 1.7 mL of water carrier per run). It was applied to the determination of phosphate content in extracted soil samples. Recoveries of 91.2-107.3% were obtained, and the results agreed well with those of the batch spectrophotometric method. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Hofer, Scott M.; Flaherty, Brian P.; Hoffman, Lesa
2006-01-01
The effect of time-related mean differences on estimates of association in cross-sectional studies has not been widely recognized in developmental and aging research. Cross-sectional studies of samples varying in age have found moderate to high levels of shared age-related variance among diverse age-related measures. These findings may be…
Ng, Ding-Quan; Liu, Shu-Wei; Lin, Yi-Pin
2018-09-15
In this study, a sampling campaign with a total of nine sampling events investigating lead in drinking water was conducted at 7 sampling locations in an old building with lead pipes in service in part of the building on the National Taiwan University campus. This study aims to assess the effectiveness of four different sampling methods, namely first draw sampling, sequential sampling, random daytime sampling and flush sampling, in lead contamination detection. In 3 out of the 7 sampling locations without lead pipe, lead could not be detected (<1.1 μg/L) in most samples regardless of the sampling methods. On the other hand, in the 4 sampling locations where lead pipes still existed, total lead concentrations >10 μg/L were consistently observed in 3 locations using any of the four sampling methods while the remaining location was identified to be contaminated using sequential sampling. High lead levels were consistently measured by the four sampling methods in the 3 locations in which particulate lead was either predominant or comparable to soluble lead. Compared to first draw and random daytime samplings, although flush sampling had a high tendency to reduce total lead in samples in lead-contaminated sites, the extent of lead reduction was location-dependent and not dependent on flush durations between 5 and 10 min. Overall, first draw sampling and random daytime sampling were reliable and effective in determining lead contamination in this study. Flush sampling could reveal the contamination if the extent is severe but tends to underestimate lead exposure risk. Copyright © 2018 Elsevier B.V. All rights reserved.
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource composed of a finite number of noisy stimulus samples. The model predicts the invariance of Σ(d′)², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
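The sample-size model's invariance prediction is easy to state in code: if a fixed pool of noisy samples is divided evenly among the display items and an item's sensitivity grows with the square root of the samples it receives, then the sum of squared sensitivities is the same at every set size. The pool size and scaling constant below are arbitrary illustrations, not fitted values.

```python
import math

def dprime_per_item(total_samples, set_size, k=1.0):
    """Sample-size model: each of `set_size` items receives
    total_samples / set_size noisy samples, and sensitivity d' grows
    with the square root of that allocation (k is an arbitrary
    illustrative scaling constant)."""
    return k * math.sqrt(total_samples / set_size)

def sum_squared_dprime(total_samples, set_size, k=1.0):
    """The model's signature prediction: the sum of squared
    sensitivities across items is invariant to set size."""
    d = dprime_per_item(total_samples, set_size, k)
    return set_size * d ** 2

invariant = [sum_squared_dprime(100, m) for m in (1, 2, 4, 8)]
```

The attention-weighted variant in the abstract would replace the even split with a larger share for one attended item; the resource constraint stays the same, but per-item sensitivities no longer match the even-allocation prediction.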
Evaluating Multiple Imputation Models for the Southern Annual Forest Inventory
Gregory A. Reams; Joseph M. McCollum
1999-01-01
The USDA Forest Service's Southern Research Station is implementing an annualized forest survey in thirteen states. The sample design is a systematic sample of five interpenetrating grids (panels), where each panel is measured sequentially. For example, panel one information is collected in year one, and panel five in year five. The area representative and time...
Environmental persistence of the nucleopolyhedrosis virus of the gypsy moth, Lymantria dispar L
J.D. Podgwaite; Kathleen Stone Shields; R.T. Zerillo; R.B. Bruen
1979-01-01
A bioassay technique was used to estimate the concentrations of infectious gypsy moth nucleopolyhedrosis virus (NPV) that occur naturally in leaf, bark, litter, and soil samples taken from woodland plots in Connecticut and Pennsylvania. These concentrations were then compared to those in samples taken sequentially after treatment of these plots with NPV. Results...
ERIC Educational Resources Information Center
Chan, Wai
2007-01-01
In social science research, an indirect effect occurs when the influence of an antecedent variable on the effect variable is mediated by an intervening variable. To compare indirect effects within a sample or across different samples, structural equation modeling (SEM) can be used if the computer program supports model fitting with nonlinear…
How Big Is Big Enough? Sample Size Requirements for CAST Item Parameter Estimation
ERIC Educational Resources Information Center
Chuah, Siang Chee; Drasgow, Fritz; Luecht, Richard
2006-01-01
Adaptive tests offer the advantages of reduced test length and increased accuracy in ability estimation. However, adaptive tests require large pools of precalibrated items. This study looks at the development of an item pool for 1 type of adaptive administration: the computer-adaptive sequential test. An important issue is the sample size required…
40 CFR 761.302 - Proportion of the total surface area to sample.
Code of Federal Regulations, 2011 CFR
2011-07-01
... surface into approximately 1 meter square portions and mark the portions so that they are clearly... surfaces contaminated by a single source of PCBs with a uniform concentration, assign each 1 meter square surface a unique sequential number. (i) For three or fewer 1 meter square areas, sample all of the areas...
40 CFR 761.302 - Proportion of the total surface area to sample.
Code of Federal Regulations, 2014 CFR
2014-07-01
... surface into approximately 1 meter square portions and mark the portions so that they are clearly... surfaces contaminated by a single source of PCBs with a uniform concentration, assign each 1 meter square surface a unique sequential number. (i) For three or fewer 1 meter square areas, sample all of the areas...
40 CFR 761.302 - Proportion of the total surface area to sample.
Code of Federal Regulations, 2010 CFR
2010-07-01
... surface into approximately 1 meter square portions and mark the portions so that they are clearly... surfaces contaminated by a single source of PCBs with a uniform concentration, assign each 1 meter square surface a unique sequential number. (i) For three or fewer 1 meter square areas, sample all of the areas...
40 CFR 761.302 - Proportion of the total surface area to sample.
Code of Federal Regulations, 2012 CFR
2012-07-01
... surface into approximately 1 meter square portions and mark the portions so that they are clearly... surfaces contaminated by a single source of PCBs with a uniform concentration, assign each 1 meter square surface a unique sequential number. (i) For three or fewer 1 meter square areas, sample all of the areas...
40 CFR 761.302 - Proportion of the total surface area to sample.
Code of Federal Regulations, 2013 CFR
2013-07-01
... surface into approximately 1 meter square portions and mark the portions so that they are clearly... surfaces contaminated by a single source of PCBs with a uniform concentration, assign each 1 meter square surface a unique sequential number. (i) For three or fewer 1 meter square areas, sample all of the areas...
Teachers' Adoptation Level of Student Centered Education Approach
ERIC Educational Resources Information Center
Arseven, Zeynep; Sahin, Seyma; Kiliç, Abdurrahman
2016-01-01
The aim of this study is to identify how far the student centered education approach is applied in primary, middle and high schools in Düzce. An explanatory design, which is one type of mixed-methods research, and "sequential mixed methods sampling" were used in the study. 685 teachers constitute the research sample of the quantitative…
The Cerebellar Deficit Hypothesis and Dyslexic Tendencies in a Non-Clinical Sample
ERIC Educational Resources Information Center
Brookes, Rebecca L.; Stirling, John
2005-01-01
In order to assess the relationship between cerebellar deficits and dyslexic tendencies in a non-clinical sample, 27 primary school children aged 8-9 completed a cerebellar soft signs battery and were additionally assessed for reading age, sequential memory, picture arrangement and knowledge of common sequences. An average measure of the soft…
2006-09-30
allocated to intangible assets. With Procter & Gamble's $53.5 billion acquisition of Gillette, $31.5 billion or 59% of the total purchase price was... outsourcing, alliances, joint ventures) • Compound Option (platform options) • Sequential Options (stage-gate development, R&D, phased... Comparisons • RO/KVA could enhance outsourcing comparisons between the Government's Most Efficient Organization (MEO) and private-sector
A MURI Center for Intelligent Biomimetic Image Processing and Classification
2007-11-01
Grossberg, S., Editor of a new journal, Current Opinion in Cognitive Neurodynamics, 2005. 34. Grossberg, S., Editor of the new International... Department of Cognitive and Neural Systems, 677 Beacon Street, Boston MA 02215, steve@bu.edu, Phone: 617-353-7857, Fax: 617-353-7755, http://cns.bu.edu... memory, learning of sequential plans, and sequence performance during cognitive information processing; (8) coordinated ballistic and smooth pursuit
Developing Emotion-Based Case Formulations: A Research-Informed Method.
Pascual-Leone, Antonio; Kramer, Ueli
2017-01-01
New research-informed methods for case conceptualization that cut across traditional therapy approaches are increasingly popular. This paper presents a trans-theoretical approach to case formulation based on research observations of emotion. The sequential model of emotional processing (Pascual-Leone & Greenberg, 2007) is a process research model that provides concrete markers for therapists to observe the emerging emotional development of their clients. We illustrate how this model can be used by clinicians to track change; it provides a 'clinical map' by which therapists may orient themselves in-session and plan treatment interventions. Emotional processing thus offers a trans-theoretical framework for therapists who wish to conduct emotion-based case formulations. First, we present criteria for why this research model translates well into practice. Second, two contrasting case studies are presented to demonstrate the method. The model bridges research with practice by using client emotion as an axis of integration. Key Practitioner Message: Process research on emotion can offer a template for therapists to make case formulations while using a range of treatment approaches. The sequential model of emotional processing provides a 'process map' of concrete markers for therapists to (1) observe the emerging emotional development of their clients, and (2) develop a treatment plan. Copyright © 2016 John Wiley & Sons, Ltd.
Matong, Joseph M; Nyaba, Luthando; Nomngongo, Philiswa N
2016-07-01
The main objectives of this study were to determine the concentrations of fourteen trace elements and to investigate their distribution and contamination levels in selected agricultural soils. An ultrasonic-assisted sequential extraction procedure derived from the three-step BCR method was used for fractionation of the trace elements. The total concentrations of trace elements were obtained by total digestion of the soil samples with aqua regia. The results for the extractable fractions revealed that most of the target trace elements can be transferred to human beings through the food chain, posing a serious risk to human health. Enrichment factor (EF), geo-accumulation index (Igeo), contamination factor (CF), risk assessment code (RAC) and individual contamination factors (ICF) were used to assess the environmental impacts of the trace metals in the soil samples. The EF revealed that Cd was enriched by 3.1-7.2 (except in Soil 1). The Igeo results showed that the soils in the study area were moderately contaminated with Fe, and heavily to extremely polluted with Cd. The soil samples from the unplanted field were found to have the highest contamination factor for Cd and the lowest for Pb. Soil 3 showed a high risk for Tl and Cd, with RAC values greater than or equal to 50%. In addition, Fe, Ni, Cu, V, As, Mo (except Soil 2), Sb and Pb posed low environmental risk. The modified BCR sequential extraction method provided further information about the mobility and environmental implications of the studied trace elements in the study area. Copyright © 2016 Elsevier Ltd. All rights reserved.
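The pollution indices named in the abstract have standard textbook forms, sketched below. The background and reference concentrations are illustrative placeholders, not the study's data; the normalizing element for EF is commonly Fe or Al, as assumed here.

```python
import math

def contamination_factor(c_sample, c_background):
    """CF: measured concentration over geochemical background."""
    return c_sample / c_background

def geoaccumulation_index(c_sample, c_background):
    """Igeo = log2(C / (1.5 * B)); the factor 1.5 buffers natural
    variation in the background."""
    return math.log2(c_sample / (1.5 * c_background))

def enrichment_factor(c_sample, ref_sample, c_background, ref_background):
    """EF normalizes the metal to a conservative reference element
    (commonly Fe or Al) in both the sample and the background."""
    return (c_sample / ref_sample) / (c_background / ref_background)

# Illustrative numbers (mg/kg), not from the study.
cf = contamination_factor(3.0, 0.5)
igeo = geoaccumulation_index(3.0, 0.5)
ef = enrichment_factor(3.0, 20000.0, 0.5, 40000.0)
```

RAC, by contrast, is read directly from the fractionation result: the percentage of a metal in the exchangeable (most available) fraction, with ≥50% conventionally rated very high risk.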
Davletbaeva, Polina; Chocholouš, Petr; Bulatov, Andrey; Šatínský, Dalibor; Solich, Petr
2017-09-05
Sequential Injection Chromatography (SIC) evolved from fast, automated, non-separation Sequential Injection Analysis (SIA) into a chromatographic separation method for multi-analyte analysis. However, the chromatographic step significantly reduces the speed of measurement (sample throughput). In this paper, a sub-1 min separation using a medium-polar cyano monolithic column (5 mm × 4.6 mm) provided a fast and green separation with a sample throughput comparable to non-separation flow methods. The separation of three synthetic water-soluble dyes (sunset yellow FCF, carmoisine and green S) was performed in gradient elution mode (0.02% ammonium acetate, pH 6.7 - water) at a flow rate of 3.0 mL min(-1), corresponding to a sample throughput of 30 h(-1). Spectrophotometric detection wavelengths were set to 480, 516 and 630 nm with a 10 Hz data collection rate. The performance of the separation is described and discussed (peak capacities 3.48-7.67, peak symmetries 1.72-1.84 and resolutions 1.42-1.88). The method was characterized by the following validation parameters: LODs of 0.15-0.35 mg L(-1), LOQs of 0.50-1.25 mg L(-1), calibration ranges of 0.50-150.00 mg L(-1) (r > 0.998) and repeatability at 10.0 mg L(-1) of RSD ≤ 0.98% (n = 6). The method was used for determination of the dyes in a "forest berries"-colored pharmaceutical cough-cold formulation. The sample matrix (pharmaceuticals and excipients) did not interfere with the visible-range determination because of its lack of retention in the separation column and its colorless nature. The results prove the concept of a fast and green chromatography approach using a very short medium-polar monolithic column in SIC. Copyright © 2017 Elsevier B.V. All rights reserved.
Meyer, Marjolaine D; Terry, Leon A
2008-08-27
Methods devised for oil extraction from avocado (Persea americana Mill.) mesocarp (e.g., Soxhlet) are usually lengthy and require operation at high temperature. Moreover, methods for extracting sugars from avocado tissue (e.g., 80% ethanol, v/v) do not allow for lipids to be easily measured from the same sample. This study describes a new simple method that enabled sequential extraction and subsequent quantification of both fatty acids and sugars from the same avocado mesocarp tissue sample. Freeze-dried mesocarp samples of avocado cv. Hass fruit of different ripening stages were extracted by homogenization with hexane and the oil extracts quantified for fatty acid composition by GC. The resulting filter residues were readily usable for sugar extraction with methanol (62.5%, v/v). For comparison, oil was also extracted using the standard Soxhlet technique and the resulting thimble residue extracted for sugars as before. An additional experiment was carried out whereby filter residues were also extracted using ethanol. Average oil yield using the Soxhlet technique was significantly (P < 0.05) higher than that obtained by homogenization with hexane, although the difference remained very slight, and fatty acid profiles of the oil extracts following both methods were very similar. Oil recovery improved with increasing ripeness of the fruit with minor differences observed in the fatty acid composition during postharvest ripening. After lipid removal, methanolic extraction was superior in recovering sucrose and perseitol as compared to 80% ethanol (v/v), whereas mannoheptulose recovery was not affected by solvent used. The method presented has the benefits of shorter extraction time, lower extraction temperature, and reduced amount of solvent and can be used for sequential extraction of fatty acids and sugars from the same sample.
Particle filters, a quasi-Monte-Carlo-solution for segmentation of coronaries.
Florin, Charles; Paragios, Nikos; Williams, Jim
2005-01-01
In this paper we propose a Particle Filter-based approach for the segmentation of coronary arteries. To this end, successive planes of the vessel are modeled as unknown states of a sequential process. Such states consist of the orientation, position, shape model and appearance (in statistical terms) of the vessel that are recovered in an incremental fashion, using a sequential Bayesian filter (Particle Filter). In order to account for bifurcations and branchings, we consider a Monte Carlo sampling rule that propagates in parallel multiple hypotheses. Promising results on the segmentation of coronary arteries demonstrate the potential of the proposed approach.
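The predict-weight-resample loop at the heart of this approach can be shown with a minimal bootstrap particle filter on a 1-D random-walk analogue. The paper's state is a vessel cross-section's orientation, position, shape and appearance; everything below, including the noise levels and observations, is a simplified illustrative stand-in.

```python
import math
import random

def particle_filter(observations, n_particles=500, proc_std=0.5, obs_std=1.0, seed=0):
    """Minimal bootstrap particle filter for a 1-D random-walk state
    observed in Gaussian noise: predict (diffuse the particles), weight
    (observation likelihood), estimate (posterior mean), resample
    (multinomial, proportional to weight)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Predict: propagate each particle through the motion model.
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # Weight: Gaussian likelihood of the observation given each particle.
        weights = [math.exp(-0.5 * ((y - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Estimate: weighted posterior mean.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Resample: draw particles with probability proportional to weight.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Track a state drifting from 0 toward 5 through noisy observations.
obs = [0.2, 0.9, 2.1, 2.8, 4.1, 4.9]
est = particle_filter(obs)
```

Because the particle cloud is a weighted set of hypotheses rather than a single estimate, the same mechanism can keep several modes alive at once, which is what lets the authors follow bifurcating branches in parallel.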
Liu, Yang; Luo, Zhi-Qiang; Lv, Bei-Ran; Zhao, Hai-Yu; Dong, Ling
2016-04-01
The multiple components in Chinese herbal medicines (CHMs) undergo complex absorption and metabolism before entering the blood system. Previous studies have often focused on the components found in blood; however, the dynamic and sequential absorption and metabolism process following multi-component oral administration had not been studied. In this study, the in situ closed-loop method combined with LC-MS techniques was employed to study the sequential process of Chuanxiong Rhizoma decoction (RCD). A total of 14 major components were identified in RCD. Among them, ferulic acid, senkyunolide J, senkyunolide I, senkyunolide F, senkyunolide G, and butylidenephthalide were detected in all of the samples, indicating that these six components can be absorbed into blood in prototype form. Butylphthalide, E-ligustilide, Z-ligustilide, cnidilide, senkyunolide A and senkyunolide Q were not detected in any of the samples, suggesting that these six components may not be absorbed, or may be metabolized before entering the hepatic portal vein. Senkyunolide H could be metabolized by the liver, while senkyunolide M could be metabolized by both the liver and the intestinal flora. This study clearly demonstrated the changes in the absorption and metabolism process following multi-component oral administration of RCD, converting the static view of multi-component absorption into a comprehensive, dynamic and continuous picture of absorption and metabolism. Copyright© by the Chinese Pharmaceutical Association.
Simultaneous capture and sequential detection of two malarial biomarkers on magnetic microparticles.
Markwalter, Christine F; Ricks, Keersten M; Bitting, Anna L; Mudenda, Lwiindi; Wright, David W
2016-12-01
We have developed a rapid magnetic microparticle-based detection strategy for the malarial biomarkers Plasmodium lactate dehydrogenase (pLDH) and Plasmodium falciparum histidine-rich protein II (PfHRPII). In this assay, magnetic particles functionalized with antibodies specific for pLDH and PfHRPII, as well as detection antibodies with distinct enzymes for each biomarker, are added to parasitized lysed blood samples. Sandwich complexes for pLDH and PfHRPII form on the surface of the magnetic beads, which are washed and sequentially re-suspended in detection enzyme substrate for each antigen. The developed simultaneous capture and sequential detection (SCSD) assay detects both biomarkers in samples as low as 2.0 parasites/µL, an order of magnitude below commercially available ELISA kits, has a total incubation time of 35 min, and was found to be reproducible between users over time. This assay provides a simple and efficient alternative to traditional 96-well plate ELISAs, which take 5-8 h to complete and are limited to one analyte. Further, the modularity of the magnetic bead-based SCSD ELISA format could serve as a platform for application to other diseases for which multi-biomarker detection is advantageous. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Cortical Components of Reaction-Time during Perceptual Decisions in Humans.
Dmochowski, Jacek P; Norcia, Anthony M
2015-01-01
The mechanisms of perceptual decision-making are frequently studied through measurements of reaction time (RT). Classical sequential-sampling models (SSMs) of decision-making posit RT as the sum of non-overlapping sensory, evidence accumulation, and motor delays. In contrast, recent empirical evidence hints at a continuous-flow paradigm in which multiple motor plans evolve concurrently with the accumulation of sensory evidence. Here we employ a trial-to-trial reliability-based component analysis of electroencephalographic data acquired during a random-dot motion task to directly image continuous flow in the human brain. We identify three topographically distinct neural sources whose dynamics exhibit contemporaneous ramping to time-of-response, with the rate and duration of ramping discriminating fast and slow responses. Only one of these sources, a parietal component, exhibits dependence on strength-of-evidence. The remaining two components possess topographies consistent with origins in the motor system, and their covariation with RT overlaps in time with the evidence accumulation process. After fitting the behavioral data to a popular SSM, we find that the model decision variable is more closely matched to the combined activity of the three components than to their individual activity. Our results emphasize the role of motor variability in shaping RT distributions on perceptual decision tasks, suggesting that physiologically plausible computational accounts of perceptual decision-making must model the concurrent nature of evidence accumulation and motor planning.
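The bounded-accumulation dynamics that SSMs posit can be illustrated with a minimal drift-diffusion simulation. This is a generic sketch for intuition only, not the specific model fitted in the study; all parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, bound, ndt, n_trials, dt=0.001, sigma=1.0):
    """Simulate a basic drift-diffusion model: noisy evidence accumulates
    until it hits +bound (choice 1) or -bound (choice 0); reaction time is
    the accumulation time plus a fixed non-decision time (ndt)."""
    choices = np.empty(n_trials, dtype=int)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices[i] = int(x > 0)
        rts[i] = t + ndt
    return choices, rts

# Stronger evidence (larger drift) yields faster, more accurate choices.
choices, rts = simulate_ddm(drift=1.0, bound=1.0, ndt=0.3, n_trials=1000)
print(choices.mean(), rts.mean())
```

In this classical serial reading, ndt lumps sensory and motor delays into a single additive stage; the continuous-flow account described in the abstract instead lets motor preparation overlap in time with the accumulation process.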
Planning multi-arm screening studies within the context of a drug development program
Wason, James M S; Jaki, Thomas; Stallard, Nigel
2013-01-01
Screening trials are small trials used to decide whether an intervention is sufficiently promising to warrant a large confirmatory trial. Previous literature examined the situation where treatments are tested sequentially until one is considered sufficiently promising to take forward to a confirmatory trial. An important consideration for sponsors of clinical trials is how screening trials should be planned to maximize the efficiency of the drug development process. It has been found previously that small screening trials are generally the most efficient. In this paper we consider the design of screening trials in which multiple new treatments are tested simultaneously. We derive analytic formulae for the expected number of patients until a successful treatment is found, and propose methodology to search for the optimal number of treatments, and optimal sample size per treatment. We compare designs in which only the best treatment proceeds to a confirmatory trial and designs in which multiple treatments may proceed to a multi-arm confirmatory trial. We find that inclusion of a large number of treatments in the screening trial is optimal when only one treatment can proceed, and a smaller number of treatments is optimal when more than one can proceed. The designs we investigate are compared on a real-life set of screening designs. Copyright © 2013 John Wiley & Sons, Ltd. PMID:23529936
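As a hedged illustration of the kind of trade-off the paper analyzes (its analytic formulae are not reproduced here, and every parameter name below is an assumption of this sketch), sequential screening can be treated as a geometric waiting time: each screened treatment advances with some probability, so the expected number of patients until one advances is the per-trial patient cost divided by that probability.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def expected_patients(n_per_arm, theta, effect, z_alpha=1.2816):
    """Toy model, not the paper's formula.  Treatments are screened one at a
    time with a one-sided two-arm z-test (n_per_arm patients per arm,
    critical value z_alpha; z_alpha = 1.2816 corresponds to alpha ~ 0.10).
    A fraction theta of candidates is truly effective with standardized
    effect size `effect`.  Each screened treatment advances with probability
    p = theta * power + (1 - theta) * alpha, so the number of screening
    trials until one advances is geometric with mean 1/p."""
    alpha = 1.0 - norm_cdf(z_alpha)
    power = 1.0 - norm_cdf(z_alpha - effect * sqrt(n_per_arm / 2.0))
    p_advance = theta * power + (1.0 - theta) * alpha
    return 2 * n_per_arm / p_advance  # expected total patients screened

# Even at modest power, small screening trials can cost fewer total patients.
print(expected_patients(20, 0.1, 0.5), expected_patients(120, 0.1, 0.5))
```

Under this toy model the 20-per-arm design screens fewer expected patients than the 120-per-arm design, echoing the paper's finding that small screening trials are generally the most efficient.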
Sequential pattern formation governed by signaling gradients
NASA Astrophysics Data System (ADS)
Jörg, David J.; Oates, Andrew C.; Jülicher, Frank
2016-10-01
Rhythmic and sequential segmentation of the embryonic body plan is a vital developmental patterning process in all vertebrate species. However, a theoretical framework capturing the emergence of dynamic patterns of gene expression from the interplay of cell oscillations with tissue elongation and shortening and with signaling gradients is still missing. Here we show that a set of coupled genetic oscillators in an elongating tissue that is regulated by diffusing and advected signaling molecules can account for segmentation as a self-organized patterning process. This system can form a finite number of segments, and the dynamics of segmentation and the total number of segments formed depend strongly on kinetic parameters describing tissue elongation and signaling molecules. The model accounts for existing experimental perturbations to signaling gradients and makes testable predictions about novel perturbations. The variety of patterns formed in our model can account for the variability of segmentation between different animal species.
Misestimating betting behavior: the role of negative asymmetries in emotional self-prediction.
Andrade, Eduardo B; Claro, Danny P; Islam, Gazi
2014-12-01
This paper addresses the role of negative asymmetries in emotional self-prediction by looking at the extent to which individuals misestimate their own betting behavior in sequential gambles. In a series of three experimental studies, we demonstrate that losses lead to higher than planned bets whereas bets are on average carried over after gains. Such asymmetric deviations from the plan emerge (1) when monetary and non-monetary incentives are used, and (2) when participants face fair and unfair gambles. The asymmetry is based on people's inability to predict how much the negative emotions generated by a bad experience (e.g. the loss) will influence them to put more effort (e.g. bet more) than planned in an attempt to re-establish a homeostatic state in the prospect of a good experience (e.g. winning).
Tracking the Short Term Planning (STP) Development Process
NASA Technical Reports Server (NTRS)
Price, Melanie; Moore, Alexander
2010-01-01
Part of the National Aeronautics and Space Administration's mission is to pioneer the future in space exploration and scientific discovery; aeronautics research is enhanced by discovering new scientific tools to improve life on Earth. Consequently, to successfully explore the unknown, there has to be a planning process that organizes events in the right priority. The planning support team must therefore continually improve its processes so that ISS Mission Operations can operate smoothly and effectively. The planning support team consists of people in the Long Range Planning area who develop timelines, from the International Partners' preliminary STP inputs all the way through to publishing of the final STP. Planning is a crucial part of the NASA community when it comes to scheduling the astronauts' daily activities in great detail. The STP process is in need of improvement because of the various tasks that must be broken down in order to develop the final STP correctly. A new project was then started to store various data in a more efficient database: "The SharePoint site is a Web site that provides a central storage and collaboration space for documents, information, and ideas."
7 CFR 43.104 - Master table of single and double sampling plans.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Master table of single and double sampling plans. 43... STANDARD CONTAINER REGULATIONS STANDARDS FOR SAMPLING PLANS Sampling Plans § 43.104 Master table of single and double sampling plans. (a) In the master table, a sampling plan is selected by first determining...
7 CFR 43.104 - Master table of single and double sampling plans.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Master table of single and double sampling plans. 43... STANDARD CONTAINER REGULATIONS STANDARDS FOR SAMPLING PLANS Sampling Plans § 43.104 Master table of single and double sampling plans. (a) In the master table, a sampling plan is selected by first determining...
7 CFR 43.104 - Master table of single and double sampling plans.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Master table of single and double sampling plans. 43... STANDARD CONTAINER REGULATIONS STANDARDS FOR SAMPLING PLANS Sampling Plans § 43.104 Master table of single and double sampling plans. (a) In the master table, a sampling plan is selected by first determining...
7 CFR 43.104 - Master table of single and double sampling plans.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Master table of single and double sampling plans. 43... STANDARD CONTAINER REGULATIONS STANDARDS FOR SAMPLING PLANS Sampling Plans § 43.104 Master table of single and double sampling plans. (a) In the master table, a sampling plan is selected by first determining...
NASA Astrophysics Data System (ADS)
Dittes, Beatrice; Kaiser, Maria; Špačková, Olga; Rieger, Wolfgang; Disse, Markus; Straub, Daniel
2018-05-01
Planning authorities are faced with a range of questions when planning flood protection measures: is the existing protection adequate for current and future demands or should it be extended? How will flood patterns change in the future? How should the associated uncertainty influence the planning decision, e.g., by delaying planning or including a safety margin? Is it sufficient to follow a protection criterion (e.g., to protect from the 100-year flood) or should the planning be conducted in a risk-based way? How important is it for flood protection planning to accurately estimate flood frequency (changes), costs and damage? These are questions that we address for a medium-sized pre-alpine catchment in southern Germany, using a sequential Bayesian decision-making framework that quantitatively addresses the full spectrum of uncertainty. We evaluate different flood protection systems considered by local agencies in a test study catchment. Despite large uncertainties in damage, cost and climate, the recommendation in favor of the most conservative approach is robust. This demonstrates the feasibility of making robust decisions under large uncertainty. Furthermore, by comparison to a previous study, it highlights the benefits of risk-based planning over planning flood protection to a prescribed return period.
Developing a Systematic Corrosion Control Evaluation Approach in Flint
Presentation covers what the projects were that were recommended by the Flint Safe Drinking Water Task Force for corrosion control assessment for Flint, focusing on the sequential sampling project, the pipe rigs, and pipe scale analyses.
A common mechanism underlies changes of mind about decisions and confidence.
van den Berg, Ronald; Anandalingam, Kavitha; Zylberberg, Ariel; Kiani, Roozbeh; Shadlen, Michael N; Wolpert, Daniel M
2016-02-01
Decisions are accompanied by a degree of confidence that a selected option is correct. A sequential sampling framework explains the speed and accuracy of decisions and extends naturally to the confidence that the decision rendered is likely to be correct. However, discrepancies between confidence and accuracy suggest that confidence might be supported by mechanisms dissociated from the decision process. Here we show that this discrepancy can arise naturally because of simple processing delays. When participants were asked to report choice and confidence simultaneously, their confidence, reaction time and a perceptual decision about motion were explained by bounded evidence accumulation. However, we also observed revisions of the initial choice and/or confidence. These changes of mind were explained by a continuation of the mechanism that led to the initial choice. Our findings extend the sequential sampling framework to vacillation about confidence and invite caution in interpreting dissociations between confidence and accuracy.
Bramley, Kyle; Pisani, Margaret A.; Murphy, Terrence E.; Araujo, Katy; Homer, Robert; Puchalski, Jonathan
2016-01-01
Background EBUS-guided transbronchial needle aspiration (TBNA) is important in the evaluation of thoracic lymphadenopathy. Reliably providing excellent diagnostic yield for malignancy, its diagnosis of sarcoidosis is inconsistent. Furthermore, when larger “core” biopsy samples of malignant tissue are required, TBNA may not suffice. The primary objective of this study was to determine if the sequential use of TBNA and a novel technique called cautery-assisted transbronchial forceps biopsies (ca-TBFB) was safe. Secondary outcomes included sensitivity and successful acquisition of tissue. Methods Fifty unselected patients undergoing convex probe EBUS were prospectively enrolled. Under EBUS guidance, all lymph nodes ≥ 1 cm were sequentially biopsied using TBNA and ca-TBFB. Safety and sensitivity were assessed at the nodal level for 111 nodes. Results of each technique were also reported on a per-patient basis. Results There were no significant adverse events. In nodes determined to be malignant, TBNA provided higher sensitivity (100%) than ca-TBFB (78%). However, among nodes with granulomatous inflammation, ca-TBFB exhibited higher sensitivity (90%) than TBNA (33%). For analysis based on patients rather than nodes, 6 of the 31 patients with malignancy would have been missed or understaged if the diagnosis was based on samples obtained by ca-TBFB. On the other hand, 3 of 8 patients with sarcoidosis would have been missed if analysis was based only on TBNA samples. In some cases only ca-TBFB acquired sufficient tissue for the core samples needed in clinical trials of malignancy. Conclusions The sequential use of TBNA and ca-TBFB appears to be safe. The larger samples obtained from ca-TBFB increased its sensitivity to detect granulomatous disease and provided specimens for clinical trials of malignancy when needle biopsies were insufficient. For thoracic surgeons and advanced bronchoscopists, we advocate ca-TBFB as an alternative to TBNA in select clinical scenarios. 
PMID:26912301
Bramley, Kyle; Pisani, Margaret A; Murphy, Terrence E; Araujo, Katy L; Homer, Robert J; Puchalski, Jonathan T
2016-05-01
Endobronchial ultrasound (EBUS)-guided transbronchial needle aspiration (TBNA) is important in the evaluation of thoracic lymphadenopathy. Reliably providing excellent diagnostic yield for malignancy, its diagnosis of sarcoidosis is inconsistent. Furthermore, TBNA may not suffice when larger "core biopsy" samples of malignant tissue are required. The primary objective of this study was to determine if the sequential use of TBNA and a novel technique called cautery-assisted transbronchial forceps biopsy (ca-TBFB) was safe. Secondary outcomes included sensitivity and successful acquisition of tissue. The study prospectively enrolled 50 unselected patients undergoing convex-probe EBUS. All lymph nodes exceeding 1 cm were sequentially biopsied under EBUS guidance using TBNA and ca-TBFB. Safety and sensitivity were assessed at the nodal level for 111 nodes. Results of each technique were also reported for each patient. There were no significant adverse events. In nodes determined to be malignant, TBNA provided higher sensitivity (100%) than ca-TBFB (78%). However, among nodes with granulomatous inflammation, ca-TBFB exhibited higher sensitivity (90%) than TBNA (33%). On the one hand, for analysis based on patients rather than nodes, 6 of the 31 patients with malignancy would have been missed or understaged if the diagnosis were based on samples obtained by ca-TBFB. On the other hand, 3 of 8 patients with sarcoidosis would have been missed if analysis were based only on TBNA samples. In some patients, only ca-TBFB acquired sufficient tissue for the core samples needed in clinical trials of malignancy. The sequential use of TBNA and ca-TBFB appears to be safe. The larger samples obtained from ca-TBFB increased its sensitivity to detect granulomatous disease and provided adequate specimens for clinical trials of malignancy when specimens from needle biopsies were insufficient. 
For thoracic surgeons and advanced bronchoscopists, we advocate ca-TBFB as an alternative to TBNA in select clinical scenarios. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Elban, Mehmet
2017-01-01
The purpose of this research is to evaluate the teaching and educational activities in the civilization history lesson. The model of the research is the exploratory sequential design from mixed research patterns. The appropriate sampling method was used in the research. The qualitative data of the research were collected from 26 students through a…
XANES Spectroscopic Analysis of Phosphorus Speciation in Alum-Amended Poultry Litter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seiter,J.; Staats-Borda, K.; Ginder-Vogel, M.
2008-01-01
Aluminum sulfate (alum; Al2(SO4)3·14H2O) is used as a chemical treatment of poultry litter to reduce the solubility and release of phosphate, thereby minimizing the impacts on adjacent aquatic ecosystems when poultry litter is land-applied as a crop fertilizer. The objective of this study was to determine, through the use of X-ray absorption near edge structure (XANES) spectroscopy and sequential extraction, how alum amendments alter P distribution and solid-state speciation within the poultry litter system. Our results indicate that traditional sequential fractionation procedures may not account for variability in P speciation in heterogeneous animal manures. Analysis shows that NaOH-extracted P in alum-amended litters is predominantly organic (~80%), whereas in the control samples, >60% of NaOH-extracted P was inorganic P. Linear least squares fitting (LLSF) analysis of spectra collected from sequentially extracted litters showed that P is present in inorganic (P sorbed on Al oxides, calcium phosphates) and organic forms (phytic acid, polyphosphates, and monoesters) in alum- and non-alum-amended poultry litter. When determining land application rates of poultry litter, all of these compounds must be considered, especially organic P. Results of the sequential extractions in conjunction with LLSF suggest that no P species is completely removed by a single extractant; rather, there is a continuum of removal as extractant strength increases. Overall, alum-amended litters exhibited higher proportions of Al-bound P species and phytic acid, whereas untreated samples contained Ca-P minerals and organic P compounds. This study provides in situ information about P speciation in the poultry litter solid and about P availability in alum- and non-alum-treated poultry litter that will dictate P losses to ground and surface water systems.
NASA Technical Reports Server (NTRS)
Chapman, G. M. (Principal Investigator); Carnes, J. G.
1981-01-01
Several techniques which use clusters generated by a new clustering algorithm, CLASSY, are proposed as alternatives to random sampling to obtain greater precision in crop proportion estimation: (1) Proportional Allocation/Relative Count Estimator (PA/RCE), which allocates dots to clusters in proportion to cluster size and uses a relative-count cluster-level estimate; (2) Proportional Allocation/Bayes Estimator (PA/BE), which allocates dots to clusters proportionally and uses a Bayesian cluster-level estimate; and (3) Bayes Sequential Allocation/Bayes Estimator (BSA/BE), which allocates dots to clusters sequentially and uses a Bayesian cluster-level estimate. Clustering is an effective method for making proportion estimates. It is estimated that, to obtain the same precision with random sampling as obtained by the proportional sampling of 50 dots with an unbiased estimator, samples of 85 or 166 would need to be taken if dot sets with AI labels (integrated procedure) or ground-truth labels, respectively, were input. Dot reallocation provides dot sets that are unbiased. It is recommended that these proportion estimation techniques be maintained, particularly the PA/BE, because it provides the greatest precision.
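A minimal sketch in the spirit of the PA/RCE scheme described above; the function names and the largest-remainder rounding rule are assumptions for illustration, not details taken from the report.

```python
import numpy as np

def proportional_allocation(cluster_sizes, n_dots):
    """Allocate labeling dots to clusters in proportion to cluster size,
    using largest-remainder rounding so the total equals n_dots."""
    sizes = np.asarray(cluster_sizes, dtype=float)
    quota = n_dots * sizes / sizes.sum()
    alloc = np.floor(quota).astype(int)
    remainder = quota - alloc
    for i in np.argsort(-remainder)[: n_dots - alloc.sum()]:
        alloc[i] += 1
    return alloc

def crop_proportion_estimate(cluster_sizes, crop_counts, alloc):
    """Relative-count estimator: weight each cluster's labeled crop share
    by that cluster's share of the scene."""
    sizes = np.asarray(cluster_sizes, dtype=float)
    p_hat = np.asarray(crop_counts) / np.asarray(alloc)
    return float((sizes / sizes.sum() * p_hat).sum())

# Three clusters covering 10%, 20% and 70% of the scene, 10 dots to label.
alloc = proportional_allocation([100, 200, 700], 10)
estimate = crop_proportion_estimate([100, 200, 700], [1, 0, 7], alloc)
print(alloc, estimate)
```

The Bayesian variants (PA/BE, BSA/BE) would replace the raw per-cluster proportion p_hat with a posterior estimate under a prior on cluster purity, which is what gives them their precision advantage at small dot counts.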
NASA Astrophysics Data System (ADS)
Menegário, Amauri A.; Giné, Maria Fernanda
2000-04-01
A synchronised flow system with hydride generation coupled to ICP-MS is proposed for the sequential determination of As and Se in natural waters and plant digests. The alternated mixing of the sample solution with thiourea or HCl for the determination of As or Se under optimized conditions was achieved using a flow commutator before the reaction with NaBH4. The on-line addition of thiourea promoted the quantitative reduction of As(V) to As(III), thus enhancing sensitivity and precision. The selenium pre-reduction from Se(VI) to Se(IV) was produced by heating the sample with HCl, and the hydride generation was performed in 4 mol l-1 HCl, thus avoiding interference from thiourea. The system allowed the analysis of 20 samples h-1 with LOD values of 0.02 μg l-1 As and 0.03 μg l-1 Se. Results were in agreement with the certified values at the 95% confidence level for reference waters from the Canadian National Water Research Institute and plant samples from the National Institute of Standards and Technology (NIST).
Kim, C.S.; Bloom, N.S.; Rytuba, J.J.; Brown, Gordon E.
2003-01-01
Determining the chemical speciation of mercury in contaminated mining and industrial environments is essential for predicting its solubility, transport behavior, and potential bioavailability, as well as for designing effective remediation strategies. In this study, two techniques for determining Hg speciation, X-ray absorption fine structure (XAFS) spectroscopy and sequential chemical extractions (SCE), are independently applied to a set of samples with Hg concentrations ranging from 132 to 7539 mg/kg to determine if the two techniques provide comparable Hg speciation results. Generally, the proportions of insoluble HgS (cinnabar, metacinnabar) and HgSe identified by XAFS correlate well with the proportion of Hg removed in the aqua regia extraction demonstrated to remove HgS and HgSe. Statistically significant (>10%) differences are observed, however, in samples containing more soluble Hg-containing phases (HgCl2, HgO, Hg3S2O4). Such differences may be related to matrix, particle size, or crystallinity effects, which could affect the apparent solubility of the Hg phases present. In more highly concentrated samples, microscopy techniques can help characterize the Hg-bearing species in complex multiphase natural samples.
1992-08-01
programs have several common functional components dealing with: attention, crew, stress, mental attitude, and risk issues. The role which the five...five interrelated concept areas furnish "rules and tools" to help prevent common errors. For instance: 1. Attention management issues include...pilots must manage his/her attention in a timely manner and sequentially employ the other cockpit management tools (for controlling stress, etc.). The text
ERIC Educational Resources Information Center
Keith, Timothy Z.; Kranzler, John H.; Flanagan, Dawn P.
2001-01-01
Reports the results of the first joint confirmatory factor analysis (CFA) of the Cognitive Assessment System (CAS) and the Woodcock-Johnson Tests of Cognitive Abilities-3rd Edition (WJ III). Results of these analyses do not support the construct validity of the CAS as a measure of the PASS (planning, attention, simultaneous, and sequential)…
NASA Astrophysics Data System (ADS)
Sahlstedt, Elina; Arppe, Laura
2017-04-01
Stable isotope compositions of bones, analysed either from the mineral phase (hydroxyapatite) or from the organic phase (mainly collagen), carry important climatological and ecological information and are therefore widely used in paleontological and archaeological research. For stable isotope analysis, both phases, hydroxyapatite and collagen, have more or less well-established separation and analytical techniques. Recent developments in IRMS and wet-chemical extraction methods have made it possible to analyse very small bone fractions (500 μg or less of starting material) for phosphate (PO43-) oxygen isotope composition. However, the uniqueness and (pre-)historical value of each archaeological and paleontological find leaves precious little material available for stable isotope analyses, encouraging further development of microanalytical methods. Here we present the first results of developing extraction methods that combine collagen C- and N-isotope analyses with PO43- O-isotope analyses from a single bone sample fraction. We tested sequential extraction starting with dilute-acid demineralization and collection of both the collagen and PO43- fractions, followed by a further purification step with H2O2 (PO43- fraction). First results show that bone samples as small as 2 mg may be analysed for their δ15N, δ13C and δ18OPO4 values. The method may be incorporated into detailed investigations of sequentially developing skeletal material such as teeth, potentially allowing investigation of interannual variability in climatological/environmental signals or of the early life history of an individual.
Vuckovic, Anita; Kwantes, Peter J; Neal, Andrew
2013-09-01
Research has identified a wide range of factors that influence performance in relative judgment tasks. However, the findings from this research have been inconsistent. Studies have varied with respect to the identification of causal variables and the perceptual and decision-making mechanisms underlying performance. Drawing on the ecological rationality approach, we present a theory of the judgment and decision-making processes involved in a relative judgment task that explains how people judge a stimulus and adapt their decision process to accommodate their own uncertainty associated with those judgments. Undergraduate participants performed a simulated air traffic control conflict detection task. Across two experiments, we systematically manipulated variables known to affect performance. In the first experiment, we manipulated the relative distances of aircraft to a common destination while holding aircraft speeds constant. In a follow-up experiment, we introduced a direct manipulation of relative speed. We then fit a sequential sampling model to the data, and used the best fitting parameters to infer the decision-making processes responsible for performance. Findings were consistent with the theory that people adapt to their own uncertainty by adjusting their criterion and the amount of time they take to collect evidence in order to make a more accurate decision. From a practical perspective, the paper demonstrates that one can use a sequential sampling model to understand performance in a dynamic environment, allowing one to make sense of and interpret complex patterns of empirical findings that would otherwise be difficult to interpret using standard statistical analyses. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Lemons, B; Khaing, H; Ward, A; Thakur, P
2018-06-01
A new sequential separation method for the determination of polonium and actinides (Pu, Am and U) in drinking water samples has been developed that can be used for emergency response or routine water analyses. For the first time, the application of a TEVA chromatography column to the sequential separation of polonium and plutonium has been studied. This method utilizes a rapid Fe3+ co-precipitation step to remove matrix interferences, followed by plutonium oxidation state adjustment to Pu4+ and an incubation period of ~1 h at 50-60 °C to allow Po2+ to oxidize to Po4+. The polonium and plutonium were then separated on a TEVA column, while separation of americium from uranium was performed on a TRU column. After separation, polonium was micro-precipitated with copper sulfide (CuS), while actinides were micro co-precipitated using neodymium fluoride (NdF3) for counting by alpha spectrometry. The method is simple, robust and can be performed quickly with excellent removal of interferences, high chemical recovery and very good alpha peak resolution. The efficiency and reliability of the procedures were tested using spiked samples. The effects of several transition metals (Cu2+, Pb2+, Fe3+, Fe2+, and Ni2+) on the performance of this method were also assessed to evaluate potential matrix effects. Studies indicate that the presence of up to 25 mg of these cations in the samples had no adverse effect on the recovery or the resolution of polonium alpha peaks. Copyright © 2018 Elsevier Ltd. All rights reserved.
A reverse order interview does not aid deception detection regarding intentions
Fenn, Elise; McGuire, Mollie; Langben, Sara; Blandón-Gitlin, Iris
2015-01-01
Promising recent research suggests that more cognitively demanding interviews improve deception detection accuracy. Would these cognitively demanding techniques work in the same way when discriminating between true and false future intentions? In Experiment 1 participants planned to complete a task, but instead were intercepted and interviewed about their intentions. Participants lied or told the truth, and were subjected to high (reverse order) or low (sequential order) cognitive load interviews. Third-party observers watched these interviews and indicated whether they thought the person was lying or telling the truth. Subjecting participants to a reverse compared to sequential interview increased the misidentification rate and the appearance of cognitive load in truth tellers. People lying about false intentions were not better identified. In Experiment 2, a second set of third-party observers rated behavioral cues. Consistent with Experiment 1, truth tellers, but not liars, exhibited more behaviors associated with lying and fewer behaviors associated with truth telling in the reverse than sequential interview. Together these results suggest that certain cognitively demanding interviews may be less useful when interviewing to detect false intentions. Explaining a true intention while under higher cognitive demand places truth tellers at risk of being misclassified. There may be such a thing as too much cognitive load induced by certain techniques. PMID:26379610
NASA Technical Reports Server (NTRS)
Galen, T. J. (Inventor)
1986-01-01
A fluid sampler for collecting a plurality of discrete samples over separate time intervals is described. The sampler comprises a sample assembly having an inlet and a plurality of discrete sample tubes, each of which has inlet and outlet sides. A multiport dual-acting valve is provided in the sampler in order to sequentially pass air from the sample inlet into the selected sample tubes. The sample tubes extend longitudinally of the housing and are located about its outer periphery so that, upon removal of an enclosure cover, they are readily accessible for operation of the sampler in an analysis mode.
Pistón, Mariela; Mollo, Alicia; Knochen, Moisés
2011-01-01
A fast and efficient automated method using a sequential injection analysis (SIA) system, based on the Griess reaction, was developed for the determination of nitrate and nitrite in infant formulas and milk powder. The system mixes a measured amount of sample (previously reconstituted in liquid form and deproteinized) with the chromogenic reagent to produce a colored product whose absorbance is recorded. For nitrate determination, an on-line prereduction step was added by passing the sample through a Cd minicolumn. The system was controlled from a PC by means of a user-friendly program. Figures of merit include linearity (r2 > 0.999 for both analytes), limits of detection (0.32 mg kg−1 NO3-N and 0.05 mg kg−1 NO2-N), and precision (sr%) of 0.8–3.0. Results were statistically in good agreement with those obtained with the reference ISO-IDF method. The sampling frequency was 30 hour−1 (nitrate) and 80 hour−1 (nitrite) when the determinations were performed separately. PMID:21960750
An apparatus for sequentially combining microvolumes of reagents by infrasonic mixing.
Camien, M N; Warner, R C
1984-05-01
A method employing high-speed infrasonic mixing for obtaining timed samples for following the progress of a moderately rapid chemical reaction is described. Drops of 10 to 50 microliter each of two reagents are mixed to initiate the reaction, followed, after a measured time interval, by mixing with a drop of a third reagent to quench the reaction. The method was developed for measuring the rate of denaturation of covalently closed, circular DNA in NaOH at several temperatures. For this purpose the timed samples were analyzed by analytical ultracentrifugation. The apparatus was tested by determination of the rate of hydrolysis of 2,4-dinitrophenyl acetate in an alkaline buffer. The important characteristics of the method are (i) it requires very small volumes of sample and reagents; (ii) the components of the reaction mixture are pre-equilibrated and mixed with no transfer outside the prescribed constant temperature environment; (iii) the mixing is very rapid; and (iv) satisfactorily precise measurements of relatively short time intervals (approximately 2 sec minimum) between sequential mixings of the components are readily obtainable.
Furukawa, Makoto; Takagai, Yoshitaka
2016-10-04
Online solid-phase extraction (SPE) coupled with inductively coupled plasma mass spectrometry (ICPMS) is a useful tool for automatic sequential analysis. However, it cannot simultaneously quantify the analytical targets and their recovery percentages (R%) in one-shot samples. We propose a system that acquires both data in a single sample injection. The flow line of the online solid-phase extraction is divided into a main flow and a split flow; the split (bypass) line circumvents the SPE column. Under program-controlled switching of the automatic valve, the ICPMS sequentially measures the targets in a sample before and after column preconcentration, determining both the target concentrations and the R% on the SPE column. This paper describes the system development and two demonstrations of its analytical significance: determination of ultratrace amounts of radioactive strontium (90Sr) using a commercial Sr-trap resin, and assessment of multielement adsorbability on the SPE column. This system is applicable to other flow analyses and detectors in online solid-phase extraction.
NASA Astrophysics Data System (ADS)
Gonderman, S.; Tripathi, J. K.; Sizyuk, T.; Hassanein, A.
2017-08-01
Tungsten (W) has been selected as the divertor material in ITER based on its promising thermal and mechanical properties. Despite these advantages, continued investigation has revealed W to undergo extreme surface morphology evolution in response to relevant fusion operating conditions. These complications spur the need for further exploration of W and other innovative plasma facing components (PFCs) for future fusion devices. Recent literature has shown that alloying of W with other refractory metals, such as tantalum (Ta), results in the enhancement of key PFC properties including, but not limited to, ductility, hydrogen isotope retention, and helium ion (He+) radiation tolerance. In the present study, pure W and W-Ta alloys are exposed to simultaneous and sequential low energy, He+ and deuterium (D+) ion beam irradiations at high (1223 K) and low (523 K) temperatures. The goal of this study is to cultivate a complete understanding of the synergistic effects induced by dual and sequential ion irradiation on W and W-Ta alloy surface morphology evolution. For the dual ion beam experiments, W and W-Ta samples were subjected to four different He+: D+ ion ratios (100% He+, 60% D+ + 40% He+, 90% D+ + 10% He+ and 100% D+) having a total constant He+ fluence of 6 × 1024 ion m-2. The W and W-Ta samples both exhibit the expected damaged surfaces under the 100% He+ irradiation, but as the ratio of D+/He+ ions increases there is a clear suppression of the surface morphology at high temperatures. This observation is supported by the sequential experiments, which show a similar suppression of surface morphology when W and W-Ta samples are first exposed to low energy He+ irradiation and then exposed to subsequent low energy D+ irradiation at high temperatures. 
Interestingly, this morphology suppression is not observed at low temperatures, implying there is a D-W interaction mechanism which is dependent on temperature that is driving the suppression of the microstructure evolution in both the pure W and W-Ta alloys. Minor irradiation tolerance enhancement in the performance of the W-Ta samples is also observed.
Zhou, Shiyi; Da, Shu; Guo, Heng; Zhang, Xichao
2018-01-01
After the implementation of the universal two-child policy in 2016, more and more working women have found themselves caught in the dilemma of whether to raise a baby or be promoted, which exacerbates work–family conflicts among Chinese women. Few studies have examined the mediating effect of negative affect. The present study combined the conservation of resources model and affective events theory to examine the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health. A valid sample of 351 full-time Chinese female employees was recruited in this study, and participants voluntarily answered online questionnaires. Pearson correlation analysis, structural equation modeling, and multiple mediation analysis were used to examine the relationships between work–family conflict, negative affect, perceived stress, and mental health in full-time female employees. We found that women’s perceptions of both work-to-family conflict and family-to-work conflict were significantly and negatively related to mental health. Additionally, the results showed that negative affect and perceived stress were negatively correlated with mental health. The 95% confidence intervals indicated that the sequential mediating effect of negative affect and perceived stress in the relationship between work–family conflict and mental health was significant, which supported the hypothesized sequential mediation model. The findings suggest that work–family conflicts affected the level of self-reported mental health, and this relationship functioned through the two sequential mediators of negative affect and perceived stress. PMID:29719522
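The sequential (two-mediator) indirect effect described above is the product of three regression paths, X → M1 → M2 → Y, each estimated while adjusting for the earlier variables, with a percentile-bootstrap confidence interval. A minimal sketch follows; it uses simulated data with assumed path coefficients, not the study's data or its SEM software:

```python
import random

def ols(X, y):
    """Least-squares coefficients for y = X beta (rows of X include a leading 1)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):  # Gaussian elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv], b[col], b[piv] = A[piv], A[col], b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

def indirect(x, m1, m2, y):
    """a1*d21*b2 for X -> M1 -> M2 -> Y, each path adjusting for earlier variables."""
    a1 = ols([[1, xi] for xi in x], m1)[1]
    d21 = ols([[1, xi, mi] for xi, mi in zip(x, m1)], m2)[2]
    b2 = ols([[1, xi, mi, ni] for xi, mi, ni in zip(x, m1, m2)], y)[3]
    return a1 * d21 * b2

rng = random.Random(0)
n = 351                                          # matches the study's sample size
x = [rng.gauss(0, 1) for _ in range(n)]          # work-family conflict (simulated)
m1 = [0.5 * xi + rng.gauss(0, 1) for xi in x]    # negative affect
m2 = [0.6 * mi + rng.gauss(0, 1) for mi in m1]   # perceived stress
y = [0.7 * ni + rng.gauss(0, 1) for ni in m2]    # outcome
est = indirect(x, m1, m2, y)

boots = []                                       # percentile bootstrap for the 95% CI
for _ in range(500):
    s = [rng.randrange(n) for _ in range(n)]
    boots.append(indirect([x[i] for i in s], [m1[i] for i in s],
                          [m2[i] for i in s], [y[i] for i in s]))
boots.sort()
lo, hi = boots[12], boots[487]
print(f"indirect effect {est:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The mediation is judged significant when the bootstrap interval excludes zero, which is the criterion the abstract's "95% confidence intervals" refer to.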
Dave, Hreem; Phoenix, Vidya; Becker, Edmund R; Lambert, Scott R
2010-08-01
To compare the incidence of adverse events and visual outcomes and to compare the economic costs of sequential vs simultaneous bilateral cataract surgery for infants with congenital cataracts. Retrospective review of simultaneous vs sequential bilateral cataract surgery for infants with congenital cataracts who underwent cataract surgery when 6 months or younger at our institution. Records were available for 10 children who underwent sequential surgery at a mean age of 49 days for the first eye and 17 children who underwent simultaneous surgery at a mean age of 68 days (P = .25). We found a similar incidence of adverse events between the 2 treatment groups. Intraoperative or postoperative complications occurred in 14 eyes. The most common postoperative complication was glaucoma. No eyes developed endophthalmitis. The mean (SD) absolute interocular difference in logMAR visual acuities between the 2 treatment groups was 0.47 (0.76) for the sequential group and 0.44 (0.40) for the simultaneous group (P = .92). Payments for the hospital, drugs, supplies, and professional services were on average 21.9% lower per patient in the simultaneous group. Simultaneous bilateral cataract surgery for infants with congenital cataracts is associated with a 21.9% reduction in medical payments and no discernible difference in the incidence of adverse events or visual outcomes. However, our small sample size limits our ability to make meaningful comparisons of the relative risks and visual benefits of the 2 procedures.
Alternative sample sizes for verification dose experiments and dose audits
NASA Astrophysics Data System (ADS)
Taylor, W. A.; Hansen, J. M.
1999-01-01
ISO 11137 (1995), "Sterilization of Health Care Products—Requirements for Validation and Routine Control—Radiation Sterilization", provides sampling plans for performing initial verification dose experiments and quarterly dose audits. Alternative sampling plans are presented which provide equivalent protection and can significantly reduce the cost of testing. These alternative sampling plans have been included in a draft ISO Technical Report (type 2). This paper examines the rationale behind the proposed alternative sampling plans. The protection provided by the current verification and audit sampling plans is first examined. Then, methods for identifying equivalent plans are highlighted. Finally, methods for comparing the costs associated with the different plans are provided. This paper includes additional guidance, not included in the technical report, for selecting between the original and alternative sampling plans.
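The "equivalent protection" of attribute sampling plans is judged by comparing their acceptance probabilities. A short sketch of that comparison, using the binomial model; the (n, c) plan parameters below are hypothetical illustrations, not the actual ISO 11137 tables:

```python
from math import comb

def accept_prob(n, c, p):
    """P(accept): at most c positives among n samples, each positive w.p. p."""
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

# Hypothetical plans: despite very different sample sizes, all three give
# similar protection (acceptance probability) near p = 0.02.
for p in (0.01, 0.02, 0.05, 0.10):
    row = "  ".join(f"n={n},c={c} -> {accept_prob(n, c, p):.3f}"
                    for n, c in [(100, 2), (55, 1), (20, 0)])
    print(f"p={p:.2f}: {row}")
```

Plans whose acceptance-probability curves nearly coincide over the region of interest offer equivalent protection, so the cheapest (smallest n) such plan can be chosen.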
Spahr, N.E.; Boulger, R.W.
1997-01-01
Quality-control samples provide part of the information needed to estimate the bias and variability that result from sample collection, processing, and analysis. Quality-control samples of surface water collected for the Upper Colorado River National Water-Quality Assessment study unit for water years 1995-96 are presented and analyzed in this report. The types of quality-control samples collected include pre-processing split replicates, concurrent replicates, sequential replicates, post-processing split replicates, and field blanks. Analysis of the pre-processing split replicates, concurrent replicates, sequential replicates, and post-processing split replicates is based on differences between analytical results of the environmental samples and analytical results of the quality-control samples. Results of these comparisons indicate that variability introduced by sample collection, processing, and handling is low and will not affect interpretation of the environmental data. The differences for most water-quality constituents are on the order of plus or minus 1 or 2 lowest rounding units. A lowest rounding unit is equivalent to the magnitude of the least significant figure reported for analytical results. The use of lowest rounding units avoids some of the difficulty in comparing differences between pairs of samples when concentrations span orders of magnitude, and it provides a measure of the practical significance of the effect of variability. Analysis of field-blank quality-control samples indicates that, with the exception of chloride and silica, no systematic contamination of samples is apparent. Chloride contamination probably was the result of incomplete rinsing of the dilute cleaning solution from the outlet ports of the decaport sample splitter. Silica contamination seems to have been introduced by the blank water. Sampling and processing procedures for water year 1997 have been modified as a result of these analyses.
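The lowest-rounding-unit comparison can be sketched in a few lines: the LRU is read off the reported value, and the replicate-pair difference is expressed as a multiple of it. The handling of integer-valued results here is an assumption for illustration:

```python
def lowest_rounding_unit(reported: str) -> float:
    """LRU = magnitude of the least significant figure of a reported result.
    (Integer results are assumed reported to the ones place.)"""
    if "." in reported:
        return 10.0 ** -len(reported.split(".")[1])
    return 1.0

def diff_in_lrus(env: str, qc: str) -> int:
    """Signed environmental-minus-QC difference, expressed in LRUs."""
    return round((float(env) - float(qc)) / lowest_rounding_unit(env))

# A replicate pair reported to 0.1 mg/L differs by 2 LRUs:
print(diff_in_lrus("12.4", "12.2"))
```

Because the LRU scales with the reported precision, a 2-LRU difference means the same thing whether the constituent is reported in tenths or in hundreds of milligrams per liter.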
Miyazaki, Masayoshi; Nishiyama, Kinji; Ueda, Yoshihiro; Ohira, Shingo; Tsujii, Katsutomo; Isono, Masaru; Masaoka, Akira; Teshima, Teruki
2016-07-01
The aim of this study was to compare three strategies for intensity-modulated radiotherapy (IMRT) for 20 head-and-neck cancer patients. For simultaneous integrated boost (SIB), doses were 66 and 54 Gy in 30 fractions for PTVboost and PTVelective, respectively. Two-phase IMRT delivered 50 Gy in 25 fractions to PTVelective in the First Plan, and 20 Gy in 10 fractions to PTVboost in the Second Plan. Sequential SIB (SEQ-SIB) delivered 55 Gy and 50 Gy in 25 fractions, respectively, to PTVboost and PTVelective using SIB in the First Plan and 11 Gy in 5 fractions to PTVboost in the Second Plan. Conformity indexes (CIs) (mean ± SD) for PTVboost and PTVelective were 1.09 ± 0.05 and 1.34 ± 0.12 for SIB, 1.39 ± 0.14 and 1.80 ± 0.28 for two-phase IMRT, and 1.14 ± 0.07 and 1.60 ± 0.18 for SEQ-SIB, respectively. CI was significantly highest for two-phase IMRT. Maximum doses (Dmax) to the spinal cord were 42.1 ± 1.5 Gy for SIB, 43.9 ± 1.0 Gy for two-phase IMRT and 40.3 ± 1.8 Gy for SEQ-SIB. Brainstem Dmax were 50.1 ± 2.2 Gy for SIB, 50.5 ± 4.6 Gy for two-phase IMRT and 47.4 ± 3.6 Gy for SEQ-SIB. Spinal cord Dmax for the three techniques was significantly different, and brainstem Dmax was significantly lower for SEQ-SIB. The compromised conformity of two-phase IMRT can result in higher doses to organs at risk (OARs). Lower OAR doses in SEQ-SIB made SEQ-SIB an alternative to SIB, which applies unconventional doses per fraction. © The Author 2016. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
Practical characteristics of adaptive design in phase 2 and 3 clinical trials.
Sato, A; Shimura, M; Gosho, M
2018-04-01
Adaptive design methods are expected to be ethical, reflect real medical practice, increase the likelihood of research and development success, and reduce the allocation of patients into ineffective treatment groups by the early termination of clinical trials. However, comprehensive details regarding the types of clinical trials in which adaptive designs are used remain unclear. We examined the practical characteristics of adaptive design used in clinical trials. We conducted a literature search of adaptive design clinical trials published from 2012 to 2015 using PubMed, EMBASE, and the Cochrane Central Register of Controlled Trials, with common search terms related to adaptive design. We systematically assessed the types and characteristics of adaptive designs and disease areas employed in the adaptive design trials. Our survey identified 245 adaptive design clinical trials. The number of trials per publication year increased from 2012 to 2013 and changed little thereafter. The most frequently used adaptive design was group sequential design (n = 222, 90.6%), especially for neoplasm or cardiovascular disease trials. Among the other types of adaptive design, adaptive dose/treatment group selection (n = 21, 8.6%) and adaptive sample-size adjustment (n = 19, 7.8%) were frequently used. Adaptive randomization (n = 8, 3.3%) and adaptive seamless design (n = 6, 2.4%) were less frequent. Adaptive dose/treatment group selection and adaptive sample-size adjustment were frequently used (up to 23%) in "certain infectious and parasitic diseases," "diseases of nervous system," and "mental and behavioural disorders" in comparison with "neoplasms" (<6.6%). For "mental and behavioural disorders," adaptive randomization was used in two of eight trials in total (25%).
Group sequential design and adaptive sample-size adjustment were used frequently in phase 3 trials or in trials where the study phase was not specified, whereas the other types of adaptive designs were used more in phase 2 trials. Approximately 82% (202 of 245 trials) resulted in early termination at the interim analysis. Among these 202 trials, 132 (54% of 245 trials) had fewer randomized patients than initially planned. This result supports the motivation for using adaptive designs: shorter study durations and fewer enrolled subjects. We found that adaptive designs have been applied to clinical trials in various therapeutic areas and interventions. The applications were frequently reported in neoplasm or cardiovascular clinical trials. Adaptive dose/treatment group selection and sample-size adjustment are increasingly common, and these adaptations generally follow the Food and Drug Administration's (FDA's) recommendations. © 2017 John Wiley & Sons Ltd.
Lertbutsayanukul, Chawalit; Prayongrat, Anussara; Kannarunimit, Danita; Chakkabat, Chakkapong; Netsawang, Buntipa; Kitpanit, Sarin
2018-05-01
This study was performed to compare the acute and late toxicities between sequential (SEQ) and simultaneous integrated boost (SIB) intensity-modulated radiotherapy (IMRT) in nasopharyngeal carcinoma (NPC). Stage I-IVB NPC patients were randomized to receive SEQ-IMRT or SIB-IMRT. SEQ-IMRT consisted of two plans: 2 Gy × 25 fractions to low-risk planning target volume (PTV) followed by a sequential boost (2 Gy × 10 fractions) to high-risk PTV, while SIB-IMRT treated low- and high-risk PTVs with doses of 56 and 70 Gy in 33 fractions. Toxicities and survival outcomes were analyzed. Between October 2010 and September 2015, of the 209 patients who completed treatment, 102 in the SEQ and 107 in the SIB arm were analyzed. The majority had undifferentiated squamous cell carcinoma (82%). Mucositis and dysphagia were the most common grade 3-5 acute toxicities. There were no statistically significant differences in the cumulative incidence of grade 3-4 acute toxicities between the two arms (59.8% in SEQ vs. 58.9% in SIB; P = 0.892). Common grade 3-4 late toxicities for SEQ and SIB included hearing loss (2.9 vs. 8.4%), temporal lobe injury (2.9 vs. 0.9%), cranial nerve injury (0 vs. 2.8%), and xerostomia (2 vs. 0.9%). With the median follow-up of 41 months, 3‑year progression-free and overall survival rates were 72.7 vs. 73.4% (P = 0.488) and 86.3 vs. 83.6% (P = 0.938), respectively. SEQ and SIB provide excellent survival outcomes with few late toxicities. According to our study, SIB with a satisfactory dose-volume constraint to nearby critical organs is the technique of choice for NPC treatment due to its convenience.
Improved minimum cost and maximum power two stage genome-wide association study designs.
Stanhope, Stephen A; Skol, Andrew D
2012-01-01
In a two stage genome-wide association study (2S-GWAS), a sample of cases and controls is allocated into two groups, and genetic markers are analyzed sequentially with respect to these groups. For such studies, experimental design considerations have primarily focused on minimizing study cost as a function of the allocation of cases and controls to stages, subject to a constraint on the power to detect an associated marker. However, most treatments of this problem implicitly restrict the set of feasible designs to only those that allocate the same proportions of cases and controls to each stage. In this paper, we demonstrate that removing this restriction can improve the cost advantages demonstrated by previous 2S-GWAS designs by up to 40%. Additionally, we consider designs that maximize study power with respect to a cost constraint, and show that recalculated power maximizing designs can recover a substantial amount of the planned study power that might otherwise be lost if study funding is reduced. We provide open source software for calculating cost minimizing or power maximizing 2S-GWAS designs.
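The design search described above, with the equal-proportion restriction removed, can be illustrated by a toy grid search over separate stage-1 fractions for cases and controls. Everything here is a simplifying assumption for illustration (a normal-approximation power model, independent stages rather than the real joint analysis, and arbitrary effect-size, threshold, and cost numbers), not the paper's method:

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist()

def stage_power(n_ca, n_co, delta, alpha):
    """Power of a one-sided two-sample z-test with standardized effect delta."""
    ncp = delta / sqrt(1 / n_ca + 1 / n_co)
    return 1 - Z.cdf(Z.inv_cdf(1 - alpha) - ncp)

def cost(n1, n2, alpha1, M=300_000, c=0.01):
    """Stage 1 genotypes all M markers; stage 2 genotypes ~alpha1*M survivors."""
    return c * (n1 * M + n2 * alpha1 * M)

N_CA = N_CO = 1000          # available cases / controls (toy numbers)
DELTA, ALPHA2 = 0.30, 1e-4  # effect size and stage-2 threshold (assumed)

best = None
for f_ca in [i / 20 for i in range(1, 20)]:      # stage-1 fraction of cases
    for f_co in [i / 20 for i in range(1, 20)]:  # ...of controls (may differ)
        for alpha1 in (1e-3, 5e-3, 1e-2, 5e-2):
            p1 = stage_power(f_ca * N_CA, f_co * N_CO, DELTA, alpha1)
            p2 = stage_power((1 - f_ca) * N_CA, (1 - f_co) * N_CO, DELTA, ALPHA2)
            if p1 * p2 >= 0.80:  # stages treated as independent: a simplification
                n1 = f_ca * N_CA + f_co * N_CO
                c_ = cost(n1, 2000 - n1, alpha1)
                if best is None or c_ < best[0]:
                    best = (c_, f_ca, f_co, alpha1)

print("min-cost design (cost, f_cases, f_controls, alpha1):", best)
```

With equal numbers of available cases and controls this toy model happens to favor an equal split; the paper's cost gains arise under richer power models and sample configurations, where the optimal case and control fractions can differ.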
Scheiber, Caroline
2017-09-01
This study explored whether the Kaufman Assessment Battery for Children-Second Edition (KABC-II) predicted academic achievement outcomes on the Kaufman Test of Educational Achievement-Second Edition (KTEA-II) equally well across a representative sample of African American, Hispanic, and Caucasian school-aged children (N = 2,001) in three grade groups (1-4, 5-8, 9-12). Of interest was possible prediction bias in the slope and intercept of the five underlying Cattell-Horn-Carroll (CHC) cognitive factors of the KABC-II (Sequential/Gsm [Short-Term Memory], Learning/Glr [Long-Term Storage and Retrieval], Simultaneous/Gv [Visual Processing], Planning/Gf [Fluid Reasoning], and Knowledge/Gc [Crystallized Ability]) in estimating reading, writing, and math. Structural equation modeling techniques demonstrated a lack of bias in the slopes; however, four of the five CHC indexes showed a persistent overprediction of the minority groups' achievement in the intercept. This overprediction is likely attributable to institutional or societal factors that limit the students' ability to achieve to their fullest potential.
Strategic workforce planning for a multihospital, integrated delivery system.
Datz, David; Hallberg, Colleen; Harris, Kathy; Harrison, Lisa; Samples, Patience
2012-01-01
Banner Health has long recognized the need to anticipate, beyond immediate operational realities or even annual budgeting projections, the workforce needs of the future. Thus, in 2011, Banner implemented a workforce planning model that included structures, processes, and tools for predicting workforce needs, with particular focus on identified critical systemwide practice areas. The model incorporates labor management tools and processes into more strategic, broad-view, long-term assessment and planning mechanisms. Sequentially tying the workforce planning lifecycle to the organization's strategy and financial planning process supports alignment of goals, objectives, and resource allocation. Collaboration among strategy, finance, human resources, and operations has enabled us to identify critical position groups based on 3-year strategic priorities. By engaging leaders from across the organization, focusing on activities at the facility, regional, and system levels, and building in mechanisms for accountability, we are now continuously evaluating our delivery models and the competencies and preparation staff need to function effectively within them, and developing and implementing action plans designed to ensure adequate numbers of staff whose competencies suit the work expected of them.
Supersonics/Airport Noise Plan: An Evolutionary Roadmap
NASA Technical Reports Server (NTRS)
Bridges, James
2011-01-01
This presentation discusses the plan for the Airport Noise Tech Challenge Area of the Supersonics Project. It is given in the context of strategic planning exercises being done in other projects, to show the strategic aspects of the Airport Noise plan rather than detailed task lists. The essence of this strategic view is the decomposition of the research plan by Concepts and by Tools. Tools (computational, experimental) are the part of the plan that resources (such as researchers) most readily identify with, while Concepts (here, noise reduction technologies or aircraft configurations) are the aspects that project management and outside reviewers most appreciate as deliverables and milestones. By carefully cross-linking the two so that Concepts are addressed sequentially (roughly one after another) by researchers developing and applying their Tools simultaneously (in parallel with one another), the researchers can deliver milestones at a reasonable pace while doing the longer-term development that most Tools in aeroacoustics require. An example of this simultaneous application of tools was given for the Concept of High Aspect Ratio Nozzles. The presentation concluded with a few ideas on how this strategic view could be applied to the Subsonic Fixed Wing Project's Quiet Aircraft Tech Challenge Area as it works through its current roadmapping exercise.
Passive Baited Sequential Fly Trap
USDA-ARS?s Scientific Manuscript database
Sampling fly populations associated with human populations is needed to understand diel behavior and to monitor population densities before and after control operations. Population control measures are dependent on the results of monitoring efforts as they may provide insight into the fly behavior ...
Comparison of chain sampling plans with single and double sampling plans
NASA Technical Reports Server (NTRS)
Stephens, K. S.; Dodge, H. F.
1976-01-01
The efficiency of chain sampling is examined through matching of operating characteristic (OC) curves of chain sampling plans (ChSP) with single and double sampling plans. In particular, the operating characteristics of some ChSP-0,3 and ChSP-1,3 plans, as well as ChSP-0,4 and ChSP-1,4 plans, are presented, where the number pairs are the first and second cumulative acceptance numbers. Because the ChSP procedure uses cumulative results from two or more samples, and because its parameters can be varied to produce a wide variety of operating characteristics, the question arises of whether such plans can provide a given protection with less inspection than single or double sampling plans. The operating ratio values reported illustrate the possibilities of matching single and double sampling plans with ChSP. It is shown that chain sampling plans provide improved efficiency over single and double sampling plans having substantially the same operating characteristics.
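The chaining idea can be sketched with Dodge's original ChSP-1, the simplest member of the family (a one-number plan, not the two-number ChSP-c1,c2 plans studied above): a lot is accepted on zero defectives, or on exactly one defective if the preceding i samples were all defect-free. Its OC function shows how chaining prior results lifts the curve at low defect rates relative to a c = 0 single sampling plan of the same size:

```python
from math import comb

def binom_pmf(n, d, p):
    return comb(n, d) * p**d * (1 - p)**(n - d)

def oc_single(n, c, p):
    """Single sampling plan: accept the lot if at most c defectives in n."""
    return sum(binom_pmf(n, d, p) for d in range(c + 1))

def oc_chsp1(n, i, p):
    """Dodge's ChSP-1: accept on 0 defectives, or on exactly 1 defective
    provided the preceding i samples of n each contained 0 defectives."""
    p0 = binom_pmf(n, 0, p)
    return p0 + binom_pmf(n, 1, p) * p0**i

for p in (0.005, 0.01, 0.02, 0.05):
    print(f"p={p:.3f}: single n=20,c=0 -> {oc_single(20, 0, p):.3f}, "
          f"ChSP-1 n=20,i=2 -> {oc_chsp1(20, 2, p):.3f}")
```

As i grows the extra acceptance route becomes harder to use and the ChSP-1 curve collapses onto the c = 0 single sampling curve, which is how the family interpolates between single-sample behaviors without increasing the sample size n.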
Simultaneous beam sampling and aperture shape optimization for SPORT.
Zarepisheh, Masoud; Li, Ruijiang; Ye, Yinyu; Xing, Lei
2015-02-01
Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, and gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet no optimization algorithm exists for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, the subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates, and provides a good starting point for the subsequent optimization; it also adds new stations during the algorithm when beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of the search space not reachable by the subgradient method. By combining these three techniques, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. A SPORT optimization framework with seamless integration of three complementary algorithms (column generation, the subgradient method, and pattern search) was established. The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case.
It significantly improved the target conformality and at the same time critical structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for two PTVs, cord and brainstem max doses, and right parotid gland mean dose were improved, respectively, by about 7%, 37%, 12%, and 16%. The proposed method automatically determines the number of the stations required to generate a satisfactory plan and optimizes simultaneously the involved station parameters, leading to improved quality of the resultant treatment plans as compared with the conventional IMRT plans.
Simultaneous beam sampling and aperture shape optimization for SPORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei, E-mail: Lei@stanford.edu
Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decisionmore » variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds the new stations during the algorithm if beneficial. For each update resulted from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques together, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamlessly integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. 
The proposed technique was applied to two previously treated clinical cases: a head and neck case and a prostate case. It significantly improved both target conformality and critical-structure sparing compared with conventional intensity modulated radiation therapy (IMRT). In the head and neck case, for example, the average PTV coverage D99% for the two PTVs, the cord and brainstem maximum doses, and the right parotid gland mean dose were improved by about 7%, 37%, 12%, and 16%, respectively. Conclusions: The proposed method automatically determines the number of stations required to generate a satisfactory plan and simultaneously optimizes the involved station parameters, leading to improved quality of the resultant treatment plans as compared with conventional IMRT plans.
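The column generation idea described above (add the most beneficial station until the improvement saturates) can be illustrated with a toy least-squares analogue. This is a simplified sketch of the generic technique, not the authors' SPORT solver; the matrix `A` (candidate "station" columns) and target `b` are illustrative.

```python
import numpy as np

def greedy_column_generation(A, b, tol=1e-6):
    """Column-generation-style greedy loop: repeatedly add the column
    (candidate "station") whose inclusion most reduces the least-squares
    residual ||A_S x - b||^2, stopping when the improvement saturates."""
    selected = []
    residual = np.linalg.norm(b) ** 2
    while len(selected) < A.shape[1]:
        best_j, best_res = None, residual
        for j in range(A.shape[1]):
            if j in selected:
                continue
            cols = A[:, selected + [j]]
            x, *_ = np.linalg.lstsq(cols, b, rcond=None)
            r = np.linalg.norm(cols @ x - b) ** 2
            if r < best_res - tol:
                best_j, best_res = j, r
        if best_j is None:  # no candidate improves the plan: saturation
            break
        selected.append(best_j)
        residual = best_res
    return selected

# Two candidate columns each cover half the target; the third covers it all.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
cols = greedy_column_generation(A, b)  # → [2]; one combined "station" suffices
```

In the paper this greedy outer loop is interleaved with local refinement (subgradient steps and pattern search); the sketch shows only the station-selection logic.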
Proposed hardware architectures of particle filter for object tracking
NASA Astrophysics Data System (ADS)
Abd El-Halym, Howida A.; Mahmoud, Imbaby Ismail; Habib, SED
2012-12-01
In this article, efficient hardware architectures for the particle filter (PF) are presented. We propose three different architectures for implementing the Sequential Importance Resampling Filter (SIRF). The first architecture is a two-step sequential PF machine, where particle sampling, weighting, and output calculations are carried out in parallel during the first step, followed by sequential resampling in the second step. For the weight computation step, a piecewise linear function is used instead of the classical exponential function, which decreases the complexity of the architecture without degrading the results. The second architecture speeds up the resampling step via a parallel, rather than a serial, architecture; it targets a balance between hardware resources and speed of operation. The third architecture implements the SIRF as a distributed PF composed of several processing elements and a central unit. All the proposed architectures were captured in VHDL, synthesized using the Xilinx environment, and verified using the ModelSim simulator. Synthesis results confirmed the resource-reduction and speed-up advantages of our architectures.
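A software analogue of one SIRF iteration, including the piecewise-linear replacement for the exponential weight function, can be sketched as follows. This is a simplified behavioral model, not the article's VHDL design; the process model, noise levels, and knot values (a coarse fit to exp(-x²/2)) are illustrative.

```python
import numpy as np

def piecewise_linear_gauss(x, knots, values):
    """Piecewise-linear stand-in for exp(-x^2/2): a hardware-friendly
    weight function that avoids evaluating the exponential."""
    return np.interp(np.abs(x), knots, values)

def sir_step(particles, weights, observation, rng, process_std=0.5, obs_std=1.0):
    # 1) Sampling: propagate particles through a random-walk process model
    particles = particles + rng.normal(0.0, process_std, size=particles.shape)
    # 2) Weighting: piecewise-linear approximation of the Gaussian likelihood
    knots = np.array([0.0, 1.0, 2.0, 3.0])
    values = np.array([1.0, 0.6, 0.13, 0.0])  # coarse fit to exp(-x^2/2)
    weights = weights * piecewise_linear_gauss(
        (observation - particles) / obs_std, knots, values)
    weights = weights / (weights.sum() + 1e-300)
    # 3) Resampling: systematic resampling (serial in the first architecture)
    positions = (rng.random() + np.arange(len(particles))) / len(particles)
    indices = np.searchsorted(np.cumsum(weights), positions)
    particles = particles[np.minimum(indices, len(particles) - 1)]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

rng = np.random.default_rng(0)
particles = rng.normal(0.0, 2.0, 500)
weights = np.full(500, 1.0 / 500)
for obs in [0.5, 1.0, 1.5, 2.0]:   # a slowly drifting 1-D target track
    particles, weights = sir_step(particles, weights, obs, rng)
estimate = particles.mean()        # posterior mean tracks the observations
```

The second and third architectures in the article parallelize step 3 and distribute steps 1 and 2 across processing elements, respectively; the dataflow per particle is the same.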
[Sequential monitoring of renal transplant with aspiration cytology].
Manfro, R C; Gonçalves, L F; de Moura, L A
1998-01-01
To evaluate the utility of kidney aspiration cytology in the sequential monitoring of acute rejection in renal transplant patients, thirty patients underwent 376 aspirations. The clinical diagnoses were established independently. The representativity of the samples reached 82.7%. The total corrected increment index and the number of immunoactivated cells were higher during acute rejection than during normal allograft function, acute tubular necrosis, or cyclosporine nephrotoxicity. The diagnostic parameters for acute rejection were: sensitivity 71.8%, specificity 87.3%, positive predictive value 50.9%, negative predictive value 94.9%, and accuracy 84.9%. False positive results were mainly related to cytomegalovirus infection or to the administration of OKT3. In 10 out of 11 false negative results, incipient immunoactivation was present, alerting to the possibility of acute rejection. Kidney aspiration cytology is a useful tool for the sequential monitoring of acute rejection in renal transplant patients. The best results are obtained when the aspiration cytology findings are analyzed together with the clinical data.
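The reported sensitivity, specificity, predictive values, and accuracy all derive from a 2×2 confusion matrix. A minimal sketch with illustrative counts (not the study's actual data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic test statistics for a 2x2 confusion matrix:
    tp/fp/fn/tn = true positives, false positives, false negatives,
    true negatives against the reference (clinical) diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts only (the abstract reports rates, not raw counts)
m = diagnostic_metrics(tp=40, fp=10, fn=5, tn=45)
# m["ppv"] == 0.8, m["accuracy"] == 0.85
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of rejection in the sample, which is why the study's PPV (50.9%) is much lower than its specificity.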
Yu, Zhan; Li, Yuanyang; Liu, Lisheng; Guo, Jin; Wang, Tingfeng; Yang, Guoqing
2017-11-10
The speckle pattern (line-by-line) sequential extraction (SPSE) metric is proposed, based on one-dimensional speckle intensity level-crossing theory. Through sequential extraction of the received speckle information, speckle metrics for estimating the variation of the focusing spot size on a remote diffuse target are obtained. Based on simulation, we discuss the SPSE metric's range of application under theoretical conditions and how the aperture size of the observation system affects metric performance. The results of the analyses are verified by experiment. The method applies to the detection of relatively static targets (speckle jitter frequency lower than the CCD sampling frequency). The SPSE metric can determine the variation of the focusing spot size over a long distance and, under some conditions, estimate the spot size itself. Monitoring and feedback of the far-field spot can therefore be implemented in laser focusing system applications, helping such systems optimize their focusing performance.
Zou, Wei; Sissons, Mike; Gidley, Michael J; Gilbert, Robert G; Warren, Frederick J
2015-12-01
The aim of the present study is to characterise the influence of gluten structure on the kinetics of starch hydrolysis in pasta. Spaghetti and powdered pasta were prepared from three different cultivars of durum semolina, and starch was also purified from each cultivar. Digestion kinetic parameters were obtained through logarithm-of-slope analysis, allowing identification of sequential digestion steps. Purified starch and semolina were digested following a single first-order rate constant, while pasta and powdered pasta followed two sequential first-order rate constants. Rate coefficients were altered by pepsin hydrolysis. Confocal microscopy revealed that, following cooking, starch granules were completely swollen for starch, semolina and pasta powder samples. In pasta, they were completely swollen in the external regions, partially swollen in the intermediate region and almost intact in the pasta strand centre. Gluten entrapment accounts for sequential kinetic steps in starch digestion of pasta; the compact microstructure of pasta also reduces digestion rates. Copyright © 2015 Elsevier Ltd. All rights reserved.
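Logarithm-of-slope (LOS) analysis exploits the fact that, for first-order digestion kinetics C(t) = C∞(1 − e^(−kt)), ln(dC/dt) = ln(kC∞) − kt is linear in t with slope −k, so a break in the LOS plot reveals sequential steps. A minimal sketch on synthetic single-step data (the rate constant and time grid are illustrative, not the paper's measurements):

```python
import numpy as np

def los_rate_constant(t, c):
    """Logarithm-of-slope (LOS) analysis for first-order digestion:
    C(t) = C_inf * (1 - exp(-k t))  =>  ln(dC/dt) = ln(k * C_inf) - k * t.
    A single straight LOS line indicates one first-order step; a slope
    break indicates sequential steps. Returns the fitted k (= -slope)."""
    dcdt = np.gradient(c, t)          # numerical derivative of the curve
    mask = dcdt > 0                   # log is only defined for positive slopes
    slope, intercept = np.polyfit(t[mask], np.log(dcdt[mask]), 1)
    return -slope

# Synthetic single-step digestion curve with k = 0.05 min^-1
t = np.linspace(0.5, 120, 60)
c = 100.0 * (1.0 - np.exp(-0.05 * t))
k_est = los_rate_constant(t, c)  # ≈ 0.05
```

For pasta, the paper's two-step behavior would appear as two linear LOS segments, each fitted separately over its own time window.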
Lopes, G; Costa, E T S; Penido, E S; Sparks, D L; Guilherme, L R G
2015-09-01
Mining and smelting activities are potential sources of heavy metal contamination, which poses a threat to human health and ecological systems. This study investigated single and sequential extractions of Zn, Pb, and Cd in Brazilian soils affected by mining and smelting activities. Soils from a Zn mining area (soils A, B, C, D, E, and the control soil) and a tailing from a smelting area were collected in Minas Gerais state, Brazil. The samples were subjected to single (using Mehlich I solution) and sequential extractions. The risk assessment code (RAC), the redistribution index (Uts), and the reduced partition index (IR) were applied to the sequential extraction data. Zinc and Cd in soil samples from the mining area were found mainly associated with carbonate forms; this pattern did not occur for Pb. Moreover, the Fe-Mn oxide and residual fractions made important contributions for Zn and Pb in those soils. For the tailing, more than 70% of the Zn and Cd was released in the exchangeable fraction, showing a much higher mobility and availability of these metals at this site, which was also supported by the RAC and IR results. These differences in mobility might be due to different chemical forms of the metals at the two sites, attributable to natural occurrence as well as ore processing.
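The RAC used above is conventionally computed as the percentage of total metal held in the most labile fractions (exchangeable plus carbonate-bound) of the sequential extraction. A minimal sketch with illustrative fraction values (not the study's measurements):

```python
def risk_assessment_code(fractions):
    """Risk assessment code (RAC): percentage of the total metal held in
    the most labile fractions (exchangeable + carbonate-bound) of a
    sequential extraction. Higher RAC means higher environmental risk;
    values above 50% are conventionally classed as very high risk."""
    labile = fractions["exchangeable"] + fractions["carbonate"]
    return 100.0 * labile / sum(fractions.values())

# Illustrative concentrations (mg/kg) loosely mirroring the tailing's Zn
# pattern, where most of the metal sat in the exchangeable fraction
zn_tailing = {"exchangeable": 710.0, "carbonate": 60.0,
              "fe_mn_oxides": 120.0, "organic": 40.0, "residual": 70.0}
rac = risk_assessment_code(zn_tailing)  # → 77.0 (very high risk class)
```

The same fraction dictionary feeds the redistribution and reduced partition indices, which additionally weight how deep into the extraction sequence the metal is bound.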
Santos, A J G; Mazzilli, B P; Fávaro, D I T; Silva, P S C
2006-01-01
Phosphogypsum is a waste produced by the phosphate fertilizer industry. Although phosphogypsum is mainly calcium sulphate dihydrate, it contains elevated levels of impurities, which originate from the source phosphate rock used in the phosphoric acid production. Among these impurities, radionuclides from 238U and 232Th decay series are of most concern due to their radiotoxicity. Other elements, such as rare earth elements (REE) and Ba are also enriched in the phosphogypsum. The bioavailability of radionuclides (226Ra, 210Pb and 232Th), rare earth elements and Ba to the surrounding aquatic system was evaluated by the application of sequential leaching of the phosphogypsum samples from the Brazilian phosphoric acid producers. The sequential extraction results show that most of the radium and lead are located in the "iron oxide" (non-CaSO4) fraction, and that only 13-18% of these radionuclides are distributed in the most labile fraction. Th, REE and Ba were found predominantly in the residual phase, which corresponds to a small fraction of the phosphate rock or monazite that did not react and to insoluble compounds such as sulphates, phosphates and silicates. It can be concluded that although all these elements are enriched in the phosphogypsum samples they are not associated with CaSO4 itself and therefore do not represent a threat to the surrounding aquatic environment.
Lee, Chang-Hyung; Derby, Richard; Choi, Hyun-Seok; Lee, Sang-Heon; Kim, Se Hoon; Kang, Yoon Kyu
2010-01-01
One technique in radiofrequency neurotomy uses 2 electrodes that are placed simultaneously to lie parallel to one another. Comparing lesions in cadaveric interspinous ligament tissue and measuring the temperature change in egg white allows the lesion area to be measured quantitatively. Fresh cadaver spinal tissue and egg white were used. A series of samples was prepared with the electrodes placed 1 to 7 mm apart. Using radiofrequency, the needle electrodes were heated sequentially or simultaneously, and the extent and temperature of the escaped lesion area were measured. Samples of cadaver interspinous ligament showed that sequential heating limits the placement of the needle electrodes to at most 2 mm apart, versus up to 4 mm apart when heated simultaneously. In egg white, the temperature at the escaped lesion area decreased with distance. There was a significant difference in temperature at the escaped lesion area up to 6 mm apart, and the temperature was above 50 degrees Celsius up to 5 mm in the simultaneous lesion and 3 mm in the sequential lesion. The limitations of this study include cadaveric experimentation and the use of interspinous ligament rather than the medial branch of the dorsal ramus, which is difficult to identify. Heating the 2 electrodes simultaneously appears to coagulate a wider area and potentially produce better results in less time.
Villa, Francesco
1982-01-01
Method and apparatus for sequentially scanning a plurality of target elements with an electron scanning beam modulated in accordance with variations in a high-frequency analog signal to provide discrete analog signal samples representative of successive portions of the analog signal; coupling the discrete analog signal samples from each of the target elements to a different one of a plurality of high speed storage devices; converting the discrete analog signal samples to equivalent digital signals; and storing the digital signals in a digital memory unit for subsequent measurement or display.
Garey, Lorra; Cheema, Mina K; Otal, Tanveer K; Schmidt, Norman B; Neighbors, Clayton; Zvolensky, Michael J
2016-10-01
Smoking rates are markedly higher among trauma-exposed individuals relative to non-trauma-exposed individuals. Extant work suggests that both perceived stress and negative affect reduction smoking expectancies are independent mechanisms that link trauma-related symptoms and smoking. Yet, no work has examined perceived stress and negative affect reduction smoking expectancies as potential explanatory variables for the relation between trauma-related symptom severity and smoking in a sequential pathway model. The present study utilized a sample of treatment-seeking, trauma-exposed smokers (n = 363; 49.0% female) to examine perceived stress and negative affect reduction expectancies for smoking as potential sequential explanatory variables linking trauma-related symptom severity and nicotine dependence, perceived barriers to smoking cessation, and severity of withdrawal-related problems and symptoms during past quit attempts. As hypothesized, perceived stress and negative affect reduction expectancies had a significant sequential indirect effect on trauma-related symptom severity and the criterion variables. These findings further elucidate the complex pathways through which trauma-related symptoms contribute to smoking behavior and cognitions, and highlight the importance of addressing perceived stress and negative affect reduction expectancies in smoking cessation programs among trauma-exposed individuals. (Am J Addict 2016;25:565-572). © 2016 American Academy of Addiction Psychiatry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akyildiz, Halil I.; Jur, Jesse S., E-mail: jsjur@ncsu.edu
2015-03-15
The effect of exposure conditions and surface area on hybrid material formation during sequential vapor infiltration of trimethylaluminum (TMA) into polyamide 6 (PA6) and polyethylene terephthalate (PET) fibers is investigated. Mass gain of the fabric samples after infiltration was examined to elucidate the reaction extent with an increasing number of sequential TMA single exposures, each defined as a TMA dose followed by a hold period. An interdependent relationship between dosing time and holding time on hybrid material formation is observed for TMA exposure to PET, exhibited as a linear trend between the mass gain and the total exposure (dose time × hold time × number of sequential exposures). Deviation from this linear relationship is observed only under very long dose or hold times. In comparison, the amount of hybrid material formed during sequential exposures to PA6 fibers is found to be highly dependent on the amount of TMA dosed. Increasing the surface area of the fiber by altering its cross-sectional dimension is shown to have little effect on the reaction behavior but does allow for improved diffusion of the TMA into the fiber. This work allows for the projection of exposure parameters necessary for future high-throughput hybrid modifications of polymer materials.
Automated Discovery and Modeling of Sequential Patterns Preceding Events of Interest
NASA Technical Reports Server (NTRS)
Rohloff, Kurt
2010-01-01
The integration of emerging data manipulation technologies has enabled a paradigm shift in practitioners' abilities to understand and anticipate events of interest in complex systems. Example events of interest include outbreaks of socio-political violence in nation-states. Rather than relying on human-centric modeling efforts that are limited by the availability of subject-matter experts (SMEs), automated data processing technologies have enabled the development of innovative automated complex system modeling and predictive analysis technologies. We introduce one such emerging modeling technology: the sequential pattern methodology. We have applied the sequential pattern methodology to automatically identify patterns of observed behavior that precede outbreaks of socio-political violence such as riots, rebellions, and coups in nation-states. The sequential pattern methodology is a groundbreaking approach to automated complex system model discovery because it generates easily interpretable patterns based on direct observations of sampled factor data, yielding a deeper understanding of societal behaviors that is tolerant of observation noise and missing data. The discovered patterns are simple to interpret and mimic human identification of observed trends in temporal data. Discovered patterns also provide an automated forecasting ability: we discuss an example of using discovered patterns coupled with a rich data environment to forecast various types of socio-political violence in nation-states.
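The abstract does not specify the discovery algorithm, but the kind of interpretable pattern it describes, an ordered sequence of observations preceding an event, can be tested with a simple gap-tolerant subsequence matcher. A hedged sketch with hypothetical event labels (the matcher and its gap rule are illustrative, not the paper's method):

```python
def matches_pattern(events, pattern, max_gap=2):
    """Check whether `pattern` occurs as an ordered subsequence of
    `events`, allowing at most `max_gap` intervening observations
    between consecutive pattern items. A simplified, interpretable
    sequential-pattern test (partial matches restart from scratch)."""
    i = 0    # index of the next pattern item to match
    gap = 0  # observations since the last matched item
    for e in events:
        if e == pattern[i]:
            i += 1
            gap = 0
            if i == len(pattern):
                return True
        elif i > 0:
            gap += 1
            if gap > max_gap:
                i = 0   # too large a gap: restart matching
                gap = 0
    return False

# Hypothetical observation stream for one nation-state
events = ["protest", "curfew", "arrests", "strike", "riot"]
assert matches_pattern(events, ["protest", "arrests", "riot"])
```

In a forecasting setting, a discovered pattern minus its final event would be matched against the live observation stream to warn that the event may be imminent.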
Everyday robotic action: lessons from human action control
de Kleijn, Roy; Kachergis, George; Hommel, Bernhard
2014-01-01
Robots are increasingly capable of performing everyday human activities such as cooking, cleaning, and doing the laundry. This requires the real-time planning and execution of complex, temporally extended sequential actions under high degrees of uncertainty, which provides many challenges to traditional approaches to robot action control. We argue that important lessons in this respect can be learned from research on human action control. We provide a brief overview of available psychological insights into this issue and focus on four principles that we think could be particularly beneficial for robot control: the integration of symbolic and subsymbolic planning of action sequences, the integration of feedforward and feedback control, the clustering of complex actions into subcomponents, and the contextualization of action-control structures through goal representations. PMID:24672474
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM10-2.5
Code of Federal Regulations, 2010 CFR
2010-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM10-2.5
Code of Federal Regulations, 2011 CFR
2011-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
40 CFR 53.35 - Test procedure for Class II and Class III methods for PM2.5 and PM10-2.5
Code of Federal Regulations, 2012 CFR
2012-07-01
... reference method samplers shall be of single-filter design (not multi-filter, sequential sample design... and multiplicative bias (comparative slope and intercept). (1) For each test site, calculate the mean...
Code of Federal Regulations, 2010 CFR
2010-07-01
... substantial deviations from the design specifications of the sampler specified for reference methods in... general requirements as an ISO 9001-registered facility for the design and manufacture of designated... capable of automatically collecting a series of sequential samples. NO means nitrogen oxide. NO 2 means...
Hirakawa, Koji; Katayama, Masaaki; Soh, Nobuaki; Nakano, Koji; Imato, Toshihiko
2006-01-01
A rapid and sensitive immunoassay for the determination of vitellogenin (Vg) is described. The method involves a sequential injection analysis (SIA) system equipped with an amperometric detector and a neodymium magnet. Magnetic beads, onto which the antigen (Vg) was immobilized, were used as a solid support in the immunoassay. The introduction, trapping, and release of the magnetic beads in an immunoreaction cell were controlled by means of the neodymium magnet and by adjusting the flow of the carrier solution. The immunoassay was based on an indirect competitive immunoreaction of an alkaline phosphatase (ALP)-labeled anti-Vg monoclonal antibody between the Vg immobilized on the magnetic beads and the Vg in the sample solution. The immobilization of Vg on the beads involved coupling an amino group moiety of Vg to the magnetic beads after activation of a carboxylate moiety on the surface of the beads, which had been coated with a polylactate film. The Vg-immobilized magnetic beads were introduced and trapped in the immunoreaction cell equipped with the neodymium magnet; a Vg sample solution containing the ALP-labeled anti-Vg antibody at a constant concentration and a p-aminophenyl phosphate (PAPP) solution were then sequentially introduced into the immunoreaction cell. The product of the enzyme reaction of PAPP with ALP on the antibody, p-aminophenol, was transported to an amperometric detector, the applied voltage of which was set at +0.2 V vs. an Ag/AgCl reference electrode. A sigmoid calibration curve was obtained when the logarithm of the Vg concentration was plotted against the peak current of the amperometric detector using standard Vg sample solutions of various concentrations (0-500 ppb). The time required for the analysis is less than 15 min.
Néri-Quiroz, José; Canto, Fabrice; Guillerme, Laurent; Couston, Laurent; Magnaldo, Alastair; Dugas, Vincent
2016-10-01
A miniaturized and automated approach for the determination of free acidity in solutions containing uranium(VI) is presented. The measurement technique is based on the concept of sequential injection analysis with on-line spectroscopic detection. The proposed methodology relies on the complexation and alkalimetric titration of nitric acid using a pH 5.6 sodium oxalate solution. The titration process is followed by UV/VIS detection at 650 nm thanks to the addition of Congo red as a universal pH indicator. The mixing sequence as well as the method's validity was investigated by numerical simulation. This new analytical design allows fast (2.3 min), reliable, and accurate free acidity determination of low-volume samples (10 µL) containing a uranium/[H(+)] mole ratio of 1:3, with a relative standard deviation of <7.0% (n=11). The linearity of the free nitric acid measurement is excellent up to 2.77 mol L(-1), with a correlation coefficient (R(2)) of 0.995. The method is specific: the presence of actinide ions up to 0.54 mol L(-1) does not interfere with the determination of free nitric acid. In addition to automation, the developed sequential injection analysis method greatly improves on the standard off-line oxalate complexation and alkalimetric titration method by reducing the required sample volume a thousandfold, the nuclear waste per analysis forty-fold, and the analysis time eight-fold. These parameters are important, especially in nuclear-related applications, for improving laboratory safety, reducing personnel exposure to radioactive samples, and drastically reducing environmental impacts and analytical radioactive waste. Copyright © 2016 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Cernat, Alexandru; Lynn, Peter
2018-01-01
This article is concerned with the extent to which the propensity to participate in a web face-to-face sequential mixed-mode survey is influenced by the ability to communicate with sample members by e-mail in addition to mail. Researchers may be able to collect e-mail addresses for sample members and to use them subsequently to send survey…
Array-based photoacoustic spectroscopy
Autrey, S. Thomas; Posakony, Gerald J.; Chen, Yu
2005-03-22
Methods and apparatus for simultaneous or sequential, rapid analysis of multiple samples by photoacoustic spectroscopy are disclosed. A photoacoustic spectroscopy sample array including a body having at least three recesses or affinity masses connected thereto is used in conjunction with a photoacoustic spectroscopy system. At least one acoustic detector is positioned near the recesses or affinity masses for detection of acoustic waves emitted from species of interest within the recesses or affinity masses.
Metabolomic analysis of urine samples by UHPLC-QTOF-MS: Impact of normalization strategies.
Gagnebin, Yoric; Tonoli, David; Lescuyer, Pierre; Ponte, Belen; de Seigneux, Sophie; Martin, Pierre-Yves; Schappler, Julie; Boccard, Julien; Rudaz, Serge
2017-02-22
Among the various biological matrices used in metabolomics, urine is a biofluid of major interest because of its non-invasive collection and its availability in large quantities. However, significant sources of variability in urine metabolomics based on UHPLC-MS are related to the analytical drift and variation of the sample concentration, thus requiring normalization. A sequential normalization strategy was developed to remove these detrimental effects, including: (i) pre-acquisition sample normalization by individual dilution factors to narrow the concentration range and to standardize the analytical conditions, (ii) post-acquisition data normalization by quality control-based robust LOESS signal correction (QC-RLSC) to correct for potential analytical drift, and (iii) post-acquisition data normalization by MS total useful signal (MSTUS) or probabilistic quotient normalization (PQN) to prevent the impact of concentration variability. This generic strategy was performed with urine samples from healthy individuals and was further implemented in the context of a clinical study to detect alterations in urine metabolomic profiles due to kidney failure. In the case of kidney failure, the relation between creatinine/osmolality and the sample concentration is modified, and relying only on these measurements for normalization could be highly detrimental. The sequential normalization strategy was demonstrated to significantly improve patient stratification by decreasing the unwanted variability and thus enhancing data quality. Copyright © 2016 Elsevier B.V. All rights reserved.
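Step (iii)'s probabilistic quotient normalization (PQN) scales each sample by the median of its feature-wise quotients against a reference spectrum, correcting for overall dilution differences. A minimal sketch (the choice of the median spectrum as reference and the toy data are illustrative):

```python
import numpy as np

def pqn_normalize(X, reference=None):
    """Probabilistic quotient normalization (PQN) for a samples-by-features
    intensity matrix: each sample is divided by the median of its
    feature-wise quotients against a reference spectrum (here the median
    spectrum across samples), removing per-sample dilution effects."""
    X = np.asarray(X, dtype=float)
    if reference is None:
        reference = np.median(X, axis=0)
    quotients = X / reference                # feature-wise quotients per sample
    factors = np.median(quotients, axis=1)   # one dilution factor per sample
    return X / factors[:, None]

# A 2x dilution of the same underlying profile is recovered after PQN
profile = np.array([10.0, 5.0, 2.0, 8.0])
X = np.vstack([profile, 0.5 * profile])
Xn = pqn_normalize(X)                        # the two rows become identical
```

Using the median quotient (rather than the mean or total signal, as in MSTUS) makes the dilution estimate robust to the minority of features that genuinely change between samples, which is why the paper prefers it when renal function alters creatinine and osmolality.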
Capote, F Priego; Jiménez, J Ruiz; de Castro, M D Luque
2007-08-01
An analytical method for the sequential detection, identification, and quantitation of extra virgin olive oil adulteration with four edible vegetable oils (sunflower, corn, peanut, and coconut oils) is proposed. The only data required for this method are the results obtained from an analysis of the lipid fraction by gas chromatography-mass spectrometry. A total of 566 samples (pure oils and samples of adulterated olive oil) were used to develop the chemometric models, which were designed to accomplish, step by step, the three aims of the method: to detect whether an olive oil sample is adulterated, to identify the type of adulterant used in the fraud, and to determine how much adulterant is in the sample. Qualitative analysis was carried out via two chemometric approaches, soft independent modelling of class analogy (SIMCA) and K nearest neighbours (KNN); both exhibited prediction abilities that were always higher than 91% for adulterant detection and 88% for identification of the type of adulterant. Quantitative analysis was based on partial least squares regression (PLSR), which yielded R2 values of >0.90 for calibration and validation sets and thus made it possible to determine adulteration with excellent precision according to the Shenk criteria.
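Of the two qualitative models, KNN is the simplest to sketch. The toy two-feature data below are illustrative stand-ins for the GC-MS lipid-fraction variables used in the paper; the classifier logic itself is standard:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """K nearest neighbours classification: assign the majority label
    among the k training samples closest (Euclidean distance) to x."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Toy 2-feature data: class 0 = pure olive oil, class 1 = adulterated
X_train = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1],
                    [3.0, 3.0], [3.1, 2.9], [2.9, 3.1]])
y_train = np.array([0, 0, 0, 1, 1, 1])
pred = knn_predict(X_train, y_train, np.array([2.8, 3.2]))  # → 1
```

In the paper's sequential scheme, a detection model of this kind runs first; only samples flagged as adulterated are passed to the adulterant-identification model and then to the PLSR quantitation step.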
Carry-over of thermophilic Campylobacter spp. between sequential and adjacent poultry flocks.
Alter, Thomas; Weber, Rita Margarete; Hamedy, Ahmad; Glünder, Gerhard
2011-01-10
Nineteen flocks of four poultry species were monitored at a veterinary field station to investigate the distribution and spread of Campylobacter genotypes between sequential and adjacent flocks. Caecal and liver samples were obtained at frequent intervals from birds of all flocks and examined for Campylobacter. Amplified fragment length polymorphism (AFLP) analysis was performed to genotype Campylobacter isolates. Of the 1643 caecal and liver samples investigated, 452 (27.5%) caecal samples and 11 (0.7%) liver samples contained Campylobacter. Of the caecal isolates, 76.3% were identified as Campylobacter jejuni and 23.7% as Campylobacter coli. Poultry flocks were largely colonized by more than one AFLP type, and an intense exchange of Campylobacter genotypes between different poultry flocks occurred. These findings indicate that multiple genotypes can constitute the Campylobacter population within single poultry flocks, hinting at different sources of exposure and/or genetic drift within the Campylobacter population. Nevertheless, in most flocks single Campylobacter genotypes predominated. Some strains superseded others, resulting in colonization by successive Campylobacter genotypes during the observation period. In conclusion, the data demonstrate that the large genetic diversity of Campylobacter must be considered in epidemiological evaluations and microbial risk assessments of Campylobacter in poultry. Copyright © 2010 Elsevier B.V. All rights reserved.
Ng, Ding-Quan; Lin, Yi-Pin
2016-01-01
In this pilot study, a modified sampling protocol was evaluated for the detection of lead contamination and locating the source of lead release in a simulated premise plumbing system with one-, three- and seven-day stagnation for a total period of 475 days. Copper pipes, stainless steel taps and brass fittings were used to assemble the “lead-free” system. Sequential sampling using 100 mL was used to detect lead contamination while that using 50 mL was used to locate the lead source. Elevated lead levels, far exceeding the World Health Organization (WHO) guideline value of 10 µg·L−1, persisted for as long as five months in the system. “Lead-free” brass fittings were identified as the source of lead contamination. Physical disturbances, such as renovation works, could cause short-term spikes in lead release. Orthophosphate was able to suppress total lead levels below 10 µg·L−1, but caused “blue water” problems. When orthophosphate addition was ceased, total lead levels began to spike within one week, implying that a continuous supply of orthophosphate was required to control total lead levels. Occasional total lead spikes were observed in one-day stagnation samples throughout the course of the experiments. PMID:26927154
Sequential sampling of visual objects during sustained attention.
Jia, Jianrong; Liu, Ling; Fang, Fang; Luo, Huan
2017-06-01
In a crowded visual scene, attention must be distributed efficiently and flexibly over time and space to accommodate different contexts. It is well established that selective attention enhances the corresponding neural responses, presumably implying that attention would persistently dwell on the task-relevant item. Meanwhile, recent studies, mostly in divided attentional contexts, suggest that attention does not remain stationary but samples objects alternately over time, suggesting a rhythmic view of attention. However, it remains unknown whether this dynamic mechanism mediates attentional processes at a general level. Importantly, there is also a complete lack of direct neural evidence reflecting whether and how the brain rhythmically samples multiple visual objects during stimulus processing. To address these issues, in this study, we employed electroencephalography (EEG) and a temporal response function (TRF) approach, which can dissociate responses that exclusively represent a single object from the overall neuronal activity, to examine the spatiotemporal characteristics of attention in various attentional contexts. First, attention, which is characterized by inhibitory alpha-band (approximately 10 Hz) activity in TRFs, switches between attended and unattended objects approximately every 200 ms, indicating sequential sampling even when attention is required to mostly stay on the attended object. Second, the attentional spatiotemporal pattern is modulated by the task context, such that alpha-mediated switching becomes increasingly prominent as the task requires a more uniform distribution of attention. Finally, the switching pattern correlates with attentional behavioral performance. Our work provides direct neural evidence supporting a central role for a temporal organization mechanism in attention, whereby multiple objects are sequentially sorted according to their priority in attentional contexts.
The results suggest that selective attention, in addition to the classically posited attentional "focus," involves a dynamic mechanism for monitoring all objects outside of the focus. Our findings also suggest that attention implements a space (object)-to-time transformation by acting as a series of concatenating attentional chunks that operate on 1 object at a time.
Sequential sampling of visual objects during sustained attention
Jia, Jianrong; Liu, Ling; Fang, Fang
2017-01-01
In a crowded visual scene, attention must be distributed efficiently and flexibly over time and space to accommodate different contexts. It is well established that selective attention enhances the corresponding neural responses, presumably implying that attention would persistently dwell on the task-relevant item. Meanwhile, recent studies, mostly in divided attentional contexts, suggest that attention does not remain stationary but samples objects alternately over time, suggesting a rhythmic view of attention. However, it remains unknown whether the dynamic mechanism essentially mediates attentional processes at a general level. Importantly, there is also a complete lack of direct neural evidence reflecting whether and how the brain rhythmically samples multiple visual objects during stimulus processing. To address these issues, in this study, we employed electroencephalography (EEG) and a temporal response function (TRF) approach, which can dissociate responses that exclusively represent a single object from the overall neuronal activity, to examine the spatiotemporal characteristics of attention in various attentional contexts. First, attention, which is characterized by inhibitory alpha-band (approximately 10 Hz) activity in TRFs, switches between attended and unattended objects every approximately 200 ms, suggesting a sequential sampling even when attention is required to mostly stay on the attended object. Second, the attentional spatiotemporal pattern is modulated by the task context, such that alpha-mediated switching becomes increasingly prominent as the task requires a more uniform distribution of attention. Finally, the switching pattern correlates with attentional behavioral performance. Our work provides direct neural evidence supporting a generally central role of temporal organization mechanism in attention, such that multiple objects are sequentially sorted according to their priority in attentional contexts. 
The results suggest that selective attention, in addition to the classically posited attentional “focus,” involves a dynamic mechanism for monitoring all objects outside of the focus. Our findings also suggest that attention implements a space (object)-to-time transformation by acting as a series of concatenating attentional chunks that operate on 1 object at a time. PMID:28658261
Heat accumulation during sequential cortical bone drilling.
Palmisano, Andrew C; Tai, Bruce L; Belmont, Barry; Irwin, Todd A; Shih, Albert; Holmes, James R
2016-03-01
Significant research exists regarding heat production during single-hole bone drilling. No published data exist regarding repetitive sequential drilling. This study elucidates the phenomenon of heat accumulation for sequential drilling with both Kirschner wires (K wires) and standard two-flute twist drills. It was hypothesized that cumulative heat would result in a higher temperature with each subsequent drill pass. Nine holes in a 3 × 3 array were drilled sequentially on moistened cadaveric tibia bone kept at body temperature (about 37 °C). Four thermocouples were placed at the center of four adjacent holes and 2 mm below the surface. A battery-driven hand drill guided by a servo-controlled motion system was used. Six samples were drilled with each tool (2.0 mm K wire and 2.0 and 2.5 mm standard drills). K wire drilling increased temperature from 5 °C at the first hole to 20 °C at holes 6 through 9. A similar trend was found in standard drills with less significant increments. The maximum temperatures of both tools increased from <0.5 °C to nearly 13 °C. The difference between drill sizes was found to be insignificant (P > 0.05). In conclusion, heat accumulated during sequential drilling, with size difference being insignificant. K wire produced more heat than its twist-drill counterparts. This study has demonstrated the heat accumulation phenomenon and its significant effect on temperature. Maximizing the drilling field and reducing the number of drill passes may decrease bone injury. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
Yoda, Satoshi; Lin, Jessica J; Lawrence, Michael S; Burke, Benjamin J; Friboulet, Luc; Langenbucher, Adam; Dardaei, Leila; Prutisto-Chang, Kylie; Dagogo-Jack, Ibiayi; Timofeevski, Sergei; Hubbeling, Harper; Gainor, Justin F; Ferris, Lorin A; Riley, Amanda K; Kattermann, Krystina E; Timonina, Daria; Heist, Rebecca S; Iafrate, A John; Benes, Cyril H; Lennerz, Jochen K; Mino-Kenudson, Mari; Engelman, Jeffrey A; Johnson, Ted W; Hata, Aaron N; Shaw, Alice T
2018-06-01
The cornerstone of treatment for advanced ALK-positive lung cancer is sequential therapy with increasingly potent and selective ALK inhibitors. The third-generation ALK inhibitor lorlatinib has demonstrated clinical activity in patients who failed previous ALK inhibitors. To define the spectrum of ALK mutations that confer lorlatinib resistance, we performed accelerated mutagenesis screening of Ba/F3 cells expressing EML4-ALK. Under comparable conditions, N-ethyl-N-nitrosourea (ENU) mutagenesis generated numerous crizotinib-resistant but no lorlatinib-resistant clones harboring single ALK mutations. In similar screens with EML4-ALK containing single ALK resistance mutations, numerous lorlatinib-resistant clones emerged harboring compound ALK mutations. To determine the clinical relevance of these mutations, we analyzed repeat biopsies from lorlatinib-resistant patients. Seven of 20 samples (35%) harbored compound ALK mutations, including two identified in the ENU screen. Whole-exome sequencing in three cases confirmed the stepwise accumulation of ALK mutations during sequential treatment. These results suggest that sequential ALK inhibitors can foster the emergence of compound ALK mutations, identification of which is critical to informing drug design and developing effective therapeutic strategies. Significance: Treatment with sequential first-, second-, and third-generation ALK inhibitors can select for compound ALK mutations that confer high-level resistance to ALK-targeted therapies. A more efficacious long-term strategy may be up-front treatment with a third-generation ALK inhibitor to prevent the emergence of on-target resistance. Cancer Discov; 8(6); 714-29. ©2018 AACR. This article is highlighted in the In This Issue feature, p. 663. ©2018 American Association for Cancer Research.
Trial Sequential Analysis in systematic reviews with meta-analysis.
Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian
2017-03-06
Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D2) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. 
Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that the Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
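The diversity-adjusted required information size described in this record can be sketched numerically. The following is a minimal illustration, assuming the conventional two-group sample-size formula for a binary outcome and the 1/(1 − D²) heterogeneity inflation; the function name, parameter values, and the exact variance term are ours, not taken from the paper, and real implementations differ in detail:

```python
from statistics import NormalDist

def required_information_size(p_control, rrr, alpha=0.05, beta=0.10, diversity=0.0):
    """Diversity-adjusted required information size (RIS) for a binary outcome.

    Sketch: standard two-group sample-size formula, inflated by 1/(1 - D^2),
    where `diversity` is the D^2 heterogeneity measure (0 = homogeneous).
    """
    p_exp = p_control * (1.0 - rrr)        # event proportion under assumed relative risk reduction
    p_bar = (p_control + p_exp) / 2.0      # average event proportion
    z_a = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    z_b = NormalDist().inv_cdf(1.0 - beta)
    delta = p_control - p_exp              # absolute risk difference to detect
    n_fixed = 4.0 * (z_a + z_b) ** 2 * p_bar * (1.0 - p_bar) / delta ** 2
    return n_fixed / (1.0 - diversity)     # inflate for between-trial heterogeneity

ris0 = required_information_size(0.20, 0.25)                  # homogeneous case
ris50 = required_information_size(0.20, 0.25, diversity=0.50) # D^2 = 50% doubles the RIS
```

With a 20% control event proportion and an assumed 25% relative risk reduction, the homogeneous RIS is roughly 2,400 participants; a diversity of 50% doubles it, illustrating why unadjusted meta-analyses are so often underpowered.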
40 CFR 141.802 - Coliform sampling plan.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 23 2011-07-01 2011-07-01 false Coliform sampling plan. 141.802... sampling plan. (a) Each air carrier under this subpart must develop a coliform sampling plan covering each... required actions, including repeat and follow-up sampling, corrective action, and notification of...
40 CFR 141.802 - Coliform sampling plan.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Coliform sampling plan. 141.802... sampling plan. (a) Each air carrier under this subpart must develop a coliform sampling plan covering each... required actions, including repeat and follow-up sampling, corrective action, and notification of...
Lambert, Amaury; Alexander, Helen K; Stadler, Tanja
2014-07-07
The reconstruction of phylogenetic trees based on viral genetic sequence data sequentially sampled from an epidemic provides estimates of the past transmission dynamics, by fitting epidemiological models to these trees. To our knowledge, none of the epidemiological models currently used in phylogenetics can account for recovery rates and sampling rates dependent on the time elapsed since transmission, i.e. age of infection. Here we introduce an epidemiological model where infectives leave the epidemic, by either recovery or sampling, after some random time which may follow an arbitrary distribution. We derive an expression for the likelihood of the phylogenetic tree of sampled infectives under our general epidemiological model. The analytic concept developed in this paper will facilitate inference of past epidemiological dynamics and provide an analytical framework for performing very efficient simulations of phylogenetic trees under our model. The main idea of our analytic study is that the non-Markovian epidemiological model giving rise to phylogenetic trees growing vertically as time goes by can be represented by a Markovian "coalescent point process" growing horizontally by the sequential addition of pairs of coalescence and sampling times. As examples, we discuss two special cases of our general model, described in terms of influenza and HIV epidemics. Though phrased in epidemiological terms, our framework can also be used for instance to fit macroevolutionary models to phylogenies of extant and extinct species, accounting for general species lifetime distributions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Remediation of DNAPL through Sequential In Situ Chemical Oxidation and Bioaugmentation
2009-04-01
…not oxidized by MnO2 at a significant rate; however, MnO2 reacted rapidly with oxalic acid; complete dechlorination occurred only in microcosms…
A common mechanism underlies changes of mind about decisions and confidence
van den Berg, Ronald; Anandalingam, Kavitha; Zylberberg, Ariel; Kiani, Roozbeh; Shadlen, Michael N; Wolpert, Daniel M
2016-01-01
Decisions are accompanied by a degree of confidence that a selected option is correct. A sequential sampling framework explains the speed and accuracy of decisions and extends naturally to the confidence that the decision rendered is likely to be correct. However, discrepancies between confidence and accuracy suggest that confidence might be supported by mechanisms dissociated from the decision process. Here we show that this discrepancy can arise naturally because of simple processing delays. When participants were asked to report choice and confidence simultaneously, their confidence, reaction time and a perceptual decision about motion were explained by bounded evidence accumulation. However, we also observed revisions of the initial choice and/or confidence. These changes of mind were explained by a continuation of the mechanism that led to the initial choice. Our findings extend the sequential sampling framework to vacillation about confidence and invites caution in interpreting dissociations between confidence and accuracy. DOI: http://dx.doi.org/10.7554/eLife.12192.001 PMID:26829590
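The bounded evidence-accumulation (sequential sampling) framework invoked in this record can be illustrated with a minimal drift-diffusion simulation. All parameter values below are illustrative rather than fitted, and the study's actual confidence computation (mapping accumulated evidence and elapsed time to the log-odds of being correct) is omitted for brevity:

```python
import random

def diffusion_trial(drift=0.5, bound=1.0, dt=0.001, sigma=1.0, seed=None, max_t=10.0):
    """One bounded evidence-accumulation trial: noisy evidence drifts toward
    one of two symmetric bounds; the crossing time is the decision time and
    the sign of the crossed bound is the choice (1 = consistent with drift)."""
    rng = random.Random(seed)
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    choice = 1 if x >= 0 else -1
    return choice, t

# a block of simulated trials with a fairly strong motion signal
trials = [diffusion_trial(drift=1.5, seed=i) for i in range(200)]
accuracy = sum(c == 1 for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

Stronger drift (clearer motion) raises accuracy and shortens decision times, reproducing the speed-accuracy coupling that the sequential sampling framework explains; changes of mind correspond to post-decision continuation of the same accumulation process.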
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murray, K.S.; Cauvet, D.; Lybeer, M.
1999-04-01
Anthropogenic activities related to 100 years of industrialization in the metropolitan Detroit area have significantly enriched the bed sediment of the lower reaches of the Rouge River in Cr, Cu, Fe, Ni, Pb, and Zn. These enriched elements, which may represent a threat to biota, are predominantly present in sequentially extracted reducible and oxidizable chemical phases with small contributions from residual phases. In size-fractionated samples trace metal concentrations generally increase with decreasing particle size, with the greatest contribution to this increase from the oxidizable phase. Experimental results obtained on replicate samples of river sediment demonstrate that the accuracy of the sequential extraction procedure, evaluated by comparing the sums of the three individual fractions, is generally better than 10%. Oxidizable and reducible phases therefore constitute important sources of potentially available heavy metals that need to be explicitly considered when evaluating sediment and water quality impacts on biota.
Schintu, Marco; Marrucci, Alessandro; Marras, Barbara; Galgani, Francois; Buosi, Carla; Ibba, Angelo; Cherchi, Antonietta
2016-10-15
Superficial sediments were taken at the port of Cagliari (Sardinia, Italy), which includes the oil terminal of one of the largest oil refineries in the Mediterranean. Significant trace metal concentrations were found in the whole port area. Sequential extraction of metals from the different sediment fractions (BCR method) showed a higher risk of remobilisation for Cd, which is mostly bound to the exchangeable fraction. Foraminiferal density and richness of species were variable across the study area. The living assemblages were characterized by low diversity in samples collected close to the port areas. Ammonia tepida and bolivinids, which were positively correlated with concentrations of heavy metals and organic matter content, appeared to show tolerance to the environmental disturbance. The sampling sites characterized by the highest values of biotic indices were located far from the port areas and present an epiphytic and epifaunal biocoenosis. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui
2016-06-01
Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least square support vector machine (LS_SVM) classifier to classify the EEG signals. The LS_SVM classifier classified the features which are extracted and selected from the SRS and the SFS. The experimental results show that the method achieves 99.90, 99.80 and 100 % for classification accuracy, sensitivity and specificity, respectively.
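The two-stage pipeline in this record — simple random sampling (SRS) of time-domain points as features, then sequential feature selection (SFS) — can be sketched as follows. A nearest-class-centroid rule stands in for the paper's LS-SVM classifier (to keep the sketch dependency-free), and all sizes and data are illustrative:

```python
import random

def srs_features(signal, n_samples=64, rng=None):
    """SRS stage: draw a random subset of time-domain points from one EEG
    channel and use them as that channel's feature vector."""
    rng = rng or random.Random(0)
    idx = sorted(rng.sample(range(len(signal)), n_samples))
    return [signal[i] for i in idx]

def sequential_forward_selection(X, y, k, score):
    """SFS stage (forward variant): repeatedly add the single feature index
    that most improves score(X, y, selected) until k features are chosen."""
    selected, remaining = [], set(range(len(X[0])))
    while len(selected) < k:
        best = max(remaining, key=lambda j: score(X, y, selected + [j]))
        selected.append(best)
        remaining.discard(best)
    return selected

def centroid_score(X, y, cols):
    """Training accuracy of a nearest-class-centroid rule on chosen columns."""
    sub = [[row[c] for c in cols] for row in X]
    cents = {}
    for label in set(y):
        rows = [r for r, lab in zip(sub, y) if lab == label]
        cents[label] = [sum(col) / len(rows) for col in zip(*rows)]
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    preds = [min(cents, key=lambda L: dist(r, cents[L])) for r in sub]
    return sum(p == t for p, t in zip(preds, y)) / len(y)

# toy data: class 0 = low-amplitude signals, class 1 = high-amplitude signals
rng = random.Random(1)
X = [srs_features([rng.gauss(mu, 1.0) for _ in range(256)], rng=random.Random(i))
     for i, mu in enumerate([0] * 10 + [3] * 10)]
y = [0] * 10 + [1] * 10
chosen = sequential_forward_selection(X, y, k=3, score=centroid_score)
acc = centroid_score(X, y, chosen)
```

The point of the SFS stage is dimensionality reduction: the classifier ends up operating on 3 of the 64 sampled points while retaining essentially all the class-separating information in this easy toy problem.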
NASA Astrophysics Data System (ADS)
Bilionis, I.; Koutsourelakis, P. S.
2012-05-01
The present paper proposes an adaptive biasing potential technique for the computation of free energy landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy function, under the same objective of minimizing the Kullback-Leibler divergence between appropriately selected densities. It offers rigorous convergence diagnostics even though history dependent, non-Markovian dynamics are employed. It makes use of a greedy optimization scheme in order to obtain sparse representations of the free energy function which can be particularly useful in multidimensional cases. It employs embarrassingly parallelizable sampling schemes that are based on adaptive Sequential Monte Carlo and can be readily coupled with legacy molecular dynamics simulators. The sequential nature of the learning and sampling scheme enables the efficient calculation of free energy functions parametrized by the temperature. The characteristics and capabilities of the proposed method are demonstrated in three numerical examples.
Thaithet, Sujitra; Kradtap Hartwell, Supaporn; Lapanantnoppakhun, Somchai
2017-01-01
A low-pressure separation procedure of α-tocopherol and γ-oryzanol was developed based on a sequential injection chromatography (SIC) system coupled with an ultra-short (5 mm) C-18 monolithic column, as a lower cost and more compact alternative to the HPLC system. A green sample preparation, dilution with a small amount of hexane followed by liquid-liquid extraction with 80% ethanol, was proposed. Very good separation resolution (Rs = 3.26), a satisfactory separation time (10 min) and a total run time including column equilibration (16 min) were achieved. The linear working range was found to be 0.4-40 μg with R2 being more than 0.99. The detection limits of both analytes were 0.28 μg with the repeatability within 5% RSD (n = 7). Quantitative analyses of the two analytes in vegetable oil and nutrition supplement samples, using the proposed SIC method, agree well with the results from HPLC.
Piatak, N.M.; Seal, R.R.; Sanzolone, R.F.; Lamothe, P.J.; Brown, Z.A.; Adams, M.
2007-01-01
We report results from sequential extraction experiments and the quantitative mineralogy for samples of stream sediments and mine wastes collected from metal mines. Samples were from the Elizabeth, Ely Copper, and Pike Hill Copper mines in Vermont, the Callahan Mine in Maine, and the Martha Mine in New Zealand. The extraction technique targeted the following operationally defined fractions and solid-phase forms: (1) soluble, adsorbed, and exchangeable fractions; (2) carbonates; (3) organic material; (4) amorphous iron- and aluminum-hydroxides and crystalline manganese-oxides; (5) crystalline iron-oxides; (6) sulfides and selenides; and (7) residual material. For most elements, the sum of an element from all extractions steps correlated well with the original unleached concentration. Also, the quantitative mineralogy of the original material compared to that of the residues from two extraction steps gave insight into the effectiveness of reagents at dissolving targeted phases. The data are presented here with minimal interpretation or discussion and further analyses and interpretation will be presented elsewhere.
On the role of verbalization during task set selection: switching or serial order control?
Bryck, Richard L; Mayr, Ulrich
2005-06-01
Recent task-switching work in which paper-and-pencil administered single-task lists were compared with task-alternation lists has demonstrated large increases in task-switch costs with concurrent articulatory suppression (AS), implicating a crucial role for verbalization during switching (Baddeley, Chincotta, & Adlam, 2001; Emerson & Miyake, 2003). Experiment 1 replicated this result, using computerized assessment, albeit with much smaller effect sizes than in the original reports. In Experiment 2, AS interference was reduced when a sequential cue (spatial location) that indicated the current position in the sequence of task alternations was given. Finally, in Experiment 3, switch trials and no-switch trials were compared within a block of alternating runs of two tasks. Again, AS interference was obtained mainly when the endogenous sequencing demand was high, and it was comparable for no-switch and switch trials. These results suggest that verbalization may be critical for endogenous maintenance and updating of a sequential plan, rather than exclusively for the actual switching process.
Examining age-related movement representations for sequential (fine-motor) finger movements.
Gabbard, Carl; Caçola, Priscila; Bobbio, Tatiana
2011-12-01
Theory suggests that imagined and executed movement planning relies on internal models for action. Using a chronometry paradigm to compare the movement duration of imagined and executed movements, we tested children aged 7-11 years and adults on their ability to perform sequential finger movements. Underscoring this tactic was our desire to gain a better understanding of the age-related ability to create internal models for action requiring fine-motor movements. The task required number recognition and ordering and was presented in three levels of complexity. Results for movement duration indicated that 7-year-olds and adults were different from the other groups with no statistical distinction between 9- and 11-year-olds. Correlation analysis indicated a significant relationship between imagined and executed actions. These results are the first to document the increasing convergence between imagined and executed movements in the context of fine-motor behavior; a finding that adds to our understanding of action representation in children. Copyright © 2011 Elsevier Inc. All rights reserved.
A continuous-time neural model for sequential action.
Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard
2014-11-05
Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
Integration deficiencies associated with continuous limb movement sequences in Parkinson's disease.
Park, Jin-Hoon; Stelmach, George E
2009-11-01
The present study examined the extent to which Parkinson's disease (PD) influences integration of continuous limb movement sequences. Eight patients with idiopathic PD and 8 age-matched normal subjects were instructed to perform repetitive sequential aiming movements to specified targets under three-accuracy constraints: 1) low accuracy (W = 7 cm) - minimal accuracy constraint, 2) high accuracy (W = 0.64 cm) - maximum accuracy constraint, and 3) mixed accuracy constraint - one target of high accuracy and another target of low accuracy. The characteristic of sequential movements in the low accuracy condition was mostly cyclical, whereas in the high accuracy condition it was discrete in both groups. When the accuracy constraint was mixed, the sequential movements were executed by assembling discrete and cyclical movements in both groups, suggesting that for PD patients the capability to combine discrete and cyclical movements to meet a task requirement appears to be intact. However, such functional linkage was not as pronounced as was in normal subjects. Close examination of movement from the mixed accuracy condition revealed marked movement hesitations in the vicinity of the large target in PD patients, resulting in a bias toward discrete movement. These results suggest that PD patients may have deficits in ongoing planning and organizing processes during movement execution when the tasks require to assemble various accuracy requirements into more complex movement sequences.
Designing a multiple dependent state sampling plan based on the coefficient of variation.
Yan, Aijun; Liu, Sanyang; Dong, Xiaojuan
2016-01-01
A multiple dependent state (MDS) sampling plan is developed based on the coefficient of variation of the quality characteristic which follows a normal distribution with unknown mean and variance. The optimal plan parameters of the proposed plan are solved by a nonlinear optimization model, which satisfies the given producer's risk and consumer's risk at the same time and minimizes the sample size required for inspection. The advantages of the proposed MDS sampling plan over the existing single sampling plan are discussed. Finally an example is given to illustrate the proposed plan.
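The dependence of an MDS plan on preceding lots can be made concrete with its operating characteristic (OC) function. The record's plan is variables-based (via the coefficient of variation); the sketch below is the simpler attributes analogue of an MDS rule, shown only to illustrate how acceptance borrows strength from the previous i lots — all plan parameters are illustrative:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(k + 1))

def mds_acceptance_prob(p, n, c1, c2, i):
    """OC of an attributes MDS plan: accept outright if d <= c1 defectives
    among n items; reject if d > c2; if c1 < d <= c2, accept only when the
    preceding i lots were all accepted outright (lots assumed i.i.d.)."""
    pa_out = binom_cdf(c1, n, p)          # outright acceptance probability
    p_mid = binom_cdf(c2, n, p) - pa_out  # deferred-decision probability
    return pa_out + p_mid * pa_out ** i

pa_good = mds_acceptance_prob(0.01, 50, 0, 2, 3)  # low defect rate: mostly accepted
pa_bad = mds_acceptance_prob(0.10, 50, 0, 2, 3)   # high defect rate: mostly rejected
```

The OC curve shows the plan's discrimination: with these parameters a 1% defective process is accepted roughly two-thirds of the time while a 10% defective process is almost always rejected, which is the producer's-risk/consumer's-risk trade-off the optimization model in the paper balances while minimizing n.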
Gómez-Nieto, Beatriz; Gismera, Mª Jesús; Sevilla, Mª Teresa; Procopio, Jesús R
2017-03-15
A simple method based on FAAS was developed for the sequential multi-element determination of Cu, Zn, Mn, Mg and Si in beverages and food supplements with successful results. The main absorption lines for Cu, Zn and Si and secondary lines for Mn and Mg were selected to carry out the measurements. The sample introduction was performed using a flow injection system. Using the choice of the absorption line wings, the upper limit of the linear range increased up to 110 mg L-1 for Mg, 200 mg L-1 for Si and 13 mg L-1 for Zn. The determination of the five elements was carried out, in triplicate, without the need of additional sample dilutions and/or re-measurements, using less than 3.5 mL of sample to perform the complete analysis. The LODs were 0.008 mg L-1 for Cu, 0.017 mg L-1 for Zn, 0.011 mg L-1 for Mn, 0.16 mg L-1 for Si and 0.11 mg L-1 for Mg. Copyright © 2016 Elsevier Ltd. All rights reserved.
7 CFR 42.104 - Sampling plans and defects.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Sampling plans and defects. 42.104 Section 42.104... REGULATIONS STANDARDS FOR CONDITION OF FOOD CONTAINERS Procedures for Stationary Lot Sampling and Inspection § 42.104 Sampling plans and defects. (a) Sampling plans. Sections 42.109 through 42.111 show the number...
7 CFR 42.104 - Sampling plans and defects.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Sampling plans and defects. 42.104 Section 42.104... REGULATIONS STANDARDS FOR CONDITION OF FOOD CONTAINERS Procedures for Stationary Lot Sampling and Inspection § 42.104 Sampling plans and defects. (a) Sampling plans. Sections 42.109 through 42.111 show the number...
Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted
2012-03-01
Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored 3 classification systems- two-way static, three-way static, and three-way truncated sequential sampling-at 2 sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
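The two-way static LQAS classification described in this record reduces to a binomial decision rule: call an area "high MDR" when more than d of n sampled new cases are multidrug resistant. A minimal sketch using one pair of thresholds from the paper (2% low, 10% high); the search procedure and the 10% error cap are ours for illustration:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, x) * p ** x * (1 - p) ** (n - x) for x in range(k + 1))

def lqas_errors(n, d, p_low, p_high):
    """Misclassification risks of the rule 'classify as high if count > d':
    alpha = a truly low-prevalence area is called high;
    beta  = a truly high-prevalence area is called low."""
    alpha = 1.0 - binom_cdf(d, n, p_low)
    beta = binom_cdf(d, n, p_high)
    return alpha, beta

def smallest_plan(p_low, p_high, max_err=0.10, n_max=400):
    """Smallest (n, d) keeping both risks at or below max_err."""
    for n in range(1, n_max + 1):
        for d in range(n + 1):
            a, b = lqas_errors(n, d, p_low, p_high)
            if a <= max_err and b <= max_err:
                return n, d
    return None

plan = smallest_plan(0.02, 0.10)  # low MDR TB = 2%, high MDR TB = 10%
```

Because classification needs far fewer specimens than estimation, a plan like this can flag local hot spots with sample sizes well below those of a prevalence survey; the truncated sequential variants mentioned in the record reduce the expected sample size further by stopping as soon as a decision boundary is crossed.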
Fast approximate delivery of fluence maps for IMRT and VMAT
NASA Astrophysics Data System (ADS)
Balvert, Marleen; Craft, David
2017-02-01
In this article we provide a method to generate the trade-off between delivery time and fluence map matching quality for dynamically delivered fluence maps. At the heart of our method lies a mathematical programming model that, for a given duration of delivery, optimizes leaf trajectories and dose rates such that the desired fluence map is reproduced as well as possible. We begin with the single fluence map case and then generalize the model and the solution technique to the delivery of sequential fluence maps. The resulting large-scale, non-convex optimization problem was solved using a heuristic approach. We test our method using a prostate case and a head and neck case, and present the resulting trade-off curves. Analysis of the leaf trajectories reveals that short time plans have larger leaf openings in general than longer delivery time plans. Our method allows one to explore the continuum of possibilities between coarse, large segment plans characteristic of direct aperture approaches and narrow field plans produced by sliding window approaches. Exposing this trade-off will allow for an informed choice between plan quality and solution time. Further research is required to speed up the optimization process to make this method clinically implementable.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 4 2010-01-01 2010-01-01 false Sampling. 275.11 Section 275.11 Agriculture... § 275.11 Sampling. (a) Sampling plan. Each State agency shall develop a quality control sampling plan which demonstrates the integrity of its sampling procedures. (1) Content. The sampling plan shall...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 4 2011-01-01 2011-01-01 false Sampling. 275.11 Section 275.11 Agriculture... § 275.11 Sampling. (a) Sampling plan. Each State agency shall develop a quality control sampling plan which demonstrates the integrity of its sampling procedures. (1) Content. The sampling plan shall...
NASA Astrophysics Data System (ADS)
Fabián Calderón Marín, Carlos; González González, Joaquín Jorge; Laguardia, Rodolfo Alfonso
2017-09-01
The combination of external-beam radiotherapy with systemic radiotherapy (CIERT) could be a reliable alternative for patients with multiple lesions or those for whom treatment planning is difficult because of organ-at-risk (OAR) constraints. Radiobiological models should be capable of predicting the biological response to irradiation while accounting for the differences in the temporal pattern of dose delivery between the two modalities. Two CIERT scenarios were studied: sequential combination, in which one modality is delivered after the other, and concurrent combination, in which both modalities run simultaneously. Expressions are provided for calculating the dose-response magnitudes Tumor Control Probability (TCP) and Normal Tissue Complication Probability (NTCP). General results on radiobiological modeling using the linear-quadratic (LQ) model are also discussed, as are inter-subject variation of radiosensitivity and the volume irradiation effect in CIERT. OARs should be kept under control during planning of concurrent CIERT treatment as the administered activity is increased. The formulation presented here may be used for biological evaluation of prescriptions and for biological treatment planning of CIERT schemes in clinical situations.
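As a minimal illustration of the LQ-based dose-response quantities mentioned above, a Poisson TCP for fractionated external-beam delivery can be written as TCP = exp(-N0 * SF), with SF the LQ surviving fraction. The clonogen number and radiosensitivity values below are generic textbook assumptions, not parameters from this paper, and the temporal and radionuclide-decay corrections that CIERT requires are omitted.

```python
from math import exp

def lq_surviving_fraction(n_fractions, dose_per_fraction, alpha, beta):
    """Linear-quadratic cell survival after n equal fractions of dose d:
    SF = exp(-n * (alpha*d + beta*d^2))."""
    d = dose_per_fraction
    return exp(-n_fractions * (alpha * d + beta * d * d))

def poisson_tcp(clonogen_number, n_fractions, dose_per_fraction, alpha, beta):
    """Poisson tumour control probability: TCP = exp(-N0 * SF)."""
    sf = lq_surviving_fraction(n_fractions, dose_per_fraction, alpha, beta)
    return exp(-clonogen_number * sf)

# Illustrative (assumed) values: 1e7 clonogens, alpha = 0.3 / Gy,
# alpha/beta = 10 Gy, conventional 2 Gy fractions.
tcp_30fx = poisson_tcp(1e7, 30, 2.0, 0.3, 0.03)
tcp_35fx = poisson_tcp(1e7, 35, 2.0, 0.3, 0.03)
```

Adding fractions raises TCP monotonically under this model; extending it to CIERT means replacing the per-fraction term with the biologically effective dose of each modality's delivery pattern.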
Student Academic Performance in Undergraduate Managerial-Accounting Courses
ERIC Educational Resources Information Center
Al-Twaijry, Abdulrahman Ali
2010-01-01
The author's purpose was to identify potential factors possibly affecting student performance in three sequential management-accounting courses: Managerial Accounting (MA), Cost Accounting (CA), and Advanced Managerial Accounting (AMA) within the Saudi Arabian context. The sample, which was used to test the developed hypotheses, included 312…
ASSESSING SPECIATION AND RELEASE OF HEAVY METALS FROM COAL COMBUSTION PRODUCTS
In this study, the speciation of heavy metals such as arsenic, selenium, lead, zinc and mercury in coal combustion products (CCPs) was evaluated using sequential extraction procedures. Coal fly ash, bottom ash and flue gas desulphurization (FGD) sludge samples were used in the ex...
Sequential Leaching of Chromium Contaminated Sediments - A Study Characterizing Natural Attenuation
NASA Astrophysics Data System (ADS)
Musa, D.; Ding, M.; Beroff, S.; Rearick, M.; Perkins, G.; WoldeGabriel, G. W.; Ware, D.; Harris, R.; Kluk, E.; Katzman, D.; Reimus, P. W.; Heikoop, J. M.
2015-12-01
Natural attenuation is an important process in slowing the transport of hexavalent chromium, Cr(VI), an anthropogenic environmental contaminant, either by adsorption of Cr(VI) to sediments or by reduction to the nontoxic trivalent form, Cr(III). The capacity and mechanism of attenuation are explored in this sequential leaching study of different particle-size fractions of chromium-contaminated sediments and similar uncontaminated sediments from the regional aquifer near Los Alamos, New Mexico. Under this leaching protocol, each sediment sample is split in two: one half is leached three times with a 0.1 M sodium bicarbonate/carbonate solution, while the second half is leached three times with 0.01 M nitric acid, followed by two further leaches at successively higher nitric acid concentrations. Based on the amphoteric nature of chromium, alkaline leaching is used to establish the amount of Cr(VI) sorbed on the sediment, whereas acid leaching is used to establish the amount of Cr(III). The weak acid is expected to release the attenuated anthropogenic Cr(III) without attacking Cr-bearing minerals, while the subsequent, stronger acid leaches the Cr(III) incorporated in those minerals. The efficiency and validity of the sequential leaching method are assessed by comparing the leaching behavior of bentonite and biotite samples with and without loaded Cr(VI). A 97% chromium mass balance for the leached Cr(VI)-loaded bentonite and biotite demonstrates the viability of this method for leaching contaminated sediments. By comparing leachate results for chromium and other major and trace elements between contaminated and uncontaminated sediments, the signature of anthropogenic chromium is determined. Further mineralogical characterization of the sediments provides a quantitative measure of the natural attenuation capacity for chromium.
Understanding these results is pertinent in delineating the optimal procedure for the remediation of Cr(VI) in the regional aquifer near Los Alamos.
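The mass-balance check used to validate the protocol is straightforward to compute: sum the chromium released by each sequential leach plus the final residual and compare it with the chromium originally loaded. The numbers below are hypothetical, chosen only to reproduce a 97% recovery like the one reported.

```python
def recovery_percent(leach_steps_ug, residual_ug, loaded_ug):
    """Percent of the loaded Cr accounted for by all sequential leach
    fractions plus the residual recovered at the end of the experiment."""
    recovered = sum(leach_steps_ug) + residual_ug
    return 100.0 * recovered / loaded_ug

# Hypothetical run: three bicarbonate/carbonate leaches (mostly Cr(VI)),
# then three nitric acid leaches of increasing strength (mostly Cr(III)).
alkaline = [41.0, 12.0, 3.5]
acid = [28.0, 9.0, 2.5]
balance = recovery_percent(alkaline + acid, residual_ug=1.0, loaded_ug=100.0)
```

A balance close to 100% indicates the protocol neither loses chromium nor creates it through cross-contamination between steps, which is the criterion the 97% figure above satisfies.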
Frames of reference in action plan recall: influence of hand and handedness.
Seegelke, Christian; Hughes, Charmayne M L; Wunsch, Kathrin; van der Wel, Robrecht; Weigelt, Matthias
2015-10-01
Evidence suggests that people are more likely to recall features of previous plans and use them for subsequent movements, rather than generating action plans from scratch for each movement. The information used for plan recall during object manipulation tasks is stored in extrinsic (object-centered) rather than intrinsic (body-centered) coordinates. The present study examined whether action plan recall processes are influenced by manual asymmetries. Right-handed (Experiment 1) and left-handed (Experiment 2) participants grasped a plunger from a home position using either the dominant or the non-dominant hand and placed it at one of the three target positions located at varying heights (home-to-target moves). Subsequently, they stepped sideways down from a podium (step-down podium), onto a podium (step-up podium), or without any podium present (no podium), before returning the plunger to the home platform using the same hand (target-back-to-home moves). The data show that, regardless of hand and handedness, participants grasped the plunger at similar heights during the home-to-target and target-back-to-home moves, even if they had to adopt quite different arm postures to do so. Thus, these findings indicate that the information used for plan recall processes in sequential object manipulation tasks is stored in extrinsic coordinates and in an effector-independent manner.
Polak, Tal; Watson, James E. M.; Fuller, Richard A.; Joseph, Liana N.; Martin, Tara G.; Possingham, Hugh P.; Venter, Oscar; Carwardine, Josie
2015-01-01
The Convention on Biological Diversity (CBD)'s strategic plan advocates the use of environmental surrogates, such as ecosystems, as a basis for planning where new protected areas should be placed. However, the efficiency and effectiveness of this ecosystem-based planning approach to adequately capture threatened species in protected area networks is unknown. We tested the application of this approach in Australia according to the nation's CBD-inspired goals for expansion of the national protected area system. We set targets for ecosystems (10% of the extent of each ecosystem) and threatened species (variable extents based on persistence requirements for each species) and then measured the total land area required and opportunity cost of meeting those targets independently, sequentially and simultaneously. We discover that an ecosystem-based approach will not ensure the adequate representation of threatened species in protected areas. Planning simultaneously for species and ecosystem targets delivered the most efficient outcomes for both sets of targets, while planning first for ecosystems and then filling the gaps to meet species targets was the most inefficient conservation strategy. Our analysis highlights the pitfalls of pursuing goals for species and ecosystems non-cooperatively and has significant implications for nations aiming to meet their CBD mandated protected area obligations. PMID:26064645
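The sequential-versus-simultaneous comparison above can be mimicked with a toy greedy reserve-selection heuristic (a stand-in for the actual prioritization software, which the abstract does not name): meeting ecosystem targets first and then topping up for species can cost more than planning for both target sets at once. The landscape, costs, and targets below are invented for illustration.

```python
def greedy_cover(units, targets, preselected=frozenset()):
    """Greedy minimum-cost selection: repeatedly add the planning unit
    with the best (unmet-target contribution / cost) ratio until every
    feature target is met. Illustrative heuristic only."""
    chosen = set(preselected)
    held = {f: sum(units[u][1].get(f, 0.0) for u in chosen) for f in targets}
    while any(held[f] < t for f, t in targets.items()):
        def ratio(u):
            cost, feats = units[u]
            gain = sum(min(feats.get(f, 0.0), t - held[f])
                       for f, t in targets.items() if held[f] < t)
            return gain / cost
        best = max((u for u in units if u not in chosen), key=ratio)
        chosen.add(best)
        for f, amount in units[best][1].items():
            if f in held:
                held[f] += amount
    return chosen

def total_cost(units, chosen):
    return sum(units[u][0] for u in chosen)

# Toy landscape: unit -> (cost, {feature: amount held})
units = {
    "A": (1.0, {"eco1": 10.0}),
    "B": (1.2, {"eco1": 10.0, "sp1": 5.0}),
    "C": (1.0, {"sp1": 5.0}),
}
eco_targets = {"eco1": 10.0}
both_targets = {"eco1": 10.0, "sp1": 5.0}

eco_first = greedy_cover(units, eco_targets)               # ecosystems only
sequential = greedy_cover(units, both_targets, eco_first)  # then fill species gaps
simultaneous = greedy_cover(units, both_targets)           # both sets at once
```

Here planning for ecosystems first locks in unit A and forces a later purchase of C (total cost 2.0), whereas simultaneous planning meets both targets with B alone (cost 1.2), mirroring the inefficiency of the gap-filling strategy that the study measured.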
Minkina, Tatiana; Nevidomskaya, Dina; Bauer, Tatiana; Shuvaeva, Victoria; Soldatov, Alexander; Mandzhieva, Saglara; Zubavichus, Yan; Trigub, Alexander
2018-09-01
For a correct assessment of the risk posed by polluted soil, it is crucial to establish the speciation and mobility of the contaminants. The aim of this study was to investigate the speciation and transformation of Zn in strongly technogenically transformed Spolic Technosols subject to long-term contamination in the territory of sludge collectors, by combining conventional analytical and synchrotron techniques. Sequential fractionation of Zn compounds in the studied soils revealed increasing metal mobility. Phyllosilicates and Fe and Mn hydroxides were the main stabilizers of Zn mobility. A high degree of transformation of the mineral-phase composition in the Spolic Technosols was identified by X-ray powder diffraction. Technogenic phases (Zn-containing authigenic minerals) were revealed in the Spolic Technosol samples through analysis of their Zn K-edge EXAFS and XANES spectra. In one of the samples the Zn local environment was formed predominantly by oxygen atoms, while in the other mixed Zn-S and Zn-O bonding was found. Zn speciation in the studied technogenically transformed soils was governed, first, by the composition of the pollutants that have contaminated the floodplain landscapes over a long period and, second, by the combination of physicochemical properties controlling the buffering capacity of the investigated soils. X-ray spectroscopic and X-ray powder diffraction analyses combined with sequential extraction assays are an effective tool for probing the affinity of soil components for heavy metal cations. Copyright © 2018 Elsevier B.V. All rights reserved.
Lang, Qiaolin; Yin, Long; Shi, Jianguo; Li, Liang; Xia, Lin; Liu, Aihua
2014-01-15
A novel electrochemical sequential biosensor was constructed by co-immobilizing glucoamylase (GA) and glucose oxidase (GOD) on a multi-walled carbon nanotube (MWNT)-modified glassy carbon electrode (GCE) by chemical crosslinking, with glutaraldehyde and bovine serum albumin used as the crosslinking and blocking agents, respectively. The proposed biosensor (GA/GOD/MWNTs/GCE) is capable of determining starch without auxiliary sensors such as a Clark-type oxygen sensor or an H2O2 sensor. The current decreased linearly with increasing starch concentration from 0.005% to 0.7% (w/w), with a limit of detection of 0.003% (w/w) starch. The as-fabricated sequential biosensor is applicable to determining the starch content of real samples, with results in good accordance with traditional Fehling's titration. Finally, a stable starch/O2 biofuel cell was assembled using the GA/GOD/MWNTs/GCE as bioanode and laccase/MWNTs/GCE as biocathode; it exhibited an open-circuit voltage of ca. 0.53 V and a maximum power density of 8.15 μW cm(-2) at 0.31 V, comparable with other glucose/O2-based biofuel cells reported recently. The proposed biosensor thus offers good stability in weakly acidic buffer, good operational stability, a wide linear range, and the ability to determine starch in real samples, and it serves as an effective bioanode for the biofuel cell. Copyright © 2013 Elsevier B.V. All rights reserved.
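The linear current-to-concentration relationship reported for the GA/GOD/MWNTs/GCE sensor lends itself to a simple least-squares calibration; inverting the fitted line converts a measured current into a starch reading. The current values below are made up for the reported 0.005-0.7% (w/w) range and do not come from the paper.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical calibration points: steady-state current (uA) falls
# linearly as starch (% w/w) rises, as the abstract describes.
starch = [0.005, 0.1, 0.2, 0.4, 0.7]
current = [9.98, 9.60, 9.20, 8.40, 7.20]
a, b = linear_fit(starch, current)

def starch_from_current(i_ua):
    """Invert the calibration line to estimate starch content (% w/w)."""
    return (i_ua - a) / b
```

In practice the detection limit would be estimated from the calibration too, e.g. as three times the blank noise divided by the magnitude of the slope b.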
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, J; Szczykutowicz, T; Bayouth, J
Purpose: To compare the ability of two dual-energy CT techniques, a novel split-filter single-source technique of superior temporal resolution against an established sequential-scan technique, to remove iodine contrast from images with minimal impact on CT number accuracy. Methods: A phantom containing 8 tissue substitute materials and vials of varying iodine concentrations (1.7–20.1 mg I/mL) was imaged using a Siemens Edge CT scanner. Dual-energy virtual non-contrast (VNC) images were generated using the novel split-filter technique, in which a 120kVp spectrum is filtered by tin and gold to create high- and low-energy spectra with < 1 second temporal separation between the acquisition of low- and high-energy data. Additionally, VNC images were generated with the sequential-scan technique (80 and 140kVp) for comparison. CT number accuracy was evaluated for all materials at 15, 25, and 35mGy CTDIvol. Results: The spectral separation was greater for the sequential-scan technique than for the split-filter technique, with dual-energy ratios of 2.18 and 1.26, respectively. Both techniques successfully removed iodine contrast, resulting in mean CT numbers within 60HU of 0HU (split-filter) and 40HU of 0HU (sequential-scan) for all iodine concentrations. Additionally, for iodine vials of varying diameter (2–20 mm) with the same concentration (9.9 mg I/mL), the system accurately detected iodine for all sizes investigated. Both dual-energy techniques resulted in reduced CT numbers for bone materials (by >400HU for the densest bone). Increasing the imaging dose did not improve the CT number accuracy for bone in VNC images. Conclusion: VNC images from the split-filter technique successfully removed iodine contrast. These results demonstrate a potential for improving dose calculation accuracy and reducing patient imaging dose, while achieving superior temporal resolution in comparison to sequential scans. For both techniques, inaccuracies in CT numbers for bone materials necessitate consideration for radiation therapy treatment planning.
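The iodine-removal step behind a VNC image can be sketched with a simplified two-material model: assume the soft-tissue background has the same CT number at both energies while iodine's contribution scales by the dual-energy ratio R (1.26 split-filter, 2.18 sequential-scan in this study). This is a didactic approximation, not the scanner vendor's algorithm.

```python
def vnc_hu(hu_low, hu_high, dual_energy_ratio):
    """Virtual non-contrast CT number for one voxel.

    Model: hu_low = tissue + R * iodine, hu_high = tissue + iodine,
    where R is the iodine dual-energy ratio. Solving the pair gives the
    iodine contribution, which is then subtracted out."""
    r = dual_energy_ratio
    iodine_high = (hu_low - hu_high) / (r - 1.0)  # iodine HU at high energy
    return hu_high - iodine_high

# A voxel of 50 HU tissue plus iodine adding 100 HU at the high energy:
r_seq = 2.18                      # sequential-scan dual-energy ratio
hu_high = 50.0 + 100.0
hu_low = 50.0 + r_seq * 100.0
tissue = vnc_hu(hu_low, hu_high, r_seq)
```

The smaller split-filter ratio makes the denominator r - 1 smaller, so the same measurement noise yields a noisier iodine estimate, which is consistent with the larger VNC residuals (within 60 HU vs 40 HU) reported above.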
Lee, Tae Hoon; Hwang, Soon Oh; Choi, Hyun Jong; Jung, Yunho; Cha, Sang Woo; Chung, Il-Kwun; Moon, Jong Ho; Cho, Young Deok; Park, Sang-Heum; Kim, Sun-Joo
2014-02-17
Numerous clinical trials aiming to improve the success rate of biliary access in difficult biliary cannulation (DBC) during ERCP have been reported. However, standard guidelines and sequential protocol analyses comparing the different methods remain limited. We planned to investigate a sequential protocol to facilitate selective biliary access for DBC during ERCP. This prospective clinical study enrolled 711 patients with naïve papillae at a tertiary referral center. If wire-guided cannulation was deemed to have failed according to the DBC criteria, then, following the cannulation algorithm, early precut fistulotomy (EPF; cannulation time > 5 min, papillary contacts > 5 times, or hook-nose-shaped papilla), double-guidewire cannulation (DGC; unintentional pancreatic duct cannulation ≥ 3 times), and precut after placement of a pancreatic stent (PPS; if DGC was difficult or failed) were performed sequentially. The main outcome measurements were technical success, procedure outcomes, and complications. Initially, a total of 140 (19.7%) patients with DBC underwent EPF (n = 71) or DGC (n = 69). In the DGC group, 36 patients then switched to PPS on meeting the difficulty criteria. The successful biliary cannulation rate was 97.1% (136/140; 94.4% [67/71] with EPF, 47.8% [33/69] with DGC, and 100% [36/36] with PPS; P < 0.001). The mean successful cannulation time (standard deviation) was 559.4 (412.8) seconds for EPF, 314.8 (65.2) seconds for DGC, and 706.0 (469.4) seconds for PPS (P < 0.05). The DGC group had a relatively low successful cannulation rate (47.8%) but a shorter cannulation time than the other groups, owing to early switching to PPS in difficult or failed DGC. Post-ERCP pancreatitis developed in 14 (10%) patients (9 mild, 1 moderate), which did not differ significantly among the groups (P = 0.870) or compared with the conventional group (P = 0.125). Based on this sequential protocol analysis, EPF, DGC, and PPS appear safe and feasible for DBC. The use of EPF for selected DBC criteria, DGC for unintentional pancreatic duct cannulations, and PPS for failed or difficult DGC may facilitate successful biliary cannulation.
The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging
NASA Astrophysics Data System (ADS)
Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.
2018-06-01
Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and are sensitive to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing a high-intensity ultrasonic beam is formed in the specimen at the focal point, whereas in sequential focusing only low-intensity signals from individual elements enter the sample, and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, the parallel and sequential images are expected to be identical. Here we measure the difference between these images and use it to characterise the nonlinearity of small closed fatigue cracks. In particular, we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest that the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.
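The core contrast between the two focusing modes reduces to a one-line identity: for any linear system, the response to all elements fired together equals the sum of the responses to each element fired alone, so the subtracted image is sensitive only to nonlinearity. The toy "system" functions below are illustrative stand-ins for the measured array responses, not the authors' processing chain.

```python
def field_subtraction(system, element_inputs):
    """Parallel response (all elements fired together) minus the summed
    sequential responses (each element fired alone). Vanishes for any
    linear system; nonzero where the medium responds nonlinearly,
    e.g. at a partially closed crack."""
    parallel = system(sum(element_inputs))
    sequential = sum(system(u) for u in element_inputs)
    return parallel - sequential

linear = lambda u: 3.0 * u                       # intact material
cracked = lambda u: 3.0 * u + 0.1 * u * abs(u)   # amplitude-dependent response

inputs = [0.5, 0.8, -0.3]
```

Because only the parallel transmission reaches high intensity at the focal point, scanning the focus and mapping this residual localizes the nonlinearity, which is the imaging principle described above.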
Marques, Manuel J; Bradu, Adrian; Podoleanu, Adrian Gh
2014-05-01
We report a Talbot bands-based optical coherence tomography (OCT) system capable of producing longitudinal B-scan OCT images and en-face scanning laser ophthalmoscopy (SLO) images of the human retina in-vivo. The OCT channel employs a broadband optical source and a spectrometer. A gap is created between the sample and reference beams while on their way towards the spectrometer's dispersive element to create Talbot bands. The spatial separation of the two beams facilitates collection by an SLO channel of optical power originating exclusively from the retina, deprived from any contribution from the reference beam. Three different modes of operation are presented, constrained by the minimum integration time of the camera used in the spectrometer and by the galvo-scanners' scanning rate: (i) a simultaneous acquisition mode over the two channels, useful for small size imaging, that conserves the pixel-to-pixel correspondence between them; (ii) a hybrid sequential mode, where the system switches itself between the two regimes and (iii) a sequential "on-demand" mode, where the system can be used in either OCT or SLO regimes for as long as required. The two sequential modes present varying degrees of trade-off between pixel-to-pixel correspondence and independent full control of parameters within each channel. Images of the optic nerve and fovea regions obtained in the simultaneous (i) and in the hybrid sequential mode (ii) are presented. PMID:24877006
Park, Henry S; Gross, Cary P; Makarov, Danil V; Yu, James B
2012-08-01
To evaluate the influence of immortal time bias on observational cohort studies of postoperative radiotherapy (PORT) and the effectiveness of sequential landmark analysis to account for this bias. First, we reviewed previous studies of the Surveillance, Epidemiology, and End Results (SEER) database to determine how frequently this bias was considered. Second, we used SEER to select three tumor types (glioblastoma multiforme, Stage IA-IVM0 gastric adenocarcinoma, and Stage II-III rectal carcinoma) for which prospective trials demonstrated an improvement in survival associated with PORT. For each tumor type, we calculated conditional survivals and adjusted hazard ratios of PORT vs. postoperative observation cohorts while restricting the sample at sequential monthly landmarks. Sixty-two percent of previous SEER publications evaluating PORT failed to use a landmark analysis. As expected, delivery of PORT for all three tumor types was associated with improved survival, with the largest associated benefit favoring PORT when all patients were included regardless of survival. Preselecting a cohort with a longer minimum survival sequentially diminished the apparent benefit of PORT. Although the majority of previous SEER articles do not correct for it, immortal time bias leads to altered estimates of PORT effectiveness, which are very sensitive to landmark selection. We suggest the routine use of sequential landmark analysis to account for this bias. Copyright © 2012 Elsevier Inc. All rights reserved.
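Immortal time bias and its landmark correction are easy to reproduce in simulation: give both arms identical survival, but label a patient as treated only if they survive to a randomly drawn treatment time. The naive comparison then favors the treated arm, while restricting to patients alive at a landmark beyond the latest treatment time removes the artifact. All numbers below are synthetic.

```python
import random

def simulate_cohort(n, mean_survival_mo=24.0, max_treat_mo=6.0, seed=7):
    """Synthetic cohort in which treatment truly has no effect: survival
    is exponential with the same mean in both arms, but the 'treated'
    label requires surviving until the (uniform) treatment time."""
    rng = random.Random(seed)
    cohort = []
    for _ in range(n):
        surv = rng.expovariate(1.0 / mean_survival_mo)
        treat_time = rng.uniform(0.0, max_treat_mo)
        treated = surv > treat_time and rng.random() < 0.5
        cohort.append((surv, treated))
    return cohort

def arm_means(cohort, landmark_mo=0.0):
    """Mean survival per arm among patients alive at the landmark."""
    alive = [(s, t) for s, t in cohort if s > landmark_mo]
    treated = [s for s, t in alive if t]
    control = [s for s, t in alive if not t]
    return sum(treated) / len(treated), sum(control) / len(control)

cohort = simulate_cohort(20000)
t_mean, c_mean = arm_means(cohort)
naive_gap = t_mean - c_mean            # spurious "benefit" of a null treatment
lt_mean, lc_mean = arm_means(cohort, landmark_mo=6.0)
landmark_gap = lt_mean - lc_mean       # bias removed at the landmark
```

With 20,000 simulated patients the naive gap is several months in favor of a treatment that does nothing, while the 6-month landmark gap is statistically indistinguishable from zero; sweeping the landmark, as the sequential analysis above does, shows how the apparent benefit shrinks.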
NASA Technical Reports Server (NTRS)
Kelbaugh, B. N.; Picciolo, G. L.; Chappelle, E. W.; Colburn, M. E. (Inventor)
1973-01-01
An automated apparatus is reported for sequentially assaying urine samples for the presence of bacterial adenosine triphosphate (ATP) that comprises a rotary table which carries a plurality of sample containing vials and automatically dispenses fluid reagents into the vials preparatory to injecting a light producing luciferase-luciferin mixture into the samples. The device automatically measures the light produced in each urine sample by a bioluminescence reaction of the free bacterial adenosine triphosphate with the luciferase-luciferin mixture. The light measured is proportional to the concentration of bacterial adenosine triphosphate which, in turn, is proportional to the number of bacteria present in the respective urine sample.
NASA Astrophysics Data System (ADS)
Chepigin, A.; Leonte, M.; Colombo, F.; Kessler, J. D.
2014-12-01
Dissolved methane, ethane, propane, and butane concentrations in natural waters are traditionally measured using a headspace equilibration technique and a gas chromatograph with flame ionization detector (GC-FID). While relatively simple, headspace equilibration suffers from slow equilibration times and loss of sensitivity due to concentration dilution into the pure-gas headspace. Here we present a newly developed pre-concentration system and auto-analyzer for use with a GC-FID. This system decreases the time required for each analysis by eliminating the headspace equilibration time, increases sensitivity and precision with a rapid pre-concentration step, and minimizes operator time with an autoanalyzer. In this method, samples are collected from Niskin bottles in newly developed 1 L plastic sample bags rather than glass vials. Immediately following sample collection, the sample bags are placed in an incubator and individually connected to a multiport sampling valve. Water is pumped automatically from the desired sample bag through a small (6.5 mL) Liqui-Cel® membrane contactor, where the dissolved gas is vacuum extracted and flushed directly into the GC sample loop. The gases of interest are preferentially extracted by the Liqui-Cel, yielding a natural pre-concentration effect. Daily method calibration is achieved in the field with a five-point calibration curve created by analyzing gas standard-spiked water stored in 5 L gas-impermeable bags. Our system has been shown to substantially pre-concentrate the dissolved gases of interest and to produce a highly linear response of peak area to dissolved gas concentration. The system retains the high accuracy, precision, and wide range of measurable concentrations of the headspace equilibration method while increasing sensitivity through the pre-concentration step.
The time and labor involved in the headspace equilibration method are eliminated and replaced with the immediate and automatic analysis of up to 13 sequential samples. The elapsed time between sample collection and analysis is reduced from approximately 12 hrs to < 10 min, enabling dynamic and highly resolved sampling plans.
IT strategic planning in hospitals: from theory to practice.
Jaana, Mirou; Teitelbaum, Mari; Roffey, Tyson
2014-07-01
To date, IT strategic planning has been mostly theory-based, with limited information on "best practices" in this area. This study presents the process and outcomes of IT strategic planning undertaken at a pediatric hospital (PH) in Canada. A five-stage, sequential, and incremental process was adopted. Various tools and approaches were used, including a review of existing documentation, an internal survey (n = 111), fifteen interviews, and twelve workshops. IT strategic planning was informed by 230 individuals (12 percent of the hospital community) and revealed consistency in the themes and concerns raised by participants (e.g., slow IT project delivery, lack of understanding of IT priorities, strained communication with IT staff). Mobile and remote access to patients' information and an integrated EMR were identified as top priorities. The methodology and approach used proved effective, improved internal relationships, and ensured commitment to the final IT strategic plan. Several lessons were learned, including: maintaining a dynamic approach capable of adapting to fast technology evolution; involving stakeholders and ensuring continuous communication; using effective research tools to support strategic planning; and grounding the process and final product in existing models. This study contributes to the development of "best practices" in IT strategic planning and illustrates "how" to apply the theoretical principles in this area. This is especially important as IT leaders are encouraged to integrate evidence-based management into their decision making and practices. The methodology and lessons learned may inform practitioners in other hospitals planning to engage in IT strategic planning in the future.
Cain, Joanna M; Felice, Marianne E; Ockene, Judith K; Milner, Robert J; Congdon, John L; Tosi, Stephen; Thorndyke, Luanne E
2018-03-01
Medical school faculty are aging, but few academic health centers are adequately prepared with policies, programs, and resources (PPR) to assist late-career faculty. The authors sought to examine cultural barriers to successful retirement and create alignment between individual and institutional needs and tasks through PPR that embrace the contributions of senior faculty while enabling retirement transitions at the University of Massachusetts Medical School, 2013-2017. Faculty 50 or older were surveyed, programs at other institutions and from the literature (multiple fields) were reviewed, and senior faculty and leaders, including retired faculty, were engaged to develop and implement PPR. Cultural barriers were found to be significant, and a multipronged, multiyear strategy to address these barriers, which sequentially added PPR to support faculty, was put in place. A comprehensive framework of sequenced PPR was developed to address the needs and tasks of late-career transitions within three distinct phases: pre-retirement, retirement, and post-retirement. This sequential introduction approach has led to important outcomes for all three of the retirement phases, including reduction of cultural barriers, a policy that has been useful in assessing viability of proposed phased retirement plans, transparent and realistic discussions about financial issues, and consideration of roles that retired faculty can provide. The authors are tracking the issues mentioned in consultations and efficacy of succession planning, and will be resurveying faculty to further refine their work. This framework approach could serve as a template for other academic health centers to address late-career faculty development.