An adaptive two-stage sequential design for sampling rare and clustered populations
Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.
2008-01-01
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
Wason, James M. S.; Mander, Adrian P.
2012-01-01
Two-stage designs are commonly used for Phase II trials. Optimal two-stage designs have the lowest expected sample size for a specific treatment effect, for example, the null value, but can perform poorly if the true treatment effect differs. Here we introduce a design for continuous treatment responses that minimizes the maximum expected sample size across all possible treatment effects. The proposed design performs well for a wider range of treatment effects and so is useful for Phase II trials. We compare the design to a previously used optimal design and show it has superior expected sample size properties. PMID:22651118
A Bayesian-frequentist two-stage single-arm phase II clinical trial design.
Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen
2012-08-30
It is well known that frequentist and Bayesian clinical trial designs each have their own advantages and disadvantages. To inherit the better properties of both, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and early rejection of the null hypothesis (H0). Measures of the design's properties (for example, the probability of early trial termination and the expected sample size) are derived under both frequentist and Bayesian settings. Moreover, under the Bayesian setting, the upper and lower boundaries are determined from the predictive probability of trial success. Given a beta prior and a sample size for stage I, we derived Bayesian Type I and Type II error rates based on the marginal distribution of the stage I responses. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.
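The stage-1 calculus in such Bayesian-frequentist hybrids rests on the beta-binomial marginal of the stage-1 response count. A minimal sketch in Python; the stopping boundaries `low` and `high` below are illustrative placeholders, not the boundaries derived in the paper:

```python
from math import comb, lgamma, exp

def beta_binom_pmf(k, n, a, b):
    """Marginal P(X = k) at stage 1 when X | p ~ Binomial(n, p) and p ~ Beta(a, b)."""
    log_beta = lambda x, y: lgamma(x) + lgamma(y) - lgamma(x + y)
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

def early_termination_prob(n1, low, high, a, b):
    """Probability that the trial stops after stage 1 under hypothetical
    boundaries: stop for futility if the count k <= low, stop for efficacy
    if k > high; evaluated under the prior's marginal distribution."""
    return sum(beta_binom_pmf(k, n1, a, b)
               for k in range(n1 + 1) if k <= low or k > high)
```

With a uniform Beta(1, 1) prior the marginal is uniform over 0..n1, which gives a quick sanity check on both functions.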
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable with high variability. This can cause problems when applying for funding, as the cost will generally also be highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We also assess the impact of delay between recruitment and assessment, as well as of unknown variance, on the drop-the-losers designs.
Li, Dalin; Lewinger, Juan Pablo; Gauderman, William J; Murcray, Cassandra Elizabeth; Conti, David
2011-12-01
Variants identified in recent genome-wide association studies based on the common-disease common-variant hypothesis are far from fully explaining the heritability of complex traits. Rare variants may, in part, explain some of the missing heritability. Here, we explored the advantage of extreme phenotype sampling in rare-variant analysis and refined this design framework for future large-scale association studies on quantitative traits. We first proposed a power calculation approach for a likelihood-based analysis method. We then used this approach to demonstrate the potential advantages of extreme phenotype sampling for rare variants. Next, we discussed how this design can influence future sequencing-based association studies from a cost-efficiency (with the phenotyping cost included) perspective. Moreover, we discussed the potential of a two-stage design with the extreme sample as the first stage and the remaining nonextreme subjects as the second stage. We demonstrated that this two-stage design is a cost-efficient alternative to the one-stage cross-sectional design or traditional two-stage design. We then discussed the analysis strategies for this extreme two-stage design and proposed a corresponding design optimization procedure. To address many practical concerns, for example, measurement error or phenotypic heterogeneity at the very extremes, we examined an approach in which individuals with very extreme phenotypes are discarded. We demonstrated that even with a substantial proportion of these extreme individuals discarded, an extreme-based sampling can still be more efficient. Finally, we expanded the current analysis and design framework to accommodate the CMC approach, in which multiple rare variants in the same gene region are analyzed jointly. © 2011 Wiley Periodicals, Inc.
Optimality, sample size, and power calculations for the sequential parallel comparison design.
Ivanova, Anastasia; Qaqish, Bahjat; Schoenfeld, David A
2011-10-15
The sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials in therapeutic areas where high placebo response is a concern. The trial is run in two stages, and subjects are randomized into three groups: (i) placebo in both stages; (ii) placebo in the first stage and drug in the second stage; and (iii) drug in both stages. We consider the case of binary response data (response/no response). In the SPCD, all first-stage and second-stage data from placebo subjects who failed to respond in the first stage of the trial are utilized in the efficacy analysis. We develop one- and two-degree-of-freedom score tests for treatment effect in the SPCD. We give formulae for asymptotic power and for sample size computations and evaluate their accuracy via simulation studies. We compute the optimal allocation ratio between drug and placebo in stage 1 for the SPCD to determine, from a theoretical viewpoint, whether a single-stage design, a two-stage design with placebo only in the first stage, or a two-stage design is the best design for a given set of response rates. As response rates are not known before the trial, a two-stage approach with allocation to active drug in both stages is a robust design choice. Copyright © 2011 John Wiley & Sons, Ltd.
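The SPCD efficacy analysis pools the stage-1 comparison (all subjects) with the stage-2 comparison among stage-1 placebo non-responders. A minimal sketch of a weighted risk-difference estimate for binary outcomes; the function name, the example counts, and the 50/50 weight are illustrative assumptions, not values from the paper:

```python
def spcd_effect(n_drug1, r_drug1, n_pbo1, r_pbo1,
                n_drug2, r_drug2, n_pbo2, r_pbo2, w=0.5):
    """Weighted SPCD treatment-effect estimate for binary outcomes.

    Stage 1 compares drug vs placebo among all randomized subjects;
    stage 2 compares drug vs placebo among stage-1 placebo non-responders.
    The weight w is a design choice (0.5 is an illustrative default)."""
    d1 = r_drug1 / n_drug1 - r_pbo1 / n_pbo1   # stage-1 risk difference
    d2 = r_drug2 / n_drug2 - r_pbo2 / n_pbo2   # stage-2 risk difference
    return w * d1 + (1 - w) * d2
```

For example, 40/100 vs 30/100 responders in stage 1 and 12/35 vs 5/35 among re-randomized placebo non-responders pool to a 0.15 risk difference with equal weights.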
SEMIPARAMETRIC ADDITIVE RISKS REGRESSION FOR TWO-STAGE DESIGN SURVIVAL STUDIES
Li, Gang; Wu, Tong Tong
2011-01-01
In this article we study a semiparametric additive risks model (McKeague and Sasieni (1994)) for two-stage design survival data where accurate information is available only on second stage subjects, a subset of the first stage study. We derive two-stage estimators by combining data from both stages. Large sample inferences are developed. As a by-product, we also obtain asymptotic properties of the single stage estimators of McKeague and Sasieni (1994) when the semiparametric additive risks model is misspecified. The proposed two-stage estimators are shown to be asymptotically more efficient than the second stage estimators. They also demonstrate smaller bias and variance for finite samples. The developed methods are illustrated using small intestine cancer data from the SEER (Surveillance, Epidemiology, and End Results) Program. PMID:21931467
Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.
2011-01-01
Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations, which varied in density and degree of spatial clustering. Because of logistics and costs of large river sampling and spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, designs did differ in the rate at which occupied quadrats were encountered. Occupied units had a higher probability of selection using adaptive designs than conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between the quadrats. Adaptive and two-stage designs tended to reduce distance between sampling units, and thus performed better when distance travelled was considered. Based on the comparisons, we provide general recommendations on the sampling designs for the freshwater mussels in the UMR, and presumably other large rivers.
Estimating accuracy of land-cover composition from two-stage cluster sampling
Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.
2009-01-01
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class) for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
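The four accuracy measures can be illustrated with a plain plug-in computation over sampled units. Note this is a simplified sketch: the paper's design-based estimators additionally weight by the two-stage inclusion probabilities, which this version omits:

```python
from math import sqrt

def composition_accuracy(map_p, ref_p):
    """MD, MAD, RMSE and Pearson correlation between mapped and reference
    land-cover proportions over a set of sampled spatial units."""
    n = len(map_p)
    dev = [m - r for m, r in zip(map_p, ref_p)]
    md = sum(dev) / n                                # mean deviation (bias)
    mad = sum(abs(d) for d in dev) / n               # mean absolute deviation
    rmse = sqrt(sum(d * d for d in dev) / n)         # root mean square error
    mm, mr = sum(map_p) / n, sum(ref_p) / n
    cov = sum((m - mm) * (r - mr) for m, r in zip(map_p, ref_p))
    var_m = sum((m - mm) ** 2 for m in map_p)
    var_r = sum((r - mr) ** 2 for r in ref_p)
    corr = cov / sqrt(var_m * var_r)
    return md, mad, rmse, corr
```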
Residents Living in Residential Care Facilities: United States, 2010
... NSRCF used a stratified two-stage probability sample design. The first stage was the selection of RCFs ... was 99%. A detailed description of NSRCF sampling design, data collection, and procedures is provided both in ...
Guided transect sampling - a new design combining prior information and field surveying
Anna Ringvall; Goran Stahl; Tomas Lamas
2000-01-01
Guided transect sampling is a two-stage sampling design in which prior information is used to guide the field survey in the second stage. In the first stage, broad strips are randomly selected and divided into grid-cells. For each cell a covariate value is estimated from remote sensing data, for example. The covariate is the basis for subsampling of a transect through...
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.
2004-01-01
Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information needed to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
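A rough a priori feel for the cost of within-cluster correlation can be obtained from the standard Kish design-effect approximation (a textbook formula, offered here for orientation; it is not the evaluation protocol proposed in the paper):

```python
def design_effect(mean_cluster_size, icc):
    """Kish design effect for equal-sized clusters: the factor by which
    within-cluster correlation (icc) inflates the variance relative to
    simple random sampling of the same number of elements."""
    return 1 + (mean_cluster_size - 1) * icc

def effective_sample_size(n, mean_cluster_size, icc):
    """Number of independent observations a clustered sample of n is worth."""
    return n / design_effect(mean_cluster_size, icc)
```

For example, 500 reference pixels in clusters of 10 with an intracluster correlation of 0.1 carry roughly the information of 263 independent pixels.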
Ivanova, Anastasia; Tamura, Roy N
2015-12-01
A new clinical trial design, designated the two-way enriched design (TED), is introduced, which augments the standard randomized placebo-controlled trial with second-stage enrichment designs in placebo non-responders and drug responders. The trial is run in two stages. In the first stage, patients are randomized between drug and placebo. In the second stage, placebo non-responders and drug responders are each re-randomized between drug and placebo. All first-stage data, and second-stage data from first-stage placebo non-responders and first-stage drug responders, are utilized in the efficacy analysis. The authors develop one-, two- and three-degree-of-freedom score tests for treatment effect in the TED and give formulae for asymptotic power and for sample size computations. The authors compute the optimal allocation ratio between drug and placebo in the first stage for the TED and compare the operating characteristics of the design to the standard parallel clinical trial, placebo lead-in and randomized withdrawal designs. Two motivating examples from different disease areas are presented to illustrate the possible design considerations. © The Author(s) 2011.
Whitehead, John; Valdés-Márquez, Elsa; Lissmats, Agneta
2009-01-01
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail. Copyright 2008 John Wiley & Sons, Ltd.
A modified varying-stage adaptive phase II/III clinical trial design.
Dong, Gaohong; Vandemeulebroecke, Marc
2016-07-01
Conventionally, adaptive phase II/III clinical trials are carried out with a strict two-stage design. Recently, a varying-stage adaptive phase II/III clinical trial design has been developed. In this design, following the first stage, an intermediate stage can be adaptively added to obtain more data, so that a more informative decision can be made. Therefore, the number of further investigational stages is determined based upon data accumulated to the interim analysis. This design considers two plausible study endpoints, with one of them initially designated as the primary endpoint. Based on interim results, the other endpoint can be switched to serve as the primary endpoint. However, in many therapeutic areas, the primary study endpoint is well established. Therefore, we modify this design to consider one study endpoint only, so that it may be more readily applicable in real clinical trial designs. Our simulations show that, as with the original design, this modified design controls the Type I error rate, and that design parameters such as the threshold probability for the two-stage setting and the alpha allocation ratio in the two-stage versus the three-stage setting have a great impact on the design characteristics. However, this modified design requires a larger sample size for the initial stage, and the probability of futility becomes much higher when the threshold probability for the two-stage setting gets smaller. Copyright © 2016 John Wiley & Sons, Ltd.
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.
Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping
2015-06-07
Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology trials, to reduce the number of patients placed on ineffective experimental therapies. Recently, Koyama and Chen (2008) discussed how to conduct proper inference for such studies, because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies in which the actual second stage sample sizes differ from planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the resulting estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrated the two methods in a real data setting.
Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E
2016-06-01
Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
NASA Technical Reports Server (NTRS)
Naesset, Erik; Gobakken, Terje; Bollandsas, Ole Martin; Gregoire, Timothy G.; Nelson, Ross; Stahl, Goeran
2013-01-01
Airborne scanning LiDAR (Light Detection and Ranging) has emerged as a promising tool to provide auxiliary data for sample surveys aiming at estimation of above-ground tree biomass (AGB), with potential applications in REDD forest monitoring. For larger geographical regions such as counties, states or nations, it is not feasible to collect airborne LiDAR data continuously ("wall-to-wall") over the entire area of interest. Two-stage cluster survey designs have therefore been demonstrated by which LiDAR data are collected along selected individual flight-lines treated as clusters and with ground plots sampled along these LiDAR swaths. Recently, analytical AGB estimators and associated variance estimators that quantify the sampling variability have been proposed. Empirical studies employing these estimators have shown a seemingly equal or even larger uncertainty of the AGB estimates obtained with extensive use of LiDAR data to support the estimation as compared to pure field-based estimates employing estimators appropriate under simple random sampling (SRS). However, comparison of uncertainty estimates under SRS and sophisticated two-stage designs is complicated by large differences in the designs and assumptions. In this study, probability-based principles of estimation and inference were followed. We assumed designs of a field sample and a LiDAR-assisted survey of Hedmark County (HC) (27,390 km2), Norway, considered to be more comparable than those assumed in previous studies. The field sample consisted of 659 systematically distributed National Forest Inventory (NFI) plots, and the airborne scanning LiDAR data were collected along 53 parallel flight-lines flown over the NFI plots. We compared AGB estimates based on the field survey only assuming SRS against corresponding estimates assuming two-phase (double) sampling with LiDAR and employing model-assisted estimators.
We also compared AGB estimates based on the field survey only assuming two-stage sampling (the NFI plots being grouped in clusters) against corresponding estimates assuming two-stage sampling with the LiDAR and employing model-assisted estimators. For each of the two comparisons, the standard errors of the AGB estimates were consistently lower for the LiDAR-assisted designs. The overall reduction of the standard errors in the LiDAR-assisted estimation was around 40-60% compared to the pure field survey. We conclude that the previously proposed two-stage model-assisted estimators are inappropriate for surveys with unequal lengths of the LiDAR flight-lines and new estimators are needed. Some options for design of LiDAR-assisted sample surveys under REDD are also discussed, which capitalize on the flexibility offered when the field survey is designed as an integrated part of the overall survey design as opposed to previous LiDAR-assisted sample surveys in the boreal and temperate zones which have been restricted by the current design of an existing NFI.
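The model-assisted logic described above can be illustrated with the generic difference estimator of a population mean (a simplified single-stage sketch under assumed inputs; the two-stage variance estimators discussed in the paper are considerably more involved):

```python
def model_assisted_mean(pred_pop, y_sample, pred_sample):
    """Model-assisted (difference) estimator of a population mean:
    the mean model prediction over every population unit (e.g.
    LiDAR-predicted AGB) plus the mean field-plot residual, which
    protects the estimate against systematic model bias."""
    n = len(y_sample)
    resid_mean = sum(y - yh for y, yh in zip(y_sample, pred_sample)) / n
    return sum(pred_pop) / len(pred_pop) + resid_mean
```

If the model over- or under-predicts on the field plots, the residual term shifts the estimate accordingly; with unbiased predictions it reduces to the mean prediction.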
A two-stage design for multiple testing in large-scale association studies.
Wen, Shu-Hui; Tzeng, Jung-Ying; Kao, Jau-Tsuen; Hsiao, Chuhsing Kate
2006-01-01
Modern association studies often involve a large number of markers and hence may encounter the problem of testing multiple hypotheses. Traditional procedures are usually over-conservative and have low power to detect mild genetic effects. From the design perspective, we propose a two-stage selection procedure to address this concern. Our main principle is to reduce the total number of tests by removing clearly unassociated markers in the first-stage test. Next, conditional on the findings of the first stage, which uses a less stringent nominal level, a more conservative test is conducted in the second stage using the augmented data and the data from the first stage. Previous studies have suggested using independent samples to avoid inflated errors. However, we found that, after accounting for the dependence between these two samples, the true discovery rate increases substantially. In addition, the cost of genotyping can be greatly reduced via this approach. Results from a study of hypertriglyceridemia and simulations suggest the two-stage method has a higher overall true positive rate (TPR) with a controlled overall false positive rate (FPR) when compared with single-stage approaches. We also report the analytical form of its overall FPR, which may be useful in guiding study design to achieve a high TPR while retaining the desired FPR.
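A two-stage screen of this general kind can be sketched with z-statistics: a liberal stage-1 filter followed by a joint test that pools evidence from both stages. The sample-size-weighted combination below is a common generic choice and is an assumption for illustration, not the exact procedure of the paper:

```python
from math import sqrt, erf

def p_two_sided(z):
    """Two-sided p-value from a standard normal z-score."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def two_stage_scan(z1, z2, n1, n2, alpha1):
    """Stage-1 screen at a liberal level alpha1, then a joint statistic
    for surviving markers that weights each stage's z-score by the
    square root of its sample-size fraction. Returns a dict mapping the
    indices of carried-forward markers to their joint z-scores."""
    w1, w2 = sqrt(n1 / (n1 + n2)), sqrt(n2 / (n1 + n2))
    carried = [i for i, z in enumerate(z1) if p_two_sided(z) < alpha1]
    return {i: w1 * z1[i] + w2 * z2[i] for i in carried}
```

With equal stage sizes, a marker with z-scores 3.0 and 2.0 passes a 0.05 screen and receives joint score 5/sqrt(2), while a marker with stage-1 z-score 0.5 is filtered out before any stage-2 test is spent on it.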
Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure
Salehi, M.; Smith, D.R.
2005-01-01
Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such populations, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of the population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence or absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive the estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty in survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling, but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations to approximate sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures that we are interested in are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population is the most important determinant of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV compared to SRS and efficiency was as high as 2 (i.e., variance from TSSAV was half that of SRS).
We have found that design performance, especially for adaptive designs, is often case-specific. Efficiency of adaptive designs is especially sensitive to spatial distribution. We recommend that simulations tailored to the application of interest are highly useful for evaluating designs in preparation for sampling rare and clustered populations.
Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H
2015-11-30
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
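For balanced data, the two-stage approach reduces to an ordinary t-test on cluster means, as in this minimal sketch (variance components, cluster counts, and cluster size are illustrative; the inverse-variance weighting recommended above for unbalanced data is not shown):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
clusters_per_arm, cluster_size = 8, 20
sigma_b, sigma_w = 0.5, 1.0      # between- and within-cluster SDs (assumed)

def simulate_arm(arm_effect):
    # Random cluster intercepts plus within-cluster noise.
    b = rng.normal(0.0, sigma_b, clusters_per_arm)
    y = b[:, None] + rng.normal(arm_effect, sigma_w, (clusters_per_arm, cluster_size))
    return y.mean(axis=1)        # stage 1: collapse each cluster to its mean

means_ctrl = simulate_arm(0.0)
means_trt = simulate_arm(0.0)    # null scenario: no treatment effect

# Stage 2: ordinary two-sample t-test on the 8 + 8 cluster means.
t, p = stats.ttest_ind(means_trt, means_ctrl)
print(round(p, 3))
```

With equal cluster sizes this two-stage test is exact, consistent with the analytical result stated in the abstract; the complications arise only once cluster sizes differ.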
Development of a Multiple-Stage Differential Mobility Analyzer (MDMA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Da-Ren; Cheng, Mengdawn
2007-01-01
A new DMA column has been designed with the capability of simultaneously extracting monodisperse particles of different sizes in multiple stages. We call this design a multistage DMA, or MDMA. A prototype MDMA has been constructed and experimentally evaluated in this study. The new column enables the fast measurement of particles in a wide size range, while preserving the powerful particle classification function of a DMA. The prototype MDMA has three sampling stages, capable of classifying monodisperse particles of three different sizes simultaneously. The scanning voltage operation of a DMA can be applied to this new column. Each stage of the MDMA column covers a fraction of the entire particle size range to be measured. The size fractions covered by two adjacent stages of the MDMA are designed to overlap somewhat. This arrangement reduces the scanning voltage range and thus the cycling time of the measurement. The modular sampling stage design of the MDMA allows the flexible configuration of desired particle classification lengths and a variable number of stages in the MDMA. The design of our MDMA also permits operation at high sheath flow, enabling high-resolution particle size measurement and/or reduction of the lower sizing limit. Using the tandem DMA technique, the performance of the MDMA, i.e., sizing accuracy, resolution, and transmission efficiency, was evaluated at different ratios of aerosol and sheath flowrates. Two aerosol sampling schemes were investigated. One was to extract aerosol flows at an evenly partitioned flowrate at each stage, and the other was to extract aerosol at the same rate as the polydisperse aerosol flowrate at each stage. We detail the prototype design of the MDMA and the evaluation results on the transfer functions of the MDMA at different particle sizes and operational conditions.
NASA Technical Reports Server (NTRS)
Kalton, G.
1983-01-01
A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratio of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, they also determine the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient.
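The classical optimum-allocation result behind such cost analyses can be sketched directly (this is the standard textbook formula for two-stage designs, not necessarily Kalton's exact expressions): for a cost function C = c1·a + c2·a·b, with a PSUs, b elements per PSU, and intraclass correlation ρ, the variance-minimizing subsample size is b_opt = sqrt((c1/c2)·(1-ρ)/ρ). All numbers below are hypothetical.

```python
import math

def optimal_subsample_size(c1, c2, rho):
    """Optimum number of elements per PSU minimizing variance for fixed cost
    under the simple two-stage cost model C = c1*a + c2*a*b (classical result)."""
    return math.sqrt((c1 / c2) * (1 - rho) / rho)

# Illustrative numbers: preparing a PSU costs 100x one interview, and the
# intraclass correlation of annoyance within a PSU is 0.04.
b = optimal_subsample_size(c1=400.0, c2=4.0, rho=0.04)
print(round(b))  # ~49 interviews per sampled PSU
```

The qualitative lesson matches the survey context: the more clustered the annoyance responses (larger ρ), the fewer interviews should be taken per PSU and the more PSUs should be sampled.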
A note on sample size calculation for mean comparisons based on noncentral t-statistics.
Chow, Shein-Chung; Shao, Jun; Wang, Hansheng
2002-11-01
One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
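The equality-testing case for the two-sample parallel design can be sketched with the noncentral t-distribution (the iterative search and function name are ours, not the paper's notation; `delta` is the clinically meaningful mean difference and `sigma` the common SD):

```python
import math
from scipy import stats

def n_per_group(delta, sigma=1.0, alpha=0.05, power=0.80):
    """Smallest per-group n for a two-sided two-sample t-test of equality,
    using the exact noncentral t-distribution for the power calculation."""
    n = 2
    while True:
        df = 2 * n - 2
        ncp = (delta / sigma) / math.sqrt(2.0 / n)      # noncentrality parameter
        tcrit = stats.t.ppf(1 - alpha / 2, df)
        # Power = P(|T| > tcrit) under noncentrality ncp.
        achieved = (1 - stats.nct.cdf(tcrit, df, ncp)
                    + stats.nct.cdf(-tcrit, df, ncp))
        if achieved >= power:
            return n
        n += 1

print(n_per_group(delta=0.5))  # 64 per group for a standardized difference of 0.5
```

Note that the exact noncentral-t calculation typically demands one or two more subjects per group than the familiar normal approximation, which is precisely why formulas of this type are preferred at the planning stage.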
Chen, Zhijian; Craiu, Radu V; Bull, Shelley B
2014-11-01
In focused studies designed to follow up associations detected in a genome-wide association study (GWAS), investigators can proceed to fine-map a genomic region by targeted sequencing or dense genotyping of all variants in the region, aiming to identify a functional sequence variant. For the analysis of a quantitative trait, we consider a Bayesian approach to fine-mapping study design that incorporates stratification according to a promising GWAS tag SNP in the same region. Improved cost-efficiency can be achieved when the fine-mapping phase incorporates a two-stage design, with identification of a smaller set of more promising variants in a subsample taken in stage 1, followed by their evaluation in an independent stage 2 subsample. To avoid the potential negative impact of genetic model misspecification on inference we incorporate genetic model selection based on posterior probabilities for each competing model. Our simulation study shows that, compared to simple random sampling that ignores genetic information from GWAS, tag-SNP-based stratified sample allocation methods reduce the number of variants continuing to stage 2 and are more likely to promote the functional sequence variant into confirmation studies. © 2014 WILEY PERIODICALS, INC.
Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S
2015-11-10
In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed in a stratified random sample of the first sample. To estimate infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on expectations of the proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
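Under the stated assumptions (100% specificity for both tests and conditional independence given infection status), the moment equations have a simple closed-form solution, sketched below for the case where both tests are applied to the same sample; the proportions are hypothetical, not the Ethiopian data:

```python
def mom_prevalence(p1, p2, p12):
    """Method-of-moments estimates assuming 100% specificity for both tests
    and conditional independence given true infection status, so that
      E[p1] = pi*se1,  E[p2] = pi*se2,  E[p12] = pi*se1*se2,
    where p1, p2 are marginal positive proportions and p12 the joint one.
    Returns (prevalence, sensitivity1, sensitivity2)."""
    return p1 * p2 / p12, p12 / p2, p12 / p1

# Hypothetical proportions: test 1 positive in 16%, test 2 positive in 14%,
# both positive in 11.2% of subjects.
pi, se1, se2 = mom_prevalence(0.16, 0.14, 0.112)
print(pi, se1, se2)  # prevalence 0.2, sensitivities 0.8 and 0.7
```

With 100% specificity every positive is a true positive, which is what makes the three moment equations solvable in closed form; relaxing that assumption is exactly the sensitivity analysis the abstract describes.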
A Bayesian predictive two-stage design for phase II clinical trials.
Sambucini, Valeria
2008-04-15
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined by specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value, and by assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
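The posterior quantity that STD-style designs threshold can be computed in closed form with a Beta prior. This is an illustrative sketch of that building block only, not Sambucini's predictive criterion itself (which additionally averages over future data); the function name and numbers are ours:

```python
from scipy import stats

def post_prob_exceeds(x, n, p0, a=1.0, b=1.0):
    """Posterior P(p > p0 | x responses in n patients) under a Beta(a, b) prior;
    by conjugacy the posterior is Beta(a + x, b + n - x)."""
    return 1 - stats.beta.cdf(p0, a + x, b + n - x)

# Illustrative: 7/10 responses, uniform prior, target response rate 50%.
print(round(post_prob_exceeds(7, 10, 0.5), 4))  # 0.8867
```

A design then asks, for each candidate sample size, whether such posterior probabilities are likely to clear a pre-set threshold, with the "likely" part evaluated under the prior predictive distribution of the data.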
Compact cold stage for micro-computerized tomography imaging of chilled or frozen samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hullar, Ted; Anastasio, Cort, E-mail: canastasio@ucdavis.edu; Paige, David F.
2014-04-15
High resolution X-ray microCT (computerized tomography) can be used to image a variety of objects, including temperature-sensitive materials. In cases where the sample must be chilled or frozen to maintain sample integrity, either the microCT machine itself must be placed in a refrigerated chamber, or a relatively expensive commercial cold stage must be purchased. We describe here the design and construction of a low-cost custom cold stage suitable for use in a microCT imaging system. Our device uses a boron nitride sample holder, two-stage Peltier cooler, fan-cooled heat sink, and electronic controller to maintain sample temperatures as low as −25 °C ± 0.2 °C for the duration of a tomography acquisition. The design does not require modification to the microCT machine, and is easily installed and removed. Our custom cold stage represents a cost-effective solution for refrigerating CT samples for imaging, and is especially useful for shared equipment or machines unsuitable for cold room use.
Hagel, Brent E
2011-04-01
To provide an overview of the two-stage case-control study design and its potential application to ED injury surveillance data and to apply this approach to published ED data on the relation between brain injury and bicycle helmet use. Relevant background is presented on injury aetiology and case-control methodology with extension to the two-stage case-control design in the context of ED injury surveillance. The design is then applied to data from a published case-control study of the relation between brain injury and bicycle helmet use with motor vehicle involvement considered as a potential confounder. Taking into account the additional sampling at the second stage, the adjusted and corrected odds ratio and 95% confidence interval for the brain injury-helmet use relation is presented and compared with the estimate from the entire original dataset. Contexts where the two-stage case-control study design might be most appropriately applied to ED injury surveillance data are suggested. The adjusted odds ratio for the relation between brain injury and bicycle helmet use based on all data (n = 2833) from the original study was 0.34 (95% CI 0.25 to 0.46) compared with an estimate from a two-stage case-control design of 0.35 (95% CI 0.25 to 0.48) using only a fraction of the original subjects (n = 480). Application of the two-stage case-control study design to ED injury surveillance data has the potential to dramatically reduce study time and resource costs with acceptable losses in statistical efficiency.
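The correction for second-stage sampling can be sketched directly. All counts and sampling fractions below are hypothetical, not the Hagel study's data: since each expected stage-2 cell count is (sampling fraction) × (full surveillance count), the crude stage-2 odds ratio recovers the full-data odds ratio after multiplying by f10·f01/(f11·f00):

```python
def corrected_or(stage2, f):
    """Crude odds ratio from stage-2 counts, corrected for cell-specific
    second-stage sampling fractions. Dict keys are (case, exposed) in {0, 1}."""
    crude = (stage2[1, 1] * stage2[0, 0]) / (stage2[1, 0] * stage2[0, 1])
    return crude * (f[1, 0] * f[0, 1]) / (f[1, 1] * f[0, 0])

# Hypothetical full ED surveillance data: 30 exposed / 120 unexposed cases,
# 300 exposed / 600 unexposed controls -> true OR = (30*600)/(120*300) = 0.5.
# Second stage samples the cells at different fractions (expected counts shown).
stage2 = {(1, 1): 30, (1, 0): 60, (0, 1): 60, (0, 0): 150}
f = {(1, 1): 1.0, (1, 0): 0.5, (0, 1): 0.2, (0, 0): 0.25}
print(corrected_or(stage2, f))  # recovers the full-data OR of 0.5
```

This is the mechanism that lets the published helmet analysis reproduce the full-data estimate (0.34) almost exactly (0.35) from a fraction of the subjects.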
Two-stage phase II oncology designs using short-term endpoints for early stopping.
Kunz, Cornelia U; Wason, James Ms; Kieser, Meinhard
2017-08-01
Phase II oncology trials are conducted to evaluate whether the tumour activity of a new treatment is promising enough to warrant further investigation. The most commonly used approach in this context is a two-stage single-arm design with binary endpoint. As for all designs with interim analysis, its efficiency strongly depends on the relation between recruitment rate and follow-up time required to measure the patients' outcomes. Usually, recruitment is postponed after the sample size of the first stage is achieved up until the outcomes of all patients are available. This may lead to a considerable increase of the trial length and with it to a delay in the drug development process. We propose a design where an intermediate endpoint is used in the interim analysis to decide whether or not the study is continued with a second stage. Optimal and minimax versions of this design are derived. The characteristics of the proposed design in terms of type I error rate, power, maximum and expected sample size as well as trial duration are investigated. Guidance is given on how to select the most appropriate design. Application is illustrated by a phase II oncology trial in patients with advanced angiosarcoma, which motivated this research.
Improved minimum cost and maximum power two stage genome-wide association study designs.
Stanhope, Stephen A; Skol, Andrew D
2012-01-01
In a two stage genome-wide association study (2S-GWAS), a sample of cases and controls is allocated into two groups, and genetic markers are analyzed sequentially with respect to these groups. For such studies, experimental design considerations have primarily focused on minimizing study cost as a function of the allocation of cases and controls to stages, subject to a constraint on the power to detect an associated marker. However, most treatments of this problem implicitly restrict the set of feasible designs to only those that allocate the same proportions of cases and controls to each stage. In this paper, we demonstrate that removing this restriction can improve the cost advantages demonstrated by previous 2S-GWAS designs by up to 40%. Additionally, we consider designs that maximize study power with respect to a cost constraint, and show that recalculated power maximizing designs can recover a substantial amount of the planned study power that might otherwise be lost if study funding is reduced. We provide open source software for calculating cost minimizing or power maximizing 2S-GWAS designs.
High pressure studies using two-stage diamond micro-anvils grown by chemical vapor deposition
Vohra, Yogesh K.; Samudrala, Gopi K.; Moore, Samuel L.; ...
2015-06-10
Ultra-high static pressures have been achieved in the laboratory using two-stage micro-ball nanodiamond anvils as well as two-stage micro-paired diamond anvils machined using a focused ion-beam system. The two-stage diamond anvil designs implemented thus far suffer from a limitation of one diamond anvil sliding past the other at extreme conditions. We describe a new method of fabricating two-stage diamond micro-anvils using a tungsten mask on a standard diamond anvil followed by microwave plasma chemical vapor deposition (CVD) homoepitaxial diamond growth. A prototype two-stage diamond anvil with a 300 μm culet and a CVD diamond second stage of 50 μm in diameter was fabricated. We have carried out preliminary high pressure X-ray diffraction studies on a sample of the rare-earth metal lutetium with a copper pressure standard to 86 GPa. Furthermore, the micro-anvil grown by CVD remained intact during indentation of the gasket as well as on decompression from the highest pressure of 86 GPa.
A two-dimensional biased coin design for dual-agent dose-finding trials.
Sun, Zhichao; Braun, Thomas M
2015-12-01
Given the limited efficacy observed with single agents, there is growing interest in Phase I clinical trial designs that allow for identification of the maximum tolerated combination of two agents. Existing parametric designs may suffer from over- or under-parameterization. Thus, we have designed a nonparametric approach that can be easily understood and implemented for combination trials. We propose a two-stage adaptive biased coin design that extends existing methods for single-agent trials to dual-agent dose-finding trials. The basic idea of our design is to divide the entire trial into two stages and apply the biased coin design, with modification, in each stage. We compare the operating characteristics of our design to four competing parametric approaches via simulation in several numerical examples. Under all simulation scenarios we have examined, our method performs well in terms of identification of the maximum tolerated combination and allocation of patients relative to the performance of its competitors. In our design, stopping rule criteria and the distribution of the total sample size between the two stages are context-dependent, and both need careful consideration before adopting our design in practice. Efficacy is not a part of the dose-assignment algorithm, nor used to define the maximum tolerated combination. Our design inherits the favorable statistical properties of the biased coin design, is competitive with existing designs, and promotes patient safety by limiting patient exposure to toxic combinations whenever possible. © The Author(s) 2015.
Segura-Correa, J C; Domínguez-Díaz, D; Avalos-Ramírez, R; Argaez-Sosa, J
2010-09-01
Knowledge of the intraherd correlation coefficient (ICC) and design effect (D) for infectious diseases could be of interest for sample size calculation and to provide the correct standard errors of prevalence estimates in cluster or two-stage sampling surveys. Information on 813 animals from 48 non-vaccinated cow-calf herds from North-eastern Mexico was used. The ICCs for the bovine viral diarrhoea (BVD), infectious bovine rhinotracheitis (IBR), leptospirosis and neosporosis diseases were calculated using a Bayesian approach adjusting for the sensitivity and specificity of the diagnostic tests. The ICC and D values for BVD, IBR, leptospirosis and neosporosis were 0.31 and 5.91, 0.18 and 3.88, 0.22 and 4.53, and 0.11 and 2.68, respectively. The ICC values were different from 0 and D greater than 1; therefore, larger sample sizes are required to obtain the same precision in prevalence estimates as for a simple random sampling design. The report of ICC and D values is of great help in planning and designing two-stage sampling studies. © 2010 Elsevier B.V. All rights reserved.
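The reported D values are consistent with the classical relation D = 1 + (m̄ − 1)·ICC, taking m̄ = 813/48 ≈ 16.9 animals per herd. The use of the simple average herd size here is our assumption; the paper's Bayesian adjustment will give slightly different values:

```python
def design_effect(icc, m):
    """Classical design effect for cluster samples: D = 1 + (m - 1) * ICC,
    where m is the average cluster (herd) size."""
    return 1 + (m - 1) * icc

m_bar = 813 / 48  # average animals per herd, ~16.9 (our assumption)
for disease, icc in [("BVD", 0.31), ("IBR", 0.18),
                     ("leptospirosis", 0.22), ("neosporosis", 0.11)]:
    d = design_effect(icc, m_bar)
    print(f"{disease}: D ~ {d:.2f}, effective sample size ~ {813 / d:.0f}")
```

The effective sample size n/D makes the abstract's point concrete: for BVD, the 813 sampled animals carry roughly the information of only about 140 independently sampled ones.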
Stehman, S.V.; Wickham, J.D.; Wade, T.G.; Smith, J.H.
2008-01-01
The database design and diverse application of NLCD 2001 pose significant challenges for accuracy assessment because numerous objectives are of interest, including accuracy of land-cover, percent urban imperviousness, percent tree canopy, land-cover composition, and net change. A multi-support approach is needed because these objectives require spatial units of different sizes for reference data collection and analysis. Determining a sampling design that meets the full suite of desirable objectives for the NLCD 2001 accuracy assessment requires reconciling potentially conflicting design features that arise from targeting the different objectives. Multi-stage cluster sampling provides the general structure to achieve a multi-support assessment, and the flexibility to target different objectives at different stages of the design. We describe the implementation of two-stage cluster sampling for the initial phase of the NLCD 2001 assessment, and identify gaps in existing knowledge where research is needed to allow full implementation of a multi-objective, multi-support assessment. © 2008 American Society for Photogrammetry and Remote Sensing.
An optimal stratified Simon two-stage design.
Parashar, Deepak; Bowden, Jack; Starr, Colin; Wernisch, Lorenz; Mander, Adrian
2016-07-01
In Phase II oncology trials, therapies are increasingly being evaluated for their effectiveness in specific populations of interest. Such targeted trials require designs that allow for stratification based on the participants' molecular characterisation. A targeted design proposed by Jones and Holmgren (JH) [Jones CL, Holmgren E: 'An adaptive Simon two-stage design for phase 2 studies of targeted therapies', Contemporary Clinical Trials 28 (2007) 654-661] determines whether a drug only has activity in a disease sub-population or in the wider disease population. Their adaptive design uses results from a single interim analysis to decide whether to enrich the study population with a subgroup or not; it is based on two parallel Simon two-stage designs. We study the JH design in detail and extend it by providing a few alternative ways to control the familywise error rate, in the weak sense as well as the strong sense. We also introduce a novel optimal design by minimising the expected sample size. Our extended design contributes to the much needed framework for conducting Phase II trials in stratified medicine. © 2016 The Authors. Pharmaceutical Statistics published by John Wiley & Sons Ltd.
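As background for the JH design, which runs two such designs in parallel, the operating characteristics of a single Simon two-stage design follow from binomial probabilities: stop for futility after stage 1 if responses X1 ≤ r1, and declare the drug promising only if total responses exceed r. The parameters below are Simon's optimal design for p0 = 0.1 versus p1 = 0.3 (a standard published example, used here for illustration):

```python
from scipy.stats import binom

def simon_oc(p, n1, r1, n, r):
    """P(declare promising), P(early termination), and E[N] for a Simon
    two-stage design: stop if X1 <= r1, else promising iff X1 + X2 > r."""
    n2 = n - n1
    pet = binom.cdf(r1, n1, p)                       # early termination prob.
    promising = sum(binom.pmf(x, n1, p) * (1 - binom.cdf(r - x, n2, p))
                    for x in range(r1 + 1, n1 + 1))
    return promising, pet, n1 + (1 - pet) * n2

# Simon's optimal design for p0 = 0.1, p1 = 0.3 (alpha = 0.05, beta = 0.2).
alpha, pet0, en0 = simon_oc(0.10, n1=10, r1=1, n=29, r=5)
power, _, _ = simon_oc(0.30, n1=10, r1=1, n=29, r=5)
print(round(alpha, 3), round(power, 3), round(en0, 1))
```

Minimising expected sample size, as the abstract's novel optimal design does across strata, amounts to searching (n1, r1, n, r) combinations for the smallest E[N] subject to these error-rate constraints.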
Methodological proposal for the remediation of a site affected by phosphogypsum deposits
NASA Astrophysics Data System (ADS)
Martínez-Sanchez, M. J.; Perez-Sirvent, C.; Bolivar, J. P.; Garcia-Tenorio, R.
2012-04-01
The accumulation of phosphogypsum (PY) produces well-known environmental problems. Proposals for the remediation of these sites require multidisciplinary and very specific studies. Since such sites cover large areas, a sampling design specifically outlined for each case is necessary so that the contaminants, transfer pathways and particular processes can be correctly identified. In addition to suitable sampling of the soil, aquatic medium and biota, appropriate studies of the spatio-temporal variations by means of control samples are required. Two different stages should be considered: (1) Diagnostic stage. This stage includes preliminary studies, identification of possible sources of radioisotopes, design of the appropriate sampling plan, a hydrogeological study, characterization and study of the spatio-temporal variability of radioisotopes and other contaminants, as well as the risk assessment for health and ecosystems, which depends on the future use of the site. (2) Remediation proposal stage. This comprises the evaluation and comparison of the different procedures for decontamination/remediation, including model experiments in the laboratory. In this respect, the preparation and detailed study of a small-scale pilot project is a task of particular relevance; in this way the suitability of the remediation technology can be checked and its performance optimized. These two stages allow a technically well-founded proposal to be presented to the organisms or institutions in charge of the problem and facilitate decision-making. Both stages should be accompanied by a social communication campaign so that the final proposal is accepted by stakeholders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colby, Robert J.; Alsem, Daan H.; Liyu, Andrey V.
2015-06-01
The development of environmental transmission electron microscopy (TEM) has enabled in situ experiments in a gaseous environment with high resolution imaging and spectroscopy. Addressing scientific challenges in areas such as catalysis, corrosion, and geochemistry can require pressures much higher than the ~20 mbar achievable with a differentially pumped, dedicated environmental TEM. Gas flow stages, in which the environment is contained between two semi-transparent thin membrane windows, have been demonstrated at pressures of several atmospheres. While this constitutes significant progress towards operando measurements, the design of many current gas flow stages is such that the pressure at the sample cannot necessarily be directly inferred from the pressure differential across the system. Small differences in the setup and design of the gas flow stage can lead to very different sample pressures. We demonstrate a method for measuring the gas pressure directly, using a combination of electron energy loss spectroscopy and TEM imaging. This method requires only two energy filtered TEM images, limiting the measurement time to a few seconds, and can be performed during an ongoing experiment at the region of interest. This approach provides a means to ensure reproducibility between different experiments, and even between very differently designed gas flow stages.
National accident sampling system sample design, phases 2 and 3 : executive summary
DOT National Transportation Integrated Search
1979-11-01
This report describes the Phase 2 and 3 sample design for the National Accident Sampling System (NASS). It recommends a procedure for the first-stage selection of Primary Sampling Units (PSU's) and the second-stage design for the selection of a...
A versatile rotary-stage high frequency probe station for studying magnetic films and devices
NASA Astrophysics Data System (ADS)
He, Shikun; Meng, Zhaoliang; Huang, Lisen; Yap, Lee Koon; Zhou, Tiejun; Panagopoulos, Christos
2016-07-01
We present a rotary-stage microwave probe station suitable for magnetic films and spintronic devices. Two stages have been designed: one for field rotation from parallel to perpendicular to the sample plane (out-of-plane) and the other for field rotation within the sample plane (in-plane). The sample probes and micro-positioners are rotated simultaneously with the stages, which allows the field orientation to cover θ from 0∘ to 90∘ and φ from 0∘ to 360∘, θ and φ being the angles between the direction of current flow and the field in an out-of-plane and an in-plane rotation, respectively. The operation frequency is up to 40 GHz and the magnetic field up to 1 T. The sample holder, vision system and probe assembly are compactly designed for the probes to land on a wafer with a diameter up to 3 cm. Using homemade multi-pin probes and commercially available high frequency probes, several applications including 4-probe DC measurements, the determination of domain wall velocity, and spin transfer torque ferromagnetic resonance are demonstrated.
Galway, Lp; Bell, Nathaniel; Sae, Al Shatari; Hagopian, Amy; Burnham, Gilbert; Flaxman, Abraham; Weiss, Wiliam M; Rajaratnam, Julie; Takaro, Tim K
2012-04-27
Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, posing a challenge of estimating mortality using retrospective population-based surveys. We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage and Google Earth TM imagery and sampling grids to select households in the second sampling stage. The sampling method is implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Sampling is a challenge in retrospective population-based mortality studies and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context specific challenges of the study setting. This sampling strategy, or variations on it, are adaptable and should be considered and tested in other conflict settings.
The China Mental Health Survey: II. Design and field procedures.
Liu, Zhaorui; Huang, Yueqin; Lv, Ping; Zhang, Tingting; Wang, Hong; Li, Qiang; Yan, Jie; Yu, Yaqin; Kou, Changgui; Xu, Xiufeng; Lu, Jin; Wang, Zhizhong; Qiu, Hongyan; Xu, Yifeng; He, Yanling; Li, Tao; Guo, Wanjun; Tian, Hongjun; Xu, Guangming; Xu, Xiangdong; Ma, Yanjuan; Wang, Linhong; Wang, Limin; Yan, Yongping; Wang, Bo; Xiao, Shuiyuan; Zhou, Liang; Li, Lingjiang; Tan, Liwen; Chen, Hongguang; Ma, Chao
2016-11-01
The China Mental Health Survey (CMHS), carried out from July 2013 to March 2015, was the first nationally representative community survey of mental disorders and mental health services in China to use computer-assisted personal interviewing (CAPI). Face-to-face interviews were conducted in the homes of respondents selected through a nationally representative multi-stage disproportionate stratified sampling procedure. Sample selection was integrated with the National Chronic Disease and Risk Factor Surveillance Survey administered by the National Centre for Chronic and Non-communicable Disease Control and Prevention in 2013, making it possible to obtain both physical and mental health information on the Chinese community population. A one-stage data-collection design was used in the CMHS to assess mood disorders, anxiety disorders, and substance use disorders, while a two-stage design was applied for schizophrenia and other psychotic disorders, and for dementia. A total of 28,140 respondents completed the survey, for an overall response rate of 72.9%. This paper describes the survey mode, fieldwork organization, procedures, and the sample design and weighting of the CMHS. Detailed information is presented on the establishment of a new payment scheme for interviewers, the results of quality control in both stages, and evaluations of the weighting.
Ho, Lindsey A; Lange, Ethan M
2010-12-01
Genome-wide association (GWA) studies are a powerful approach for identifying novel genetic risk factors associated with human disease. A GWA study typically requires thousands of samples to have sufficient statistical power to detect single nucleotide polymorphisms associated with only modest increases in disease risk, given the heavy burden of the multiple-test correction necessary to maintain valid statistical tests. Low statistical power and the high financial cost of performing a GWA study remain prohibitive for many scientific investigators eager to perform such a study using their own samples. A number of remedies have been suggested to increase statistical power and decrease cost, including the use of freely available public genotype data and multi-stage genotyping designs. Herein, we compare the statistical power and relative costs of association study designs that use cases and screened controls to designs that are based only on, or additionally include, free public control genotype data. We describe a novel replication-based two-stage study design, which uses free public control genotype data in the first stage and follow-up genotype data on case-matched controls in the second stage, preserving many of the advantages inherent in using only an epidemiologically matched set of controls. Specifically, we show that our proposed two-stage design can substantially increase statistical power and decrease the cost of performing a GWA study while controlling the type I error rate, which can be inflated when using public controls because of differences in ancestry and batch genotype effects.
A Simple Analytic Model for Estimating Mars Ascent Vehicle Mass and Performance
NASA Technical Reports Server (NTRS)
Woolley, Ryan C.
2014-01-01
The Mars Ascent Vehicle (MAV) is a crucial component in any sample return campaign. In this paper we present a universal model for a two-stage MAV along with the analytic equations and simple parametric relationships necessary to quickly estimate MAV mass and performance. Ascent trajectories can be modeled as two-burn transfers from the surface with appropriate loss estimations for finite burns, steering, and drag. Minimizing lift-off mass is achieved by balancing optimized staging and an optimized path-to-orbit. This model allows designers to quickly find optimized solutions and to see the effects of design choices.
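The paper's own analytic model is not reproduced in the abstract; the sketch below shows only the generic bottom-up two-stage sizing idea using the Tsiolkovsky rocket equation, where each lower stage must lift everything above it. The delta-v split, Isp values, and structural fractions in the example are hypothetical placeholders, not MAV design values:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_masses(payload_kg, dv_split, isp_s, eps):
    """Bottom-up n-stage lift-off mass estimate from the rocket equation.

    dv_split -- delta-v assigned to each stage, m/s (loss allowances folded in)
    isp_s    -- specific impulse of each stage, s
    eps      -- structural fraction of each stage (structure mass / full stage mass)
    Returns total lift-off mass in kg.
    """
    mass = payload_kg
    # Size the upper stage first; each lower stage carries all mass above it.
    for dv, isp, e in reversed(list(zip(dv_split, isp_s, eps))):
        r = math.exp(dv / (isp * G0))  # required mass ratio m0 / mf
        if r * e >= 1.0:
            raise ValueError("stage delta-v infeasible for this Isp and structure")
        # From m0 = r * mf with mf = payload_above + e * m_stage:
        stage = mass * (r - 1.0) / (1.0 - r * e)
        mass += stage
    return mass

# Hypothetical example: 20 kg payload, 2500 + 1800 m/s split, solid-motor-like Isp
liftoff = stage_masses(20.0, [2500.0, 1800.0], [290.0, 290.0], [0.15, 0.20])
```

Minimizing `liftoff` over the delta-v split is the "optimized staging" balance the abstract refers to.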
ERIC Educational Resources Information Center
Li, Tiandong
2012-01-01
In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…
Two-sample binary phase 2 trials with low type I error and low sample size
Litwin, Samuel; Basickes, Stanley; Ross, Eric A.
2017-01-01
We address design of two-stage clinical trials comparing experimental and control patients. Our end-point is success or failure, however measured, with null hypothesis that the chance of success in both arms is p0 and alternative that it is p0 among controls and p1 > p0 among experimental patients. Standard rules will have the null hypothesis rejected when the number of successes in the (E)xperimental arm, E, sufficiently exceeds C, that among (C)ontrols. Here, we combine one-sample rejection decision rules, E ≥ m, with two-sample rules of the form E – C > r to achieve two-sample tests with low sample number and low type I error. We find designs with sample numbers not far from the minimum possible using standard two-sample rules, but with type I error of 5% rather than 15% or 20% associated with them, and of equal power. This level of type I error is achieved locally, near the stated null, and increases to 15% or 20% when the null is significantly higher than specified. We increase the attractiveness of these designs to patients by using 2:1 randomization. Examples of the application of this new design covering both high and low success rates under the null hypothesis are provided. PMID:28118686
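A combined rejection rule of the form E ≥ m and E − C > r can be checked by direct binomial enumeration. The sketch below computes the single-look (final-analysis) type I error under the null; it deliberately ignores the paper's two-stage stopping structure, and the arm sizes and thresholds used are illustrative, not the authors' designs:

```python
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def type_i_error(n_e: int, n_c: int, p0: float, m: int, r: int) -> float:
    """P(E >= m and E - C > r) with E ~ Bin(n_e, p0), C ~ Bin(n_c, p0) independent.

    This is the chance of wrongly declaring the experimental arm superior when
    both arms truly have success probability p0."""
    alpha = 0.0
    for e in range(m, n_e + 1):
        c_max = min(n_c, e - r - 1)  # E - C > r  <=>  C <= e - r - 1
        if c_max < 0:
            continue
        p_c = sum(binom_pmf(c, n_c, p0) for c in range(c_max + 1))
        alpha += binom_pmf(e, n_e, p0) * p_c
    return alpha

# Hypothetical 2:1 randomization: 20 experimental, 10 control, null rate 0.2
alpha = type_i_error(20, 10, 0.2, m=8, r=2)
```

Raising either m or r tightens the rule, which is the mechanism behind the low local type I error described above.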
A versatile rotary-stage high frequency probe station for studying magnetic films and devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Shikun; Division of Physics and Applied Physics, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore 637371; Meng, Zhaoliang
We present a rotary-stage microwave probe station suitable for magnetic films and spintronic devices. Two stages, one for field rotation from parallel to perpendicular to the sample plane (out-of-plane) and the other for field rotation within the sample plane (in-plane), have been designed. The sample probes and micro-positioners are rotated simultaneously with the stages, which allows the field orientation to cover θ from 0∘ to 90∘ and φ from 0∘ to 360∘, where θ and φ are the angles between the direction of current flow and the field in an out-of-plane and an in-plane rotation, respectively. The operation frequency is up to 40 GHz and the magnetic field up to 1 T. The sample holder, vision system, and probe assembly are compactly designed for the probes to land on a wafer with a diameter of up to 3 cm. Using homemade multi-pin probes and commercially available high-frequency probes, several applications, including 4-probe DC measurements, the determination of domain wall velocity, and spin transfer torque ferromagnetic resonance, are demonstrated.
Rothmann, Mark
2005-01-01
When testing the equality of means from two different populations, a t-test or a large-sample normal test is typically performed. For these tests, when the sample size or design for the second sample depends on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We examine the impact on the type I error probabilities of two confidence interval procedures and of procedures using test statistics when the design for the second sample or experiment depends on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate are discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials, where the use of a placebo is unethical.
Yoon, Dukyong; Kim, Hyosil; Suh-Kim, Haeyoung; Park, Rae Woong; Lee, KiYoung
2011-01-01
Microarray analyses based on differentially expressed genes (DEGs) have been widely used to distinguish samples across different cellular conditions. However, studies based on DEGs have not been able to clearly distinguish samples of pathophysiologically similar HIV-1 stages, e.g., between acute and chronic progressive (or AIDS) stages, or between uninfected and clinically latent stages. We suggest a novel approach that allows such discrimination based on stage-specific genetic features of HIV-1 infection. Our approach is based on co-expression changes of genes known to interact, and it can identify a genetic signature for a single sample, in contrast to existing protein-protein-based analyses with correlational designs. The approach distinguishes each sample using differentially co-expressed interacting protein pairs (DEPs), based on the co-expression scores of individual interacting pairs within a sample. The co-expression score is positive if two genes in a sample are simultaneously up-regulated or down-regulated, and its absolute value is higher when the expression-change ratios of the two genes are similar. We compared the characteristics of DEPs with those of DEGs by evaluating their usefulness in separating HIV-1 stages, and we identified DEP-based network modules and their gene-ontology enrichment to find an HIV-1 stage-specific gene signature. Based on the DEP approach, we observed clear separation among samples from distinct HIV-1 stages using clustering and principal component analyses. Moreover, the discrimination power of DEPs on the samples (70-100% accuracy) was much higher than that of DEGs (35-45%) using several well-known classifiers. DEP-based network analysis also revealed HIV-1 stage-specific network modules; the main biological processes were related to "translation," "RNA splicing," "mRNA, RNA, and nucleic acid transport," and "DNA metabolism." Through the HIV-1 stage-related modules, changing stage-specific patterns of protein interactions could be observed. The DEP-based method discriminated the HIV-1 infection stages clearly and revealed an HIV-1 stage-specific gene signature. The proposed DEP-based method might complement existing DEG-based approaches in various microarray expression analyses.
Evaluation of portable air samplers for monitoring airborne culturable bacteria
NASA Technical Reports Server (NTRS)
Mehta, S. K.; Bell-Robinson, D. M.; Groves, T. O.; Stetzenbach, L. D.; Pierson, D. L.
2000-01-01
Airborne culturable bacteria were monitored at five locations (three in an office/laboratory building and two in a private residence) in a series of experiments designed to compare the efficiency of four air samplers: the Andersen two-stage, Burkard portable, RCS Plus, and SAS Super 90 samplers. A total of 280 samples were collected. The four samplers were operated simultaneously, each sampling 100 L of air with collection on trypticase soy agar. The data were corrected by applying positive hole conversion factors for the Burkard portable, Andersen two-stage, and SAS Super 90 air samplers, and were expressed as log10 values prior to statistical analysis by analysis of variance. The Burkard portable air sampler retrieved the highest number of airborne culturable bacteria at four of the five sampling sites, followed by the SAS Super 90 and the Andersen two-stage impactor. The number of bacteria retrieved by the RCS Plus was significantly lower than those retrieved by the other samplers. Among the predominant bacterial genera retrieved by all samplers were Staphylococcus, Bacillus, Corynebacterium, Micrococcus, and Streptococcus.
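The "positive hole conversion" mentioned above corrects multi-hole impactor counts for the chance that several particles pass through the same hole and grow as a single colony. A minimal sketch of the standard correction follows; the 400-hole value in the example is typical of Andersen impactor stages but is an assumption here, not a figure from the study:

```python
def positive_hole_correction(raw_count: int, holes: int) -> float:
    """Expected particle count from an observed colony count on a multi-hole
    impactor plate: P_r = N * (1/N + 1/(N-1) + ... + 1/(N-r+1)).

    Each successive particle is less likely to land in a still-empty hole,
    so the corrected count grows faster than the raw colony count."""
    if raw_count > holes:
        raise ValueError("observed colony count cannot exceed the number of holes")
    return holes * sum(1.0 / (holes - i) for i in range(raw_count))

# Illustrative: 100 colonies on a 400-hole stage imply somewhat more particles
corrected = positive_hole_correction(100, 400)
```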
Applying the Hájek Approach in Formula-Based Variance Estimation. Research Report. ETS RR-17-24
ERIC Educational Resources Information Center
Qian, Jiahe
2017-01-01
The variance formula derived for a two-stage sampling design without replacement employs the joint inclusion probabilities in the first-stage selection of clusters. One of the difficulties encountered in data analysis is the lack of information about such joint inclusion probabilities. One way to solve this issue is by applying Hájek's…
An Australian Version of the Neighborhood Environment Walkability Scale: Validity Evidence
ERIC Educational Resources Information Center
Cerin, Ester; Leslie, Eva; Owen, Neville; Bauman, Adrian
2008-01-01
This study examined validity evidence for the Australian version of the Neighborhood Environment Walkability Scale (NEWS-AU). A stratified two-stage cluster sampling design was used to recruit 2,650 adults from Adelaide (Australia). The sample was drawn from residential addresses within eight high-walkable and eight low-walkable suburbs matched…
Lestini, Giulia; Dumont, Cyrielle; Mentré, France
2015-01-01
In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e., when no adaptation is performed, using wrong prior parameters. We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three- and five-stage ADs were better than two-stage with a small first cohort, but not better than the balanced two-stage design. Two-stage ADs are useful when prior parameters are unreliable. In case of a small first cohort, more adaptations are needed, but these designs are complex to implement. PMID:26123680
Improving the accuracy of livestock distribution estimates through spatial interpolation.
Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy
2012-11-01
Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps are highly dependent on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of using different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design, and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, results show that an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g., a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g., a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared to one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g., when aggregating cattle number estimates from subcounty to district level, P < 0.009, based on a sample of 2,077 parishes using one-stage stratified samples). During aggregation, area-weighted mean values were assigned to higher administrative unit levels. However, when this step is preceded by spatial interpolation to fill in missing values in non-sampled areas, accuracy improves remarkably. This holds especially for low sample sizes and spatially evenly distributed samples (e.g., P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level). Whether the same observations apply at a lower spatial scale should be investigated further.
Number of pins in two-stage stratified sampling for estimating herbage yield
William G. O' Regan; C. Eugene Conrad
1975-01-01
In a two-stage stratified procedure for sampling herbage yield, plots are stratified by a pin frame in stage one, and clipped. In stage two, clippings from selected plots are sorted, dried, and weighed. Sample size and distribution of plots between the two stages are determined by equations. A way to compute the effect of number of pins on the variance of estimated...
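The equations behind such allocations are not shown in the abstract, but the classic Cochran-style cost-variance trade-off for two-stage sampling illustrates how effort can be split between primary units (plots) and secondary units (pins or clippings). The variance components and unit costs below are hypothetical:

```python
import math

def optimal_subsample_size(s1_sq: float, s2_sq: float, c1: float, c2: float) -> float:
    """Cost-optimal number of second-stage units per primary unit:
    m_opt = sqrt(c1 * s2^2 / (c2 * s1^2))  (Cochran's two-stage result)."""
    return math.sqrt((c1 * s2_sq) / (c2 * s1_sq))

def variance_of_mean(n: int, m: float, s1_sq: float, s2_sq: float) -> float:
    """V(ybar) = s1^2/n + s2^2/(n*m), ignoring finite-population corrections."""
    return s1_sq / n + s2_sq / (n * m)

# Hypothetical: between-plot variance 4, within-plot variance 16,
# plot setup cost 9 units, per-subsample cost 1 unit
m_opt = optimal_subsample_size(4.0, 16.0, 9.0, 1.0)  # 6.0 subsamples per plot
```

Cheap second-stage units and large within-plot variance both push the optimum toward more subsampling per plot, which matches the intuition behind varying the number of pins.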
El-Zein, Mariam; Conus, Florence; Benedetti, Andrea; Parent, Marie-Elise; Rousseau, Marie-Claude
2016-01-01
When using administrative databases for epidemiologic research, a subsample of subjects can be interviewed, eliciting information on undocumented confounders. This article presents a thorough investigation of the validity of a two-stage sample, encompassing an assessment of nonparticipation and quantification of the extent of bias. Established through record linkage of administrative databases, the Québec Birth Cohort on Immunity and Health (n = 81,496) aims to study the association between Bacillus Calmette-Guérin (BCG) vaccination and asthma. Among 76,623 subjects classified in four BCG-asthma strata, a two-stage sampling strategy with a balanced design was used to randomly select individuals for interviews. We compared stratum-specific sociodemographic characteristics and healthcare utilization of stage 2 participants (n = 1,643) with those of eligible nonparticipants (n = 74,980) and nonrespondents (n = 3,157). We used logistic regression to determine whether participation varied across strata according to these characteristics. The effect of nonparticipation was described by the relative odds ratio (ROR = OR_participants / OR_source population) for the association between sociodemographic characteristics and asthma. Parental age at childbirth, area of residence, family income, and healthcare utilization were comparable between groups. Participants were slightly more likely to be women and to have a mother born in Québec. Participation did not vary across strata by sex, parental birthplace, or material and social deprivation. Estimates were not biased by nonparticipation; most RORs were below one, and bias never exceeded 20%. Our analyses evaluate and provide a detailed demonstration of the validity of a two-stage sample for researchers assembling similar research infrastructures.
Silverman, Rachel K; Ivanova, Anastasia
2017-01-01
Sequential parallel comparison design (SPCD) was proposed to reduce placebo response in a randomized trial with a placebo comparator. Subjects are randomized between placebo and drug in stage 1 of the trial, and placebo non-responders are then re-randomized in stage 2. The efficacy analysis includes all data from stage 1 and all placebo non-responding subjects from stage 2. This article investigates the possibility of re-estimating the sample size and adjusting the SPCD design parameters (the allocation proportion to placebo in stage 1 and the weight given to stage 1 data in the overall efficacy test statistic) during an interim analysis.
A Pilot Sampling Design for Estimating Outdoor Recreation Site Visits on the National Forests
Stanley J. Zarnoch; S.M. Kocis; H. Ken Cordell; D.B.K. English
2002-01-01
A pilot sampling design is described for estimating site visits to National Forest System lands. The three-stage sampling design consisted of national forest ranger districts, site days within ranger districts, and last-exiting recreation visitors within site days. Stratification was used at both the primary and secondary stages. Ranger districts were stratified based...
Arabi, Yaseen M; Alothman, Adel; Balkhy, Hanan H; Al-Dawood, Abdulaziz; AlJohani, Sameera; Al Harbi, Shmeylan; Kojan, Suleiman; Al Jeraisy, Majed; Deeb, Ahmad M; Assiri, Abdullah M; Al-Hameed, Fahad; AlSaedi, Asim; Mandourah, Yasser; Almekhlafi, Ghaleb A; Sherbeeni, Nisreen Murad; Elzein, Fatehi Elnour; Memon, Javed; Taha, Yusri; Almotairi, Abdullah; Maghrabi, Khalid A; Qushmaq, Ismael; Al Bshabshe, Ali; Kharaba, Ayman; Shalhoub, Sarah; Jose, Jesna; Fowler, Robert A; Hayden, Frederick G; Hussein, Mohamed A
2018-01-30
More than 5 years have passed since the first case of Middle East Respiratory Syndrome coronavirus (MERS-CoV) infection was recorded, but no specific treatment has been investigated in randomized clinical trials. Results from in vitro and animal studies suggest that a combination of lopinavir/ritonavir and interferon-β1b (IFN-β1b) may be effective against MERS-CoV. The aim of this study is to investigate the efficacy of treatment with a combination of lopinavir/ritonavir and recombinant IFN-β1b, given with standard supportive care, compared to placebo given with standard supportive care, in patients with laboratory-confirmed MERS requiring hospital admission. The protocol was prepared in accordance with the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) guidelines. Hospitalized adult patients with laboratory-confirmed MERS will be enrolled in this recursive, two-stage, group sequential, multicenter, placebo-controlled, double-blind randomized controlled trial. The trial is initially designed to include two two-stage components. The first two-stage component is designed to adjust sample size and determine futility stopping, but not efficacy stopping. The second two-stage component is designed to determine efficacy stopping and possibly readjust the sample size. The primary outcome is 90-day mortality. This will be the first randomized controlled trial of a potential treatment for MERS. The study is sponsored by King Abdullah International Medical Research Center, Riyadh, Saudi Arabia. Enrollment for this study began in November 2016 and had reached thirteen patients as of January 24, 2018. ClinicalTrials.gov, ID: NCT02845843. Registered on 27 July 2016.
Kim, J-J; Joo, S H; Lee, K S; Yoo, J H; Park, M S; Kwak, J S; Lee, Jinho
2017-04-01
The Low Temperature Scanning Tunneling Microscope (LT-STM) is an extremely valuable tool not only in surface science but also in condensed matter physics. For years, numerous new ideas have been adopted to perfect LT-STM performance; the Ultra-Low Vibration (ULV) laboratory and rigid STM head designs are among them. Here, we present three improvements to the design of the ULV laboratory and the LT-STM: a tip treatment stage, a sample cleaving stage, and a vibration isolation system. The improved tip treatment stage enables us to perform field emission for tip treatment in situ without exchanging samples, while our enhanced sample cleaving stage allows us to cleave samples at low temperature in a vacuum, without optical access, by a simple pressing motion. Our newly designed vibration isolation system provides efficient use of space while maintaining vibration isolation capability. These improvements enhance the quality of spectroscopic imaging experiments that can last for many days and provide increased data yield, and we expect them to be indispensable elements in future LT-STM designs.
Shi, Haolun; Yin, Guosheng
2018-02-21
Simon's two-stage design is one of the most commonly used methods in phase II clinical trials with binary endpoints. The design tests the null hypothesis that the response rate is less than an uninteresting level, versus the alternative hypothesis that the response rate is greater than a desirable target level. From a Bayesian perspective, we compute the posterior probabilities of the null and alternative hypotheses given that a promising result is declared in Simon's design. Our study reveals that because the frequentist hypothesis testing framework places its focus on the null hypothesis, a potentially efficacious treatment identified by rejecting the null under Simon's design could have only less than 10% posterior probability of attaining the desirable target level. Due to the indifference region between the null and alternative, rejecting the null does not necessarily mean that the drug achieves the desirable response level. To clarify such ambiguity, we propose a Bayesian enhancement two-stage (BET) design, which guarantees a high posterior probability of the response rate reaching the target level, while allowing for early termination and sample size saving in case that the drug's response rate is smaller than the clinically uninteresting level. Moreover, the BET design can be naturally adapted to accommodate survival endpoints. We conduct extensive simulation studies to examine the empirical performance of our design and present two trial examples as applications. © 2018, The International Biometric Society.
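The operating characteristics of Simon's two-stage design that the BET design is weighed against can be computed by binomial enumeration. The sketch below uses a commonly quoted optimal design for p0 = 0.1 versus p1 = 0.3 (r1/n1 = 1/10, r/n = 5/29) as a worked example; any other (r1, n1, r, n) can be substituted:

```python
from math import comb

def pmf(k: int, n: int, p: float) -> float:
    """Binomial probability mass function."""
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def simon_characteristics(r1: int, n1: int, r: int, n: int, p: float):
    """PET, expected sample size, and P(declare promising) for Simon's design.

    Stage 1: stop for futility if responses X1 <= r1 out of n1.
    Final:   declare promising if X1 + X2 > r out of n total."""
    pet = sum(pmf(x, n1, p) for x in range(r1 + 1))   # prob. of early termination
    expected_n = n1 + (1.0 - pet) * (n - n1)
    p_promising = sum(
        pmf(x1, n1, p)
        * sum(pmf(x2, n - n1, p) for x2 in range(max(0, r - x1 + 1), n - n1 + 1))
        for x1 in range(r1 + 1, n1 + 1)
    )
    return pet, expected_n, p_promising

# Under the null rate p0 = 0.1: high chance of stopping early, modest expected N
pet0, en0, alpha = simon_characteristics(1, 10, 5, 29, 0.1)
```

Evaluating `p_promising` at p0 gives the frequentist type I error; the BET critique above concerns what that rejection implies a posteriori, not this calculation itself.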
Rigamonti, Ivo E; Brambilla, Carla; Colleoni, Emanuele; Jermini, Mauro; Trivellone, Valeria; Baumgärtner, Johann
2016-04-01
The paper deals with the spatial distribution and the design of sampling plans for estimating nymph densities of the grape leafhopper Scaphoideus titanus Ball in vine plant canopies. In a reference vineyard sampled for model parameterization, leaf samples were repeatedly taken according to a multistage, stratified, random sampling procedure, and the data were subjected to an ANOVA. There were no significant differences in density among the strata within the vineyard, nor between the two strata with basal and apical leaves. The significant differences between densities on trunk and productive shoots led to the adoption of two-stage (leaves and plants) and three-stage (leaves, shoots, and plants) sampling plans for trunk shoot- and productive shoot-inhabiting individuals, respectively. The mean crowding to mean relationship used to analyze the nymphs' spatial distribution revealed aggregated distributions. In both the enumerative and the sequential enumerative sampling plans, the number of leaves of trunk shoots, and of leaves and shoots of productive shoots, was kept constant while the number of plants varied. Data collected in additional vineyards were used to test the applicability of the distribution model and the sampling plans. The tests confirmed the applicability (1) of the mean crowding to mean regression model on the plant and leaf stages for representing trunk shoot-inhabiting distributions, and on the plant, shoot, and leaf stages for productive shoot-inhabiting nymphs, (2) of the enumerative sampling plan, and (3) of the sequential enumerative sampling plan. In general, sequential enumerative sampling was more cost-efficient than enumerative sampling.
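The mean crowding to mean relationship referred to above is Lloyd's mean crowding combined with Iwao's patchiness regression, from which enumerative sample sizes are commonly derived. A minimal sketch follows; the regression coefficients and precision level in the example are illustrative, not the paper's estimates:

```python
def mean_crowding(mean: float, var: float) -> float:
    """Lloyd's mean crowding: m* = m + (var/m - 1).

    m* > m indicates aggregation; m* = m corresponds to a random (Poisson) pattern."""
    return mean + (var / mean - 1.0)

def iwao_sample_size(alpha: float, beta: float, mean: float, precision: float) -> float:
    """Enumerative sample size from Iwao's regression m* = alpha + beta * m:
    n = ((alpha + 1)/m + beta - 1) / D^2, for relative precision D (SE/mean)."""
    return ((alpha + 1.0) / mean + beta - 1.0) / precision**2

# Hypothetical: alpha = 0.5, beta = 1.4 (aggregated), mean density 2 nymphs/leaf,
# target precision D = 0.2
n_required = iwao_sample_size(0.5, 1.4, 2.0, 0.2)
```

A slope beta above 1 makes the required sample size grow with aggregation, which is why aggregated nymph distributions demand heavier sampling than random ones.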
NASA Technical Reports Server (NTRS)
Tomberlin, T. J.
1985-01-01
Research studies of residents' responses to noise consist of interviews with samples of individuals drawn from a number of different compact study areas. The statistical techniques developed here provide a basis for the sample design decisions in such studies. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or, in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
A 9-Bit 50 MSPS Quadrature Parallel Pipeline ADC for Communication Receiver Application
NASA Astrophysics Data System (ADS)
Roy, Sounak; Banerjee, Swapna
2018-03-01
This paper presents the design and implementation of a pipeline Analog-to-Digital Converter (ADC) for superheterodyne receiver application. Several enhancement techniques have been applied in implementing the ADC in order to relax the target specifications of its building blocks. The concepts of time interleaving and double sampling have been used simultaneously to enhance the sampling speed and to reduce the number of amplifiers used in the ADC. Removal of a front-end sample-and-hold amplifier is possible by employing dynamic comparators with switched-capacitor-based comparison of the input signal and reference voltage. Each module of the ADC comprises two 2.5-bit stages followed by two 1.5-bit stages and a 3-bit flash stage. Four such pipeline ADC modules are time interleaved using two pairs of non-overlapping clock signals. These two pairs of clock signals are in phase quadrature with each other, hence the term quadrature parallel pipeline ADC. These configurations ensure that the entire ADC contains only eight operational transconductance amplifiers. The ADC is implemented in a 0.18-μm CMOS process with a supply voltage of 1.8 V. The prototype is tested at sampling frequencies of 50 and 75 MSPS, producing an Effective Number of Bits (ENOB) of 6.86 and 6.11 bits, respectively. At peak sampling speed, the core ADC consumes only 65 mW of power.
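The quoted ENOB figures follow from the generic ADC identity relating ENOB to measured SINAD; a quick sanity check (this is the standard textbook relation, not something specific to this design):

```python
def enob_from_sinad(sinad_db):
    """Effective Number of Bits from measured SINAD in dB:
    ENOB = (SINAD - 1.76) / 6.02."""
    return (sinad_db - 1.76) / 6.02

def sinad_from_enob(enob_bits):
    """Inverse relation: SINAD in dB implied by a given ENOB."""
    return enob_bits * 6.02 + 1.76

# the reported 6.86-bit ENOB at 50 MSPS implies a SINAD of roughly 43 dB
sinad_50msps = sinad_from_enob(6.86)
```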
Matano, Francesca; Sambucini, Valeria
2016-11-01
In phase II single-arm studies, the response rate of the experimental treatment is typically compared with a fixed target value that should ideally represent the true response rate for the standard of care therapy. Generally, this target value is estimated through previous data, but the inherent variability in the historical response rate is not taken into account. In this paper, we present a Bayesian procedure to construct single-arm two-stage designs that allows incorporation of uncertainty in the response rate of the standard treatment. In both stages, the sample size determination criterion is based on the concepts of conditional and predictive Bayesian power functions. Different kinds of prior distributions, which play different roles in the designs, are introduced, and some guidelines for their elicitation are described. Finally, some numerical results about the performance of the designs are provided and a real data example is illustrated. Copyright © 2016 John Wiley & Sons, Ltd.
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to the solutions of the design problem that are least sensitive to variations in the input random variables.
Robin M. Reich; Hans T. Schreuder
2006-01-01
The sampling strategy involving both statistical and in-place inventory information is presented for the natural resources project of the Green Belt area (Centuron Verde) in the Mexican state of Jalisco. The sampling designs used were a grid-based ground sample of a 90 × 90 m plot and a two-stage stratified sample of 30 × 30 m plots. The data collected were used to...
NASA Astrophysics Data System (ADS)
Toda, Ryo; Murakawa, Satoshi; Fukuyama, Hiroshi
2018-03-01
Sub-mK temperatures are achievable by a copper nuclear demagnetization refrigerator (NDR). Recently, research demands for such an ultra-low temperature environment are increasing not only in condensed matter physics but also in astrophysics. A standard NDR requires a specially designed room, a high-field superconducting magnet, and a high-power dilution refrigerator (DR). And it is a one-shot cooling apparatus. To reduce these requirements, we are developing a compact and continuous NDR with two PrNi5 nuclear stages which occupies only a small space next to an appropriate pre-cooling stage such as DR. PrNi5 has a large magnetic-field enhancement on Pr3+ nuclei due to the strong hyperfine coupling. This enables us to enclose each stage in a miniature superconducting magnet and to locate two such sets in close proximity by surrounding them with high-permeability magnetic shields. The two stages are thermally connected in series to the pre-cooling stage by two Zn superconducting heat switches. A numerical analysis taking account of thermal resistances of all parts and an eddy current heating shows that the lowest sample temperature of 0.8 mK can be maintained continuously under a 10 nW ambient heat leak.
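The cooling step of each nuclear stage can be gauged with the ideal adiabatic demagnetization relation; a rough sketch under strong simplifying assumptions (the field values below are illustrative, and the hyperfine enhancement in PrNi5 means the real behavior is more involved than this single-spin-temperature picture):

```python
import math

def demag_final_temperature(t_initial, b_initial, b_final, b_internal=0.0):
    """Ideal adiabatic demagnetization:
    T_f = T_i * sqrt(B_f**2 + b**2) / sqrt(B_i**2 + b**2),
    where b is an effective internal field that limits the final temperature."""
    return (t_initial * math.sqrt(b_final ** 2 + b_internal ** 2)
            / math.sqrt(b_initial ** 2 + b_internal ** 2))

# illustrative only: precool to 15 mK at 6 T, demagnetize to 50 mT
t_final = demag_final_temperature(15e-3, 6.0, 0.05)  # -> 0.125 mK, ideally
```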
Design of single piece sabot for a single stage gas gun
NASA Astrophysics Data System (ADS)
Vemparala, Vignesh; Mathew, Arun Tom; Rao Koka, Tirumala
2017-11-01
Single piece sabot is a vital part in single stage gas guns for impact testing in aerospace industries. Depending on the type of projectile used, the design of the sabot varies to accommodate the testing equipment. The velocity of the projectile exiting the barrel depends on the material and shape of the sabot used. The material selected for the design of the sabot is rigid polyurethane foam, due to its low elastic modulus and low density. Two samples of rigid PU foam are taken and tests are performed to obtain their exact material properties. These properties are incorporated in numerical simulation to determine the best fit for practical use, since PU foams have a wide range of porosity, which plays a prominent role in deciding the exit velocity and accuracy of the projectile coming out of the barrel. By optimisation, the most suitable material sample can be determined.
ERIC Educational Resources Information Center
Sebro, Negusse Yohannes; Goshu, Ayele Taye
2017-01-01
This study aims to explore Bayesian multilevel modeling to investigate variations of average academic achievement of grade eight school students. A sample of 636 students is randomly selected from 26 private and government schools by a two-stage stratified sampling design. Bayesian method is used to estimate the fixed and random effects. Input and…
NASA Technical Reports Server (NTRS)
Khorram, S.
1977-01-01
Results are presented of a study intended to develop a general location-specific remote-sensing procedure for watershed-wide estimation of water loss to the atmosphere by evaporation and transpiration. The general approach involves a stepwise sequence of required information definition (input data), appropriate sample design, mathematical modeling, and evaluation of results. More specifically, the remote sensing-aided system developed to evaluate evapotranspiration employs a basic two-stage two-phase sample of three information resolution levels. Based on the discussed design, documentation, and feasibility analysis to yield timely, relatively accurate, and cost-effective evapotranspiration estimates on a watershed or subwatershed basis, work is now proceeding to implement this remote sensing-aided system.
Efficiency of a new bioaerosol sampler in sampling Betula pollen for antigen analyses.
Rantio-Lehtimäki, A; Kauppinen, E; Koivikko, A
1987-01-01
A new bioaerosol sampler consisting of a Liu-type atmospheric aerosol sampling inlet, a coarse-particle inertial impactor, a two-stage high-efficiency virtual impactor (aerodynamic particle diameters of the stages: greater than or equal to 8 microns, 8-2.5 microns, and less than 2.5 microns; sampling on filters), and a liquid-cooled condenser was designed, fabricated, and field-tested in sampling birch (Betula) pollen grains and smaller particles containing Betula antigens. Both microscopical (pollen counts) and immunochemical (enzyme-linked immunosorbent assay) analyses of each stage were carried out. The new sampler was significantly more efficient than the Burkard trap, e.g., in sampling particles of Betula pollen size (ca. 25 microns in diameter). This was prominent during pollen peak periods (e.g., on May 19th, 1985, 9482 Betula pollen grains per m³ of air in the virtual impactor versus 2540 in the Burkard trap). Betula antigens were also detected in filter stages where no intact pollen grains were found; in the condenser unit, by contrast, the antigen concentrations were very low.
ERIC Educational Resources Information Center
Haavelsrud, Magnus
A study was designed to test the hypothesis that different communication stages between nations--primitive, traditional, modern, and neomodern--provide important variables for explaining differences in pre-adults' conception of war in different countries. Although the two samples used in the study were drawn from two cultures which fall into the…
A screening questionnaire for convulsive seizures: A three-stage field-validation in rural Bolivia.
Giuliano, Loretta; Cicero, Calogero Edoardo; Crespo Gómez, Elizabeth Blanca; Padilla, Sandra; Bruno, Elisa; Camargo, Mario; Marin, Benoit; Sofia, Vito; Preux, Pierre-Marie; Strohmeyer, Marianne; Bartoloni, Alessandro; Nicoletti, Alessandra
2017-01-01
Epilepsy is one of the most common neurological diseases in Latin American Countries (LAC), and epilepsy associated with convulsive seizures is the most frequent type. Therefore, the detection of convulsive seizures is a priority, but a validated Spanish-language screening tool to detect convulsive seizures is not available. We performed a field validation to evaluate the accuracy of a Spanish-language questionnaire to detect convulsive seizures in rural Bolivia using a three-stage design. The questionnaire was also administered face-to-face, using a two-stage design, to evaluate the difference in accuracy. The study was carried out in the rural communities of the Gran Chaco region. The questionnaire consists of a single screening question directed toward the householders and a confirmatory section administered face-to-face to the index case. Positive subjects underwent a neurological examination to identify false positive and true positive subjects. To estimate the proportion of false negatives, a random sample of about 20% of those who screened negative underwent a neurological evaluation. 792 householders were interviewed, representing a population of 3,562 subjects (52.2% men; mean age 24.5 ± 19.7 years). We found a sensitivity of 76.3% (95% CI 59.8-88.6) with a specificity of 99.6% (95% CI 99.4-99.8). The two-stage design showed only slightly higher sensitivity than the three-stage design. Our screening tool shows good accuracy and can be easily used by trained health workers to quickly screen the population of the rural communities of LAC through the householders using a three-stage design.
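The reported sensitivity and specificity reduce to simple count ratios against the neurological gold standard; a minimal sketch with hypothetical counts (not the Bolivian field data):

```python
def screening_accuracy(tp, fn, tn, fp):
    """Sensitivity and specificity from validation counts:
    tp/fn come from confirmed screened positives, tn/fp from the
    gold-standard evaluation of a sample of screened negatives."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# hypothetical counts for illustration
sens, spec = screening_accuracy(tp=29, fn=9, tn=680, fp=3)
```

In a multi-stage validation like this one, the false-negative count would additionally be scaled up by the inverse of the 20% sampling fraction before computing sensitivity.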
Extending cluster Lot Quality Assurance Sampling designs for surveillance programs
Hund, Lauren; Pagano, Marcello
2014-01-01
Lot quality assurance sampling (LQAS) has a long history of applications in industrial quality control. LQAS is frequently used for rapid surveillance in global health settings, with areas classified as poor or acceptable performance based on the binary classification of an indicator. Historically, LQAS surveys have relied on simple random samples from the population; however, implementing two-stage cluster designs for surveillance sampling is often more cost-effective than simple random sampling. By applying survey sampling results to the binary classification procedure, we develop a simple and flexible non-parametric procedure to incorporate clustering effects into the LQAS sample design to appropriately inflate the sample size, accommodating finite numbers of clusters in the population when relevant. We use this framework to then discuss principled selection of survey design parameters in longitudinal surveillance programs. We apply this framework to design surveys to detect rises in malnutrition prevalence in nutrition surveillance programs in Kenya and South Sudan, accounting for clustering within villages. By combining historical information with data from previous surveys, we design surveys to detect spikes in the childhood malnutrition rate. PMID:24633656
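The sample-size inflation for clustering can be sketched with the standard Kish design effect; this is a generic device rather than the authors' non-parametric procedure, and the cluster size, ICC, and SRS sample size below are illustrative:

```python
import math

def design_effect(cluster_size, icc):
    """Kish design effect for equal clusters of size m: deff = 1 + (m - 1) * icc."""
    return 1 + (cluster_size - 1) * icc

def inflate_lqas_sample_size(n_srs, cluster_size, icc):
    """Inflate an SRS-based LQAS sample size for a two-stage cluster design."""
    return math.ceil(n_srs * design_effect(cluster_size, icc))

# illustrative: 192 children under SRS, 10 sampled per village, ICC = 0.05
n_cluster = inflate_lqas_sample_size(192, cluster_size=10, icc=0.05)  # 279
```

The decision rule (threshold number of "positives") would then be recomputed for the inflated size.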
On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.
ERIC Educational Resources Information Center
Wilcox, Rand R.
1984-01-01
Two-stage multiple-comparison procedures give an exact solution to problems of power and Type I errors, but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)
Hyun, Noorie; Gastwirth, Joseph L; Graubard, Barry I
2018-03-26
Originally, 2-stage group testing was developed for efficiently screening individuals for a disease. In response to the HIV/AIDS epidemic, 1-stage group testing was adopted for estimating prevalences of a single or multiple traits from testing groups of size q, so individuals were not tested. This paper extends the methodology of 1-stage group testing to surveys with sample-weighted complex multistage-cluster designs. Sample-weighted generalized estimating equations are used to estimate the prevalences of categorical traits while accounting for the error rates inherent in the tests. Two difficulties arise when using group testing in complex samples: (1) How does one weight the results of the test on each group, as the sample weights will differ among observations in the same group? Furthermore, if the sample weights are related to positivity of the diagnostic test, then group-level weighting is needed to reduce bias in the prevalence estimation. (2) How does one form groups that will allow accurate estimation of the standard errors of prevalence estimates under multistage-cluster sampling, allowing for intracluster correlation of the test results? We study 5 different grouping methods to address the weighting and cluster sampling aspects of complex designed samples. Finite sample properties of the estimators of prevalences, variances, and confidence interval coverage for these grouping methods are studied using simulations. National Health and Nutrition Examination Survey data are used to illustrate the methods. Copyright © 2018 John Wiley & Sons, Ltd.
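For a perfect test and equal weights, the basic 1-stage group-testing estimator simply inverts the group-positivity probability; a sketch that ignores the paper's weighting and error-rate adjustments:

```python
def prevalence_from_group_tests(positive_groups, total_groups, group_size):
    """Unweighted estimator assuming a perfect test: if individuals are
    independently positive with prevalence p, a group of size q tests
    negative with probability (1 - p)**q, so inverting the observed
    group-positive rate g gives p_hat = 1 - (1 - g)**(1/q)."""
    g = positive_groups / total_groups
    return 1 - (1 - g) ** (1 / group_size)

# illustrative: 30 of 100 groups of size 5 test positive
p_hat = prevalence_from_group_tests(30, 100, group_size=5)  # about 0.069
```

The paper's contribution is precisely what this sketch omits: unequal sample weights within groups, test sensitivity/specificity, and cluster-sampling variance estimation.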
An estimator of the survival function based on the semi-Markov model under dependent censorship.
Lee, Seung-Yeoun; Tsai, Wei-Yann
2005-06-01
Lee and Wolfe (Biometrics vol. 54 pp. 1176-1178, 1998) proposed the two-stage sampling design for testing the assumption of independent censoring, which involves further follow-up of a subset of lost-to-follow-up censored subjects. They also proposed an adjusted estimator for the survivor function for a proportional hazards model under the dependent censoring model. In this paper, a new estimator for the survivor function is proposed for the semi-Markov model under the dependent censorship on the basis of the two-stage sampling data. The consistency and the asymptotic distribution of the proposed estimator are derived. The estimation procedure is illustrated with an example of lung cancer clinical trial and simulation results are reported of the mean squared errors of estimators under a proportional hazards and two different nonproportional hazards models.
A Gas-Spring-Loaded X-Y-Z Stage System for X-ray Microdiffraction Sample Manipulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shu Deming; Cai Zhonghou; Lai, Barry
2007-01-19
We have designed and constructed a gas-spring-loaded x-y-z stage system for x-ray microdiffraction sample manipulation at the Advanced Photon Source XOR 2-ID-D station. The stage system includes three DC-motor-driven linear stages and a gas-spring-based heavy preloading structure, which provides antigravity forces to ensure that the stage system keeps high-positioning performance under variable goniometer orientation. Microdiffraction experiments with this new stage system showed significant sample manipulation performance improvement.
HASA: Hypersonic Aerospace Sizing Analysis for the Preliminary Design of Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Harloff, Gary J.; Berkowitz, Brian M.
1988-01-01
A review of the hypersonic literature indicated that a general weight and sizing analysis was not available for hypersonic orbital, transport, and fighter vehicles. The objective here is to develop such a method for the preliminary design of aerospace vehicles. This report describes the developed methodology and provides examples to illustrate the model, entitled the Hypersonic Aerospace Sizing Analysis (HASA). It can be used to predict the size and weight of hypersonic single-stage and two-stage-to-orbit vehicles and transports, and is also relevant for supersonic transports. HASA is a sizing analysis that determines vehicle length and volume, consistent with body, fuel, structural, and payload weights. The vehicle component weights are obtained from statistical equations for the body, wing, tail, thermal protection system, landing gear, thrust structure, engine, fuel tank, hydraulic system, avionics, electrical system, equipment, payload, and propellant. Sample size and weight predictions are given for the Space Shuttle orbiter and other proposed vehicles, including four hypersonic transports, a Mach 6 fighter, a supersonic transport (SST), a single-stage-to-orbit (SSTO) vehicle, a two-stage Space Shuttle with a booster and an orbiter, and two methane-fueled vehicles.
Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz
2014-07-01
Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if a modification of allocation ratios is allowed as well. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second-stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second-stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
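The inflation mechanism can be demonstrated with a toy Monte Carlo for a single treatment-control comparison: a naive pooled z-test whose second-stage size depends on the interim result no longer holds its nominal one-sided 2.5% level. The adaptation rule below is an illustrative caricature, not the paper's worst-case search:

```python
import math
import random

def naive_adaptive_rejection_rate(n_sims=200_000, n1=50, seed=1):
    """Simulate under the null hypothesis: the interim z1 decides the
    second-stage size, but the final analysis naively treats the pooled
    z-statistic as N(0, 1), ignoring the data-dependent adaptation."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sims):
        z1 = rng.gauss(0.0, 1.0)
        # data-dependent rule: a promising interim keeps stage 2 small
        # (so the favorable z1 dominates); a weak interim buys a large
        # second stage as a "second chance"
        n2 = 10 if z1 > 1.0 else 200
        z2 = rng.gauss(0.0, 1.0)
        z_pooled = (math.sqrt(n1) * z1 + math.sqrt(n2) * z2) / math.sqrt(n1 + n2)
        if z_pooled > 1.96:  # naive one-sided 2.5% critical value
            rejections += 1
    return rejections / n_sims
```

Running this yields a rejection rate noticeably above the nominal 0.025, which is the inflation the paper quantifies (and then bounds via constraints such as fixing the control-group size).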
Estimating Accuracy of Land-Cover Composition From Two-Stage Clustering Sampling
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), ...
An adaptive two-stage dose-response design method for establishing proof of concept.
Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R
2013-01-01
We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
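Stage-wise evidence can be pooled with a standard two-stage combination test; a sketch using the inverse normal rule with prespecified weights (the paper itself combines stages via a conditional error function, so this is a related but simpler device, and the p-values below are illustrative):

```python
import math
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine one-sided stage-wise p-values with fixed weights
    (w1 + w2 = 1); the combination remains valid even when the
    second stage was adapted on first-stage data."""
    nd = NormalDist()
    z = math.sqrt(w1) * nd.inv_cdf(1 - p1) + math.sqrt(w2) * nd.inv_cdf(1 - p2)
    return 1 - nd.cdf(z)

# illustrative stage-wise p-values
p_global = inverse_normal_combination(0.04, 0.03)
```

Global proof of concept would be declared when the combined p-value falls below the prespecified significance level.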
Computational Fluid Dynamics (CFD) Analysis for the Reduction of Impeller Discharge Flow Distortion
NASA Technical Reports Server (NTRS)
Garcia, R.; McConnaughey, P. K.; Eastland, A.
1993-01-01
The use of Computational Fluid Dynamics (CFD) in the design and analysis of high performance rocket engine pumps has increased in recent years. This increase has been aided by the activities of the Marshall Space Flight Center (MSFC) Pump Stage Technology Team (PSTT). The team's goals include assessing the accuracy and efficiency of several methodologies and then applying the appropriate methodology(s) to understand and improve the flow inside a pump. The PSTT's objectives, team membership, and past activities are discussed in Garcia1 and Garcia2. The PSTT is one of three teams that form the NASA/MSFC CFD Consortium for Applications in Propulsion Technology (McConnaughey3). The PSTT first applied CFD in the design of the baseline consortium impeller. This impeller was designed for the Space Transportation Main Engine's (STME) fuel turbopump. The STME fuel pump was designed with three impeller stages because a two-stage design was deemed to pose a high developmental risk. The PSTT used CFD to design an impeller whose performance allowed for a two-stage STME fuel pump design. The availability of this design would have led to a reduction in parts, weight, and cost had the STME reached production. One sample of the baseline consortium impeller was manufactured and tested in a water rig. The test data showed that the impeller performance was as predicted and that a two-stage design for the STME fuel pump was possible with minimal risk. The test data also verified another CFD-predicted characteristic of the design that was not desirable. The classical 'jet-wake' pattern at the impeller discharge was strengthened by two aspects of the design: by the high head coefficient necessary for the required pressure rise and by the relatively few impeller exit blades (12) necessary to reduce manufacturing cost. This 'jet-wake' pattern produces an unsteady loading on the diffuser vanes and has, in past rocket engine programs, led to diffuser structural failure.
In industrial applications, this problem is typically avoided by increasing the space between the impeller and the diffuser to allow the dissipation of this pattern and, hence, the reduction of diffuser vane unsteady loading. This approach leads to small performance losses and, more importantly in rocket engine applications, to significant increases in the pump's size and weight. This latter consideration typically makes this approach unacceptable in high performance rocket engines.
Interim analyses in 2 x 2 crossover trials.
Cook, R J
1995-09-01
A method is presented for performing interim analyses in long term 2 x 2 crossover trials with serial patient entry. The analyses are based on a linear statistic that combines data from individuals observed for one treatment period with data from individuals observed for both periods. The coefficients in this linear combination can be chosen quite arbitrarily, but we focus on variance-based weights to maximize power for tests regarding direct treatment effects. The type I error rate of this procedure is controlled by utilizing the joint distribution of the linear statistics over analysis stages. Methods for performing power and sample size calculations are indicated. A two-stage sequential design involving simultaneous patient entry and a single between-period interim analysis is considered in detail. The power and average number of measurements required for this design are compared to those of the usual crossover trial. The results indicate that, while there is minimal loss in power relative to the usual crossover design in the absence of differential carry-over effects, the proposed design can have substantially greater power when differential carry-over effects are present. The two-stage crossover design can also lead to more economical studies in terms of the expected number of measurements required, due to the potential for early stopping. Attention is directed toward normally distributed responses.
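The variance-based weights described above amount to inverse-variance pooling of the one-period and two-period estimates of the direct treatment effect; a generic sketch (the numeric estimates and variances are illustrative):

```python
def inverse_variance_combine(est_a, var_a, est_b, var_b):
    """Minimum-variance linear combination of two independent unbiased
    estimates of the same effect: weights proportional to 1/variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    estimate = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    variance = 1.0 / (w_a + w_b)
    return estimate, variance

# illustrative: one-period-only patients vs. both-period patients
est, var = inverse_variance_combine(2.0, 1.0, 3.0, 0.5)
```

Because the combined variance is 1/(w_a + w_b), it is never larger than either input variance, which is why these weights maximize power for the direct-effect test.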
ERIC Educational Resources Information Center
Nyambo, Brigitte; Ligate, Elly
2013-01-01
Purpose: To identify and review production and marketing information sources and flows for smallholder cashew (Anacardium occidentale L.) growers in Tanzania and recommend systems improvements for better technology uptake. Design/methodology/approach: Two-stage purposive samples were drawn. First, two districts in the main cashew producing areas,…
An internal pilot design for prospective cancer screening trials with unknown disease prevalence.
Brinton, John T; Ringham, Brandy M; Glueck, Deborah H
2015-10-13
For studies that compare the diagnostic accuracy of two screening tests, the sample size depends on the prevalence of disease in the study population, and on the variance of the outcome. Both parameters may be unknown during the design stage, which makes finding an accurate sample size difficult. To solve this problem, we propose adapting an internal pilot design. In this adapted design, researchers will accrue some percentage of the planned sample size, then estimate both the disease prevalence and the variances of the screening tests. The updated estimates of the disease prevalence and variance are used to conduct a more accurate power and sample size calculation. We demonstrate that in large samples, the adapted internal pilot design produces no Type I inflation. For small samples (N less than 50), we introduce a novel adjustment of the critical value to control the Type I error rate. We apply the method to two proposed prospective cancer screening studies: 1) a small oral cancer screening study in individuals with Fanconi anemia and 2) a large oral cancer screening trial. Conducting an internal pilot study without adjusting the critical value can cause Type I error rate inflation in small samples, but not in large samples. An internal pilot approach usually achieves goal power and, for most studies with sample size greater than 50, requires no Type I error correction. Further, we have provided a flexible and accurate approach to bound Type I error below a goal level for studies with small sample size.
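The re-estimation step can be sketched for a simple two-sample z-test: accrue the pilot fraction, update the nuisance-parameter estimate, and recompute the per-group size. The formula below is the standard normal-approximation sample size, and the variance numbers are illustrative (the paper's diagnostic-accuracy setting also updates the disease prevalence):

```python
import math
from statistics import NormalDist

def per_group_sample_size(sigma_sq, delta, alpha=0.05, power=0.90):
    """Per-group n for a two-sided two-sample z-test:
    n = 2 * (z_{1-alpha/2} + z_{power})**2 * sigma^2 / delta^2."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma_sq / delta ** 2)

# design stage guessed sigma^2 = 1.0; the internal pilot estimated 1.4,
# so the target size is recomputed mid-trial
n_planned = per_group_sample_size(1.0, delta=0.5)   # 85
n_updated = per_group_sample_size(1.4, delta=0.5)   # 118
```

The paper's point is that, for final sizes under about 50, this naive re-estimation needs a critical-value adjustment to keep the Type I error rate at its nominal level.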
Health care planning and education via gaming-simulation: a two-stage experiment.
Gagnon, J H; Greenblat, C S
1977-01-01
A two-stage process of gaming-simulation design was conducted: the first stage of design concerned national planning for hemophilia care; the second stage of design was for gaming-simulation concerning the problems of hemophilia patients and health care providers. The planning design was intended to be adaptable to large-scale planning for a variety of health care problems. The educational game was designed using data developed in designing the planning game. A broad range of policy-makers participated in the planning game.
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
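As a concrete illustration of the accuracy index described above (a hedged sketch, not the paper's exact sequential test): the index is a weighted combination of sensitivity and specificity, with Youden's index as the equal-weight special case J = Se + Sp - 1. The counts and the minimal acceptance level of 0.2 below are invented for the example.

```python
def weighted_accuracy(tp, fn, tn, fp, w=0.5):
    """Weighted combination of sensitivity and specificity."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return w * sens + (1 - w) * spec

def youden_index(tp, fn, tn, fp):
    """Youden's J = sensitivity + specificity - 1."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens + spec - 1

# a stage-1 look might stop for futility if the index falls below
# a minimal level of acceptance (0.2 here is purely illustrative)
j = youden_index(tp=45, fn=5, tn=40, fp=10)
print(j, j >= 0.2)
```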
Duration of Sleep and ADHD Tendency among Adolescents in China
ERIC Educational Resources Information Center
Lam, Lawrence T.; Yang, L.
2008-01-01
Objective: This study investigates the association between duration of sleep and ADHD tendency among adolescents. Method: This population-based health survey uses a two-stage random cluster sampling design. Participants ages 13 to 17 are recruited from the total population of adolescents attending high school in one city of China. Duration of…
Logging utilization in Idaho: Current and past trends
Eric A. Simmons; Todd A. Morgan; Erik C. Berg; Stanley J. Zarnoch; Steven W. Hayes; Mike T. Thompson
2014-01-01
A study of commercial timber-harvesting activities in Idaho was conducted during 2008 and 2011 to characterize current tree utilization, logging operations, and changes from previous Idaho logging utilization studies. A two-stage simple random sampling design was used to select sites and felled trees for measurement within active logging sites. Thirty-three logging...
Selection of the initial design for the two-stage continual reassessment method.
Jia, Xiaoyu; Ivanova, Anastasia; Lee, Shing M
2017-01-01
In the two-stage continual reassessment method (CRM), model-based dose escalation is preceded by a pre-specified escalating sequence starting from the lowest dose level. This is appealing to clinicians because it allows a sufficient number of patients to be assigned to each of the lower dose levels before escalating to higher dose levels. While a theoretical framework for building the two-stage CRM has been proposed, the selection of the initial dose-escalation sequence, generally referred to as the initial design, remains arbitrary, done either by specifying cohorts of three patients or by trial and error through extensive simulations. Motivated by an ongoing oncology dose-finding study for which clinicians explicitly stated their desire to assign at least one patient to each of the lower dose levels, we propose a systematic approach for selecting the initial design for the two-stage CRM. The initial design obtained using the proposed algorithm yields better operating characteristics than an initial design based on cohorts of three with a calibrated CRM. The proposed algorithm simplifies the selection of the initial design for the two-stage CRM. Moreover, initial designs to be used as references when planning a two-stage CRM are provided.
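The model-based stage that follows the initial escalation sequence can be sketched as follows. This is a generic one-parameter power-model CRM with a grid-based posterior, offered as an assumption about the general method rather than the authors' calibration algorithm; the skeleton, the prior variance (1.34, a common default), and the target toxicity rate are illustrative.

```python
import math

def crm_next_dose(skeleton, doses, tox, target=0.25,
                  prior_sd=math.sqrt(1.34)):
    """Power-model CRM: p_i(a) = skeleton[i] ** exp(a), a ~ N(0, prior_sd^2).
    Returns the dose whose posterior-mean toxicity is closest to target."""
    grid = [-4 + 8 * k / 400 for k in range(401)]
    post = []
    for a in grid:
        loglik = -0.5 * (a / prior_sd) ** 2  # log prior (up to a constant)
        for d, y in zip(doses, tox):
            p = skeleton[d] ** math.exp(a)
            loglik += math.log(p) if y else math.log(1 - p)
        post.append(math.exp(loglik))
    total = sum(post)
    # posterior mean toxicity probability at each dose level
    pbar = [sum(w * skeleton[i] ** math.exp(a)
                for a, w in zip(grid, post)) / total
            for i in range(len(skeleton))]
    return min(range(len(skeleton)), key=lambda i: abs(pbar[i] - target))

skeleton = [0.05, 0.12, 0.25, 0.40]
# stage 1 escalation assigned doses 0, 1, 2; one toxicity seen at dose 2
print(crm_next_dose(skeleton, doses=[0, 1, 2], tox=[0, 0, 1]))
```

After each new cohort, the observed doses and outcomes are appended and the recommendation is recomputed.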
Maurer, Willi; Jones, Byron; Chen, Ying
2018-05-10
In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided tests (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size re-estimation after the first stage using an interim estimate of the within-subject variability. One method and three variations of doing this were proposed by Potvin et al. Using simulation, Potvin et al assessed the operating characteristics, including the empirical type I error rate, of the four variations (called Methods A, B, C, and D), and recommended Methods B and C. However, none of these four variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size re-estimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size re-estimation step is based on significance levels and power requirements that are conditional on the first-stage results. This necessitates a discussion and exploitation of the peculiar properties of the power curve of the TOST procedure. We illustrate our approach with an example based on a real ABE study and compare the operating characteristics of our proposed method with those of Method B of Potvin et al. Copyright © 2018 John Wiley & Sons, Ltd.
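The TOST procedure referred to above can be sketched as two one-sided tests of the log-scale treatment difference against the conventional 0.8-1.25 ABE limits. This is a simplified illustration using a normal approximation in place of the t-distribution used in practice; the effect estimate and standard error below are invented.

```python
from math import log
from statistics import NormalDist

def tost_abe(diff_log, se, alpha=0.05, lower=log(0.8), upper=log(1.25)):
    """Two one-sided tests on the log scale.
    Returns (p_lower, p_upper); ABE is concluded if both are < alpha."""
    nd = NormalDist()
    p_lo = 1 - nd.cdf((diff_log - lower) / se)  # H0: diff <= log(0.8)
    p_hi = nd.cdf((diff_log - upper) / se)      # H0: diff >= log(1.25)
    return p_lo, p_hi

# hypothetical interim estimate: log-ratio 0.05 with standard error 0.06
p1, p2 = tost_abe(diff_log=0.05, se=0.06)
print(max(p1, p2) < 0.05)  # both one-sided nulls rejected -> conclude ABE
```

The power curve of this procedure is what the paper exploits: it is bounded above and flat near the equivalence limits, which is why a naive interim re-estimation can inflate the type I error rate.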
A novel modular ANN architecture for efficient monitoring of gases/odours in real-time
NASA Astrophysics Data System (ADS)
Mishra, A.; Rajput, N. S.
2018-04-01
Data pre-processing is widely used to enhance the classification of gases. However, it suppresses the concentration variance among different gas samples. The classical solution of using a single artificial neural network (ANN) architecture is also inefficient and yields degraded quantification. In this paper, a novel modular ANN design is proposed to provide an efficient and scalable solution in real time. Two separate ANN blocks, a classifier block and a quantifier block, are used to provide efficient and scalable gas monitoring in real time. The classifier ANN consists of two stages. In the first stage, Net 1-NDSRT is trained to transform raw sensor responses into corresponding virtual multi-sensor responses using normalized difference sensor response transformation (NDSRT). These responses are fed to the second stage (Net 2-classifier), which is trained to classify gas samples into their respective classes. The quantifier block has parallel ANN modules, multiplexed to quantify each gas. Thus, the classifier ANN decides the class and the quantifier ANN decides the exact quantity of the gas/odour present in the respective sample of that class.
Two-Stage Variable Sample-Rate Conversion System
NASA Technical Reports Server (NTRS)
Tkacenko, Andre
2009-01-01
A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
Nagra, Navraj S; Hamilton, Thomas W; Ganatra, Sameer; Murray, David W; Pandit, Hemant
2016-10-01
Infection complicating total knee arthroplasty (TKA) has serious implications. Traditionally, the debate on whether one- or two-stage exchange arthroplasty is the optimal management of infected TKA has favoured two-stage procedures; however, a paradigm shift in opinion is emerging. This study aimed to establish whether current evidence supports one-stage revision for managing infected TKA, based on reinfection rates and functional outcomes post-surgery. The MEDLINE/PubMed and CENTRAL databases were reviewed for studies that compared one- and two-stage exchange arthroplasty TKA in more than ten patients with a minimum 2-year follow-up. From an initial sample of 796, five cohort studies with a total of 231 patients (46 single-stage/185 two-stage; median patient age 66 years, range 61-71 years) met the inclusion criteria. Overall, there were no significant differences in risk of reinfection following one- or two-stage exchange arthroplasty (OR -0.06, 95% confidence interval -0.13, 0.01). Subgroup analysis revealed that in studies published since 2000, one-stage procedures have a significantly lower reinfection rate. One study investigated functional outcomes and reported that one-stage surgery was associated with superior functional outcomes. Scarcity of data, inconsistent study designs, and disparities in surgical technique and antibiotic regimes limit the recommendations that can be made. Recent studies suggest that one-stage exchange arthroplasty may provide superior outcomes, including lower reinfection rates and superior function, in selected patients. Clinically, for some patients, one-stage exchange arthroplasty may represent the optimum treatment; however, patient selection criteria and key components of surgical and post-operative antimicrobial management remain to be defined. Level of evidence: III.
Methodological issues with adaptation of clinical trial design.
Hung, H M James; Wang, Sue-Jane; O'Neill, Robert T
2006-01-01
Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, even though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation, such as sample size re-estimation, only the postulated value of a parameter used to plan the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, such as changing the primary endpoint or dropping a treatment arm. For sample size re-estimation, we contrast an adaptive test that weights the two-stage test statistics with the statistical information given by the original design against the original sample mean test with a properly corrected critical value. We point out the difficulty of planning a confirmatory trial based on the crude information generated by exploratory trials. With regard to selecting a primary endpoint, we argue that a selection process that allows switching from one endpoint to the other using the internal data of the trial is unlikely to gain a power advantage over the simple process of selecting one of the two endpoints by testing both with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, distributing the remaining sample size of the discontinued arm to the other treatment arms can substantially improve the statistical power of identifying a superior treatment arm. A common and difficult methodological issue is how to select an adaptation rule at the trial planning stage. Pre-specification of the adaptation rule is important for practicality. Changing the originally intended hypothesis using the internal data of the trial raises great concern among clinical trial researchers.
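The weighted two-stage test mentioned in this abstract can be illustrated with the inverse-normal combination rule: stage-wise z-statistics are combined with weights fixed by the originally planned stage sizes, so the combined statistic remains standard normal under the null even if the stage 2 sample size is re-estimated at the interim. The numbers below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def combined_z(z1, z2, n1_planned, n2_planned):
    """Inverse-normal combination with weights fixed by the ORIGINAL design.
    Since w1^2 + w2^2 = 1, the result is N(0, 1) under the null."""
    w1 = sqrt(n1_planned / (n1_planned + n2_planned))
    w2 = sqrt(n2_planned / (n1_planned + n2_planned))
    return w1 * z1 + w2 * z2

# hypothetical stage-wise statistics from an equally weighted design
z = combined_z(z1=1.5, z2=2.0, n1_planned=50, n2_planned=50)
print(z > NormalDist().inv_cdf(0.975))  # compare to the 1.96 critical value
```

The key design point is that the weights come from the planned, not the realized, stage sizes; re-weighting by the actual second-stage size after an adaptive change would break type I error control.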
Paradigms for adaptive statistical information designs: practical experiences and strategies.
Wang, Sue-Jane; Hung, H M James; O'Neill, Robert
2012-11-10
In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances in adaptive designs have been abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived as more efficient and more cost-effective than the fixed design paradigm for drug development. Much interest in adaptive designs centres on studies with two stages, where stage 1 is exploratory and stage 2 depends upon stage 1 results, but where the data of both stages will be combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including what are exploratory goals and what are the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize the confusion between the goals of preliminary learning phases of drug development, which are inherently substantially uncertain, and the definitive inference-based phases of drug development. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory study setting, we underscore the value of adaptive designs when used in exploratory trials to improve the planning of subsequent A&WC trials. One type of adaptation that is receiving attention is the re-estimation of the sample size during the course of the trial. We refer to this type of adaptation as an adaptive statistical information design.
Specifically, a case example is used to illustrate how challenging it is to plan a confirmatory adaptive statistical information design. We highlight the substantial risk of planning the sample size for confirmatory trials when information is very uninformative and stipulate the advantages of adaptive statistical information designs for planning exploratory trials. Practical experiences and strategies as lessons learned from more recent adaptive design proposals will be discussed to pinpoint the improved utilities of adaptive design clinical trials and their potential to increase the chance of a successful drug development. Published 2012. This article is a US Government work and is in the public domain in the USA.
The design of two-stage-to-orbit vehicles
NASA Technical Reports Server (NTRS)
1991-01-01
Two separate student design groups developed conceptual designs for a two-stage-to-orbit vehicle, with each design group consisting of a carrier team and an orbiter team. A two-stage-to-orbit system is considered in the event that single-stage-to-orbit is deemed not feasible in the foreseeable future; the two-stage system would also be used as a complement to an already existing heavy lift vehicle. The design specifications given are to lift a 10,000-lb payload, 27 ft long by 10 ft in diameter, to low Earth orbit (300 n.m.) using an air-breathing carrier configuration that will take off horizontally within 15,000 ft. The staging Mach number and altitude were to be determined by the design groups. One group designed a delta wing/body carrier with the orbiter nested within the fuselage of the carrier, and the other group produced a blended cranked-delta wing/body carrier with the orbiter in the more conventional piggyback configuration. Each carrier used liquid-hydrogen-fueled turbofan-ramjet engines, with data provided by the General Electric Aircraft Engine Group. While one orbiter used a full-scale Space Shuttle Main Engine (SSME), the other orbiter employed a half-scale SSME coupled with scramjet engines, with data again provided by General Electric. The two groups' conceptual designs, along with the technical trade-offs, difficulties, and details that surfaced during the design process, are presented.
Conceptual design of a two-stage-to-orbit vehicle
NASA Technical Reports Server (NTRS)
1991-01-01
A conceptual design study of a two-stage-to-orbit vehicle is presented. Three configurations were initially investigated with one configuration selected for further development. The major objective was to place a 20,000-lb payload into a low Earth orbit using a two-stage vehicle. The first stage used air-breathing engines and employed a horizontal takeoff, while the second stage used rocket engines to achieve a 250-n.m. orbit. A two-stage-to-orbit vehicle seems a viable option for the next-generation space shuttle.
NASA Astrophysics Data System (ADS)
Saeed, O.; Duru, L.; Yulin, D.
2018-05-01
A proposed microfluidic design has been fabricated and simulated using COMSOL Multiphysics software, based on the two physical models included in this design. The device's ability to create a narrow stream of the core sample by controlling the sheath flow rates Qs1 and Qs2 in both peripheral channels was investigated. The main target of this paper is to study the possibility of combining the hydrodynamic and magnetic techniques in order to achieve a high rate of cancer cell separation from a cell mixture and/or buffer sample. The study was conducted in two stages: first, the effects of the sheath flow rates (Qs1 and Qs2) on the focusing of the sample stream were studied, to find the optimal operating conditions of the proposed device and its capability for cell focusing; then, the magnetic mechanism was utilized to finalize the separation of the pre-labelled cells.
The accuracy of the National Land Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or a...
NASA Technical Reports Server (NTRS)
Reid, L.; Moore, R. D.
1978-01-01
The detailed design and overall performances of four inlet stages for an advanced core compressor are presented. These four stages represent two levels of design total pressure ratio (1.82 and 2.05), two levels of rotor aspect ratio (1.19 and 1.63), and two levels of stator aspect ratio (1.26 and 1.78). The individual stages were tested over the stable operating flow range at 70, 90, and 100 percent of design speed. The performances of the low aspect ratio configurations were substantially better than those of the high aspect ratio configurations. The two low aspect ratio configurations achieved peak efficiencies of 0.876 and 0.872 and corresponding stage efficiencies of 0.845 and 0.840. The high aspect ratio configurations achieved peak efficiencies of 0.851 and 0.849 and corresponding stage efficiencies of 0.821 and 0.831.
NASA Astrophysics Data System (ADS)
Pankhurst, M. J.; Fowler, R.; Courtois, L.; Nonni, S.; Zuddas, F.; Atwood, R. C.; Davis, G. R.; Lee, P. D.
2018-01-01
We present new software allowing significantly improved quantitative mapping of the three-dimensional density distribution of objects using laboratory-source polychromatic X-rays via a beam characterisation approach (cf. filtering or comparison to phantoms). One key advantage is that a precise representation of the specimen material is not required. The method exploits well-established, widely available, non-destructive and increasingly accessible laboratory-source X-ray tomography. Beam characterisation is performed in two stages: (1) projection data are collected through a range of known materials utilising a novel hardware design integrated into the rotation stage; and (2) a Python code optimises a spectral response model of the system. We provide hardware designs for use with a rotation stage able to be tilted, yet the concept is easily adaptable to virtually any laboratory system and sample, and implicitly corrects the image artefact known as beam hardening.
A recourse-based solution approach to the design of fuel cell aeropropulsion systems
NASA Astrophysics Data System (ADS)
Choi, Taeyun Paul
In order to formulate a nondeterministic solution approach that capitalizes on the practice of compensatory design, this research introduces the notion of recourse. Within the context of engineering an aerospace system, recourse is defined as a set of corrective actions that can be implemented in stages later than the current design phase to keep critical system-level figures of merit within the desired target ranges, albeit at some penalty. Recourse programs also introduce the concept of stages to optimization formulations, and allow each stage to encompass as many sequences or events as determined necessary to solve the problem at hand. A two-part strategy, which partitions the design activities into stages, is proposed to model the bi-phasal nature of recourse. The first stage is defined as the time period in which an a priori design is identified before the exact values of the uncertain parameters are known. In contrast, the second stage is a period occurring some time after the first stage, when an a posteriori correction can be made to the first-stage design, should the realization of uncertainties impart infeasibilities. Penalizing costs are attached to the second-stage corrections to reflect the reality that getting it done right the first time is almost always less costly than fixing it after the fact. Consequently, the goal of the second stage becomes identifying an optimal solution with respect to the second-stage penalty, given the first-stage design, as well as a particular realization of the random parameters. This two-stage model is intended as an analogue of the traditional practice of monitoring and managing key Technical Performance Measures (TPMs) in aerospace systems development settings. One obvious weakness of the two-stage strategy as presented above is its limited applicability as a forecasting tool. 
Not only can the second stage not be invoked without a first-stage starting point, but the second-stage solution also differs from one specific outcome of the uncertainties to another. What would be more valuable, given the time-phased nature of engineering design, is the capability to perform an anticipatory identification of an optimum that is also expected to incur the least costly recourse option in the future. It is argued that such a solution is in fact a more balanced alternative than robust, probabilistically maximized, or chance-constrained solutions, because it represents trading the design optimality in the present with the potential costs of future recourse. Therefore, it is further proposed that the original two-stage model be embedded inside a larger design loop, so that the realization of numerous recourse scenarios can be simulated for a given first-stage design. The repetitive procedure at the second stage is necessary for computing the expected cost of recourse, which is equivalent to its mathematical expectation as per the strong law of large numbers. The feedback loop then communicates this information to the aggregate-level optimizer, whose objective is to minimize the sum total of the first-stage metric and the expected cost of future corrective actions. The resulting stochastic solution is a design that is well-hedged against the uncertain consequences of later design phases, while at the same time being less conservative than a solution designed to more traditional deterministic standards. As a proof-of-concept demonstration, the recourse-based solution approach is presented as applied to a contemporary aerospace engineering problem of interest: the integration of fuel cell technology into uninhabited aerial systems.
The creation of a simulation environment capable of designing three system alternatives based on Proton Exchange Membrane Fuel Cell (PEMFC) technology and another three systems leveraging upon Solid Oxide Fuel Cell (SOFC) technology is presented as the means to notionally emulate the development process of this revolutionary aeropropulsion method. Notable findings from the deterministic trade studies and algorithmic investigation include the incompatibility of the SOFC based architectures with the conceived maritime border patrol mission, as well as the thermodynamic scalability of the PEMFC based alternatives. It is the latter finding which justifies the usage of the more practical specific-parameter based approach in synthesizing the design results at the propulsion level into the overall aircraft sizing framework. The ensuing presentation on the stochastic portion of the implementation outlines how the selective applications of certain Design of Experiments, constrained optimization, Surrogate Modeling, and Monte Carlo sampling techniques enable the visualization of the objective function space. The particular formulations of the design stages, recourse, and uncertainties proposed in this research are shown to result in solutions that are well compromised between unfounded optimism and unwarranted conservatism. In all stochastic optimization cases, the Value of Stochastic Solution (VSS) proves to be an intuitively appealing measure of accounting for recourse-causing uncertainties in an aerospace systems design environment. (Abstract shortened by UMI.)
Effect of two-stage sintering on dielectric properties of BaTi0.9Zr0.1O3 ceramics
NASA Astrophysics Data System (ADS)
Rani, Rekha; Rani, Renu; Kumar, Parveen; Juneja, J. K.; Raina, K. K.; Prakash, Chandra
2011-09-01
The effect of two-stage sintering on the dielectric properties of BaTi0.9Zr0.1O3 ceramics prepared by solid state route was investigated and is presented here. It has been found that under suitable two-stage sintering conditions, dense BaTi0.9Zr0.1O3 ceramics with improved electrical properties can be synthesized. The density was found to have a value of 5.49 g cc-1 for normally sintered samples, whereas in the case of the two-stage sintered sample it was 5.85 g cc-1. Dielectric measurements were done as a function of frequency and temperature. A small decrease in the Curie temperature was observed with modification in dielectric loss for two-stage sintered ceramic samples.
NASA Astrophysics Data System (ADS)
Sanz, Miguel; Ramos, Gonzalo; Moral, Andoni; Pérez, Carlos; Belenguer, Tomás; del Rosario Canchal, María; Zuluaga, Pablo; Rodriguez, Jose Antonio; Santiago, Amaia; Rull, Fernando; Instituto Nacional de Técnica Aeroespacial (INTA); Ingeniería de Sistemas para la Defesa de España S.A. (ISDEFE)
2016-10-01
Raman Laser Spectrometer (RLS) is one of the Pasteur payload instruments of the ExoMars mission, within ESA's Aurora Exploration Programme, and will perform Raman spectroscopy for the first time on a planetary mission. RLS is composed of the SPU (Spectrometer Unit), iOH (Internal Optical Head), and ICEU (Instrument Control and Excitation Unit). The iOH focuses the excitation laser on the samples (excitation path) and collects the Raman emission from the sample (collection path, composed of a collimation system and a filtering system). The original design presented a high level of laser trace reaching the detector; although a certain level of laser trace was required for calibration purposes, the high level degraded the signal-to-noise ratio, confounding some Raman peaks. The investigation revealing that the laser trace was not properly filtered, as well as the resulting opto-mechanical redesign of the iOH, are reported here. After studying the optical density (OD) of the long-pass filters as a function of the distance from the filtering stage to the detector, it was decided to evaluate a new set of filters (notch filters). Finally, to minimize the laser trace, a new collection path design was required, in which the collimation and filtering stages are separated into two barrels and a different kind of filter is used. The distance between the filters and the first lens of the collimation stage was increased, increasing the OD. With this new design and two notch filters, the laser trace was reduced to acceptable values, as can be observed in the functional test comparison also reported in this paper.
Watershed-based survey designs
Detenbeck, N.E.; Cincotta, D.; Denver, J.M.; Greenlee, S.K.; Olsen, A.R.; Pitchford, A.M.
2005-01-01
Watershed-based sampling design and assessment tools help serve the multiple goals for water quality monitoring required under the Clean Water Act, including assessment of regional conditions to meet Section 305(b), identification of impaired water bodies or watersheds to meet Section 303(d), and development of empirical relationships between causes or sources of impairment and biological responses. Creation of GIS databases for hydrography, hydrologically corrected digital elevation models, and hydrologic derivatives such as watershed boundaries and upstream–downstream topology of subcatchments would provide a consistent seamless nationwide framework for these designs. The elements of a watershed-based sample framework can be represented either as a continuous infinite set defined by points along a linear stream network, or as a discrete set of watershed polygons. Watershed-based designs can be developed with existing probabilistic survey methods, including the use of unequal probability weighting, stratification, and two-stage frames for sampling. Case studies for monitoring of Atlantic Coastal Plain streams, West Virginia wadeable streams, and coastal Oregon streams illustrate three different approaches for selecting sites for watershed-based survey designs.
Nishida, Yoshifumi; Kobayashi, Hiromi; Nishida, Hideo; Sugimura, Kazuyuki
2013-05-01
The effect of the design parameters of a return channel on the performance of a multistage centrifugal compressor was numerically investigated, and the shape of the return channel was optimized using a multiobjective optimization method based on a genetic algorithm to improve the performance of the centrifugal compressor. The results of sensitivity analysis using Latin hypercube sampling suggested that the inlet-to-outlet area ratio of the return vane affected the total pressure loss in the return channel, and that the inlet-to-outlet radius ratio of the return vane affected the outlet flow angle from the return vane. Moreover, this analysis suggested that the number of return vanes affected both the loss and the flow angle at the outlet. As a result of optimization, the number of return vanes was increased from 14 to 22 and the area ratio was decreased from 0.71 to 0.66. The radius ratio was also decreased from 2.1 to 2.0. Performance tests on a centrifugal compressor with two return channels (the original design and the optimized design) were carried out using a two-stage test apparatus. The measured flow distribution exhibited a swirl flow in the center region and a reversed swirl flow near the hub and shroud sides. The exit flow of the optimized design was more uniform than that of the original design. For the optimized design, the overall two-stage efficiency and pressure coefficient were increased by 0.7% and 1.5%, respectively. Moreover, the second-stage efficiency and pressure coefficient were increased by 1.0% and 3.2%, respectively. It is considered that the increase in the second-stage efficiency was caused by the increased uniformity of the flow, and the rise in the pressure coefficient was caused by a decrease in the residual swirl flow. It was thus concluded from the numerical and experimental results that the optimized return channel improved the performance of the multistage centrifugal compressor.
Dose finding with the sequential parallel comparison design.
Wang, Jessie J; Ivanova, Anastasia
2014-01-01
The sequential parallel comparison design (SPCD) is a two-stage design recommended for trials with possibly high placebo response. A drug-placebo comparison in the first stage is followed in the second stage by placebo nonresponders being re-randomized between drug and placebo. We describe how SPCD can be used in trials where multiple doses of a drug or multiple treatments are compared with placebo and present two adaptive approaches. We detail how to analyze data in such trials and give recommendations about the allocation proportion to placebo in the two stages of SPCD.
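A common SPCD analysis (sketched here as an assumption, not necessarily the authors' exact test) combines the drug-placebo contrasts from the two stages with a pre-specified weight w: stage 1 uses all randomized patients and stage 2 uses the re-randomized placebo nonresponders. All rates and sizes below are invented.

```python
from math import sqrt

def spcd_statistic(p1_drug, p1_pbo, n1, p2_drug, p2_pbo, n2, w=0.6):
    """Weighted combination of stage-wise response-rate differences.
    n1, n2 are the per-arm sample sizes in stage 1 and stage 2."""
    d1 = p1_drug - p1_pbo                      # stage 1 contrast (all patients)
    d2 = p2_drug - p2_pbo                      # stage 2 contrast (pbo nonresponders)
    # simple variance of a difference of two independent proportions per stage
    v1 = p1_drug * (1 - p1_drug) / n1 + p1_pbo * (1 - p1_pbo) / n1
    v2 = p2_drug * (1 - p2_drug) / n2 + p2_pbo * (1 - p2_pbo) / n2
    est = w * d1 + (1 - w) * d2
    se = sqrt(w ** 2 * v1 + (1 - w) ** 2 * v2)
    return est / se                            # approximately N(0,1) under H0

# hypothetical trial: 40% vs 30% response in stage 1,
# 25% vs 10% among re-randomized placebo nonresponders in stage 2
print(spcd_statistic(0.40, 0.30, 100, 0.25, 0.10, 40))
```

The choice of w (and the allocation proportion to placebo in each stage) is exactly the design question the abstract addresses.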
Advanced two-stage compressor program design of inlet stage
NASA Technical Reports Server (NTRS)
Bryce, C. A.; Paine, C. J.; Mccutcheon, A. R. S.; Tu, R. K.; Perrone, G. L.
1973-01-01
The aerodynamic design of an inlet stage for a two-stage, 10/1 pressure ratio, 2 lb/sec flow rate compressor is discussed. Initially a performance comparison was conducted for an axial, mixed flow and centrifugal second stage. A modified mixed flow configuration with tandem rotors and tandem stators was selected for the inlet stage. The term conical flow compressor was coined to describe a particular type of mixed flow compressor configuration which utilizes axial flow type blading and an increase in radius to increase the work input potential. Design details of the conical flow compressor are described.
US forests are showing increased rates of decline in response to a changing climate
Warren B. Cohen; Zhiqiang Yang; David M. Bell; Stephen V. Stehman
2015-01-01
How vulnerable are US forests to a changing climate? We answer this question using Landsat time series data and a unique interpretation approach, TimeSync, a plot-based Landsat visualization and data collection tool. Original analyses were based on a stratified two-stage cluster sample design that included interpretation of 3858 forested plots. From these data, we...
The sampling design for the National Children's Study (NCS) calls for a population-based, multi-stage, clustered household sampling approach (visit our website for more information on the NCS: www.nationalchildrensstudy.gov). The full sample is designed to be representative of ...
Highly loaded multi-stage fan drive turbine-tandem blade configuration design
NASA Technical Reports Server (NTRS)
Evans, D. C.; Wolfmeyer, G. W.
1972-01-01
The results of the tandem blade configuration design study are reported. The three stage constant-inside-diameter turbine utilizes tandem blading in the stage two and stage three vanes and in the stage three blades. All other bladerows use plain blades. Blading detailed design is discussed, and design data are summarized. Steady-state stresses and vibratory behavior are discussed, and the results of the mechanical design analysis are presented.
THE NORTH CAROLINA HERALD PILOT STUDY
The sampling design for the National Children's Study (NCS) calls for a population-based, multi-stage, clustered household sampling approach. The full sample is designed to be representative of both urban and rural births in the United States, 2007-2011. While other sur...
Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, withi...
NASA Technical Reports Server (NTRS)
1991-01-01
A preliminary design of a two-stage to orbit vehicle was conducted with the requirements to carry a 10,000 pound payload into a 300 mile low-earth orbit using an airbreathing first stage, and to take off and land unassisted on a 15,000 foot runway. The goal of the design analysis was to produce the most efficient vehicle in size and weight which could accomplish the mission requirements. Initial parametric analysis indicated that the weight of the orbiter and the transonic performance of the system were the two parameters that had the largest impact on the design. The resulting system uses a turbofan ramjet powered first stage to propel a scramjet and rocket powered orbiter to the stage point of Mach 6 to 6.5 at an altitude of 90,000 ft.
NASA Astrophysics Data System (ADS)
Dhoble, Abhishek S.; Pullammanappallil, Pratap C.
2014-10-01
Waste treatment and management for manned long-term exploratory missions to the moon will be a challenge due to the longer mission duration. The present study investigated appropriate digester technologies that could be used on the base. The effects of stirring, operation temperature, organic loading rate, and reactor design on the methane production rate and methane yield were studied. For the same duration of digestion, the unmixed digester produced 20-50% more methane than the mixed system. The two-stage design, which separated the soluble components from the solids and treated them separately, had more rapid kinetics than the one-stage system, producing the target methane potential in half the retention time of the one-stage system. The two-stage system degraded 6% more solids than the single-stage system. The two-stage design formed the basis of a prototype digester sized for a four-person crew during a one-year exploratory lunar mission.
Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.
Li, Xianwei; Gao, Huijun
2015-10-01
Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.
Sample Size Methods for Estimating HIV Incidence from Cross-Sectional Surveys
Brookmeyer, Ron
2015-01-01
Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this paper we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this paper at the Biometrics website on Wiley Online Library. PMID:26302040
Sample size methods for estimating HIV incidence from cross-sectional surveys.
Konikoff, Jacob; Brookmeyer, Ron
2015-12-01
Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
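The cross-sectional estimator described in this abstract (the count of individuals in the biomarker-defined early stage, divided by the uninfected count and scaled by the mean duration of that stage) can be sketched in a few lines. The function names and numeric inputs below are illustrative, and the sample-size helper is a standard binomial coefficient-of-variation approximation rather than the authors' exact method:

```python
def estimate_incidence(n_early_stage, n_uninfected, mean_duration_years):
    """Crude cross-sectional incidence estimate, in infections per
    person-year: early-stage count over uninfected count, scaled by the
    mean time spent in the biomarker-defined early stage."""
    return n_early_stage / (n_uninfected * mean_duration_years)


def uninfected_sample_size(p_early, cv_target):
    """Approximate number of uninfected individuals needed so that the
    binomial coefficient of variation of the observed early-stage
    proportion meets a target: cv^2 ~= (1 - p) / (n * p)."""
    return (1.0 - p_early) / (p_early * cv_target ** 2)
```

For example, 50 early-stage cases among 5000 uninfected individuals with a mean early-stage duration of 0.5 years gives an estimated incidence of 0.02 per person-year.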
Automatic lung nodule graph cuts segmentation with deep learning false positive reduction
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Tseng, Tzu-Liang Bill; Qian, Wei
2017-03-01
To automatically detect lung nodules from CT images, we designed a two-stage computer-aided detection (CAD) system. The first stage is graph cuts segmentation to identify and segment the nodule candidates, and the second stage is a convolutional neural network for false positive reduction. The dataset contains 595 CT cases randomly selected from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC/IDRI), and the 305 pulmonary nodules that achieved diagnosis consensus among all four experienced radiologists were our detection targets. Considering each slice as an individual sample, 2844 nodules were included in our database. The graph cuts segmentation was conducted in a two-dimensional manner, and 2733 lung nodule ROIs were successfully identified and segmented. After false positive reduction by a seven-layer convolutional neural network, 2535 nodules remained detected while the false positive rate dropped to 31.6%. The average F-measure of segmented lung nodule tissue is 0.8501.
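The segmentation quality above is reported as an average F-measure (0.8501). As a reminder of how that metric is computed, a minimal sketch (the precision and recall values in the example are illustrative, not taken from the paper):

```python
def f_measure(precision, recall):
    """F1 score: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)
```

For instance, a segmentation with precision 0.8 and recall 0.9 scores about 0.847.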
Misra, S; Zhou, B B; Drozdov, I K; Seo, J; Urban, L; Gyenis, A; Kingsley, S C J; Jones, H; Yazdani, A
2013-10-01
We describe the construction and performance of a scanning tunneling microscope capable of taking maps of the tunneling density of states with sub-atomic spatial resolution at dilution refrigerator temperatures and high (14 T) magnetic fields. The fully ultra-high vacuum system features visual access to a two-sample microscope stage at the end of a bottom-loading dilution refrigerator, which facilitates the transfer of in situ prepared tips and samples. The two-sample stage enables location of the best area of the sample under study and extends the experiment lifetime. The successful thermal anchoring of the microscope, described in detail, is confirmed through a base temperature reading of 20 mK, along with a measured electron temperature of 250 mK. Atomically resolved images, along with complementary vibration measurements, are presented to confirm the effectiveness of the vibration isolation scheme in this instrument. Finally, we demonstrate that the microscope is capable of the same level of performance as typical machines with more modest refrigeration by measuring spectroscopic maps at base temperature both at zero field and in an applied magnetic field.
A gas-loading system for LANL two-stage gas guns
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, Lloyd Lee; Bartram, Brian Douglas; Dattelbaum, Dana Mcgraw
A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez® and Teflon®. Preliminary testing was completed to ensure proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown.
A gas-loading system for LANL two-stage gas guns
NASA Astrophysics Data System (ADS)
Gibson, L. L.; Bartram, B. D.; Dattelbaum, D. M.; Lang, J. M.; Morris, J. S.
2017-01-01
A novel gas loading system was designed for the specific application of remotely loading high purity gases into targets for gas-gun driven plate impact experiments. The high purity gases are loaded into well-defined target configurations to obtain Hugoniot states in the gas phase at greater than ambient pressures. The small volume of the gas samples is challenging, as slight changes in the ambient temperature result in measurable pressure changes. Therefore, the ability to load a gas gun target and continually monitor the sample pressure prior to firing provides the most stable and reliable target fielding approach. We present the design and evaluation of a gas loading system built for the LANL 50 mm bore two-stage light gas gun. Targets for the gun are made of 6061 Al or OFHC Cu, and assembled to form a gas containment cell with a volume of approximately 1.38 cc. The compatibility of materials was a major consideration in the design of the system, particularly for its use with corrosive gases. Piping and valves are stainless steel with wetted seals made from Kalrez® and Teflon®. Preliminary testing was completed to ensure proper flow rate and that the proper safety controls were in place. The system has been used to successfully load Ar, Kr, Xe, and anhydrous ammonia with purities of up to 99.999 percent. The design of the system and example data from the plate impact experiments will be shown.
Mars Relays Satellite Orbit Design Considerations for Global Support of Robotic Surface Missions
NASA Technical Reports Server (NTRS)
Hastrup, Rolf; Cesarone, Robert; Cook, Richard; Knocke, Phillip; McOmber, Robert
1993-01-01
This paper discusses orbit design considerations for Mars relay satellite (MRS)support of globally distributed robotic surface missions. The orbit results reported in this paper are derived from studies of MRS support for two types of Mars robotic surface missions: 1) the mars Environmental Survey (MESUR) mission, which in its current definition would deploy a global network of up to 16 small landers, and 2)a Small Mars Sample Return (SMSR) mission, which included four globally distributed landers, each with a return stage and one or two rovers, and up to four additional sets of lander/rover elements in an extended mission phase.
Conceptual design of a two stage to orbit spacecraft
NASA Technical Reports Server (NTRS)
Armiger, Scott C.; Kwarta, Jennifer S.; Horsley, Kevin B.; Snow, Glenn A.; Koe, Eric C.; Single, Thomas G.
1993-01-01
This project, undertaken through the Advanced Space Design Program, developed a 'Conceptual Design of a Two Stage To Orbit Spacecraft (TSTO).' The design developed utilizes a combination of air breathing and rocket propulsion systems and is fully reusable, with horizontal takeoff and landing capability. The orbiter is carried in an aerodynamically designed bay in the aft section of the booster vehicle to the staging altitude. This TSTO Spacecraft design meets the requirements of replacing the aging Space Shuttle system with a more easily maintained vehicle with more flexible mission capability.
Design and construction of the X-2 two-stage free piston driven expansion tube
NASA Technical Reports Server (NTRS)
Doolan, Con
1995-01-01
This report outlines the design and construction of the X-2 two-stage free piston driven expansion tube. The project has completed its construction phase and the facility has been installed in the new impulsive research laboratory, where commissioning is about to take place. The X-2 uses a unique two-stage driver design which allows a more compact and lower overall cost free piston compressor. The new facility has been constructed in order to examine the performance envelope of the two-stage driver and how well it couples to sub-orbital and super-orbital expansion tubes. Data obtained from these experiments will be used for the design of a much larger facility, X-3, utilizing the same free piston driver concept.
Highly loaded multi-stage fan drive turbine - performance of initial seven configurations
NASA Technical Reports Server (NTRS)
Wolfmeyer, G. W.; Thomas, M. W.
1974-01-01
Experimental results of a three-stage highly loaded fan drive turbine test program are presented. A plain blade turbine, a tandem blade turbine, and a tangentially leaned stator turbine were designed for the same velocity diagram and flowpath. Seven combinations of bladerows were tested to evaluate stage performances and effects of the tandem blading and leaned stator. The plain blade turbine design point total-to-total efficiency was 0.886. The turbine with the stage three leaned stator had the same efficiency with an improved exit swirl profile and increased hub reaction. Two-stage group tests showed that the two-stage turbine with tandem stage two stator had an efficiency of 0.880 compared to 0.868 for the plain blade two-stage turbine.
Reliability based design including future tests and multiagent approaches
NASA Astrophysics Data System (ADS)
Villanueva, Diane
The initial stages of reliability-based design optimization involve the formulation of objective functions and constraints, and building a model to estimate the reliability of the design with quantified uncertainties. However, even experienced hands often overlook important objective functions and constraints that affect the design. In addition, uncertainty reduction measures, such as tests and redesign, are often not considered in reliability calculations during the initial stages. This research considers two areas that concern the design of engineering systems: 1) the trade-off of the effect of a test and post-test redesign on reliability and cost and 2) the search for multiple candidate designs as insurance against unforeseen faults in some designs. In this research, a methodology was developed to estimate the effect of a single future test and post-test redesign on reliability and cost. The methodology uses assumed distributions of computational and experimental errors with redesign rules to simulate alternative future test and redesign outcomes to form a probabilistic estimate of the reliability and cost for a given design. Further, it was explored how modeling a future test and redesign provides a company an opportunity to balance development costs versus performance by simultaneously choosing the design and the post-test redesign rules during the initial design stage. The second area of this research considers the use of dynamic local surrogates, or surrogate-based agents, to locate multiple candidate designs. Surrogate-based global optimization algorithms often require search in multiple candidate regions of design space, expending most of the computation needed to define multiple alternate designs. Thus, focusing on solely locating the best design may be wasteful. We extended adaptive sampling surrogate techniques to locate multiple optima by building local surrogates in sub-regions of the design space to identify optima.
The efficiency of this method was studied, and the method was compared to other surrogate-based optimization methods that aim to locate the global optimum using two two-dimensional test functions, a six-dimensional test function, and a five-dimensional engineering example.
A Bayesian pick-the-winner design in a randomized phase II clinical trial.
Chen, Dung-Tsa; Huang, Po-Yu; Lin, Hui-Yi; Chiappori, Alberto A; Gabrilovich, Dmitry I; Haura, Eric B; Antonia, Scott J; Gray, Jhanelle E
2017-10-24
Many phase II clinical trials evaluate unique experimental drugs/combinations through multi-arm design to expedite the screening process (early termination of ineffective drugs) and to identify the most effective drug (pick the winner) to warrant a phase III trial. Various statistical approaches have been developed for the pick-the-winner design but have been criticized for lack of objective comparison among the drug agents. We developed a Bayesian pick-the-winner design by integrating a Bayesian posterior probability with Simon two-stage design in a randomized two-arm clinical trial. The Bayesian posterior probability, as the rule to pick the winner, is defined as probability of the response rate in one arm higher than in the other arm. The posterior probability aims to determine the winner when both arms pass the second stage of the Simon two-stage design. When both arms are competitive (i.e., both passing the second stage), the Bayesian posterior probability performs better to correctly identify the winner compared with the Fisher exact test in the simulation study. In comparison to a standard two-arm randomized design, the Bayesian pick-the-winner design has a higher power to determine a clear winner. In application to two studies, the approach is able to perform statistical comparison of two treatment arms and provides a winner probability (Bayesian posterior probability) to statistically justify the winning arm. We developed an integrated design that utilizes Bayesian posterior probability, Simon two-stage design, and randomization into a unique setting. It gives objective comparisons between the arms to determine the winner.
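The winner-selection rule described in this abstract, the posterior probability that one arm's response rate exceeds the other's, is commonly approximated by Monte Carlo draws from independent Beta posteriors. The sketch below is a generic version with a flat Beta(1, 1) prior and illustrative response counts, not the authors' exact implementation:

```python
import random

def prob_arm_a_wins(x_a, n_a, x_b, n_b, a0=1.0, b0=1.0, draws=100_000, seed=1):
    """Monte Carlo estimate of P(p_A > p_B | data), where each arm's
    response rate has an independent Beta(a0 + x, b0 + n - x) posterior
    after observing x responders out of n patients."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        p_a = rng.betavariate(a0 + x_a, b0 + n_a - x_a)
        p_b = rng.betavariate(a0 + x_b, b0 + n_b - x_b)
        if p_a > p_b:
            wins += 1
    return wins / draws
```

With equal observed response rates the winner probability hovers near one half; with 15/20 responders versus 5/20 it is close to one, giving an objective basis for declaring a winner when both arms pass the second stage.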
Zhu, Yunzeng; Chen, Yiqi; Meng, Xiangrui; Wang, Jing; Lu, Ying; Xu, Youchun; Cheng, Jing
2017-09-05
Centrifugal microfluidics has been widely applied in the sample-in-answer-out systems for the analyses of nucleic acids, proteins, and small molecules. However, the inherent characteristic of unidirectional fluid propulsion limits the flexibility of these fluidic chips. Providing an extra degree of freedom to allow the unconstrained and reversible pumping of liquid is an effective strategy to address this limitation. In this study, a wirelessly charged centrifugal microfluidic platform with two rotation axes has been constructed and the flow control strategy in such platform with two degrees of freedom was comprehensively studied for the first time. Inductively coupled coils are installed on the platform to achieve wireless power transfer to the spinning stage. A micro servo motor is mounted on both sides of the stage to alter the orientation of the device around a secondary rotation axis on demand during stage rotation. The basic liquid operations on this platform, including directional transport of liquid, valving, metering, and mixing, are comprehensively studied and realized. Finally, a chip for the simultaneous determination of hexavalent chromium [Cr(VI)] and methanal in water samples is designed and tested based on the strategy presented in this paper, demonstrating the potential use of this platform for on-site environmental monitoring, food safety testing, and other life science applications.
Booster propulsion/vehicle impact study
NASA Technical Reports Server (NTRS)
Weldon, Vincent; Dunn, Michael; Fink, Lawrence; Phillips, Dwight; Wetzel, Eric
1988-01-01
The use of hydrogen, RP-1, propane, and methane as fuels for booster engines of launch vehicles is discussed. An automated procedure for integrated launch vehicle, engine sizing, and design optimization was used to define two-stage and single-stage concepts for minimum dry weight. The two-stage vehicles were unmanned and used a flyback booster and partially reusable orbiter. The single-stage designs were fully reusable, manned flyback vehicles. Comparisons of these vehicle designs, showing the effects of using different fuels, as well as sensitivity and trending data, are presented. In addition, the automated design technique utilized for the study is described.
ERIC Educational Resources Information Center
Jenkins, Peter; Palmer, Joanne
2012-01-01
The primary objective of this study was to explore perceptions of UK school counsellors of confidentiality and information sharing in therapeutic work with children and young people, using qualitative methods. The research design employed a two-stage process, using questionnaires and follow-up interviews, with a small, non-random sample of school…
Melvin, Neal R; Poda, Daniel; Sutherland, Robert J
2007-10-01
When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
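The sampling scheme this abstract describes, a uniform random start within one sampling interval followed by equidistant steps across the region, is straightforward to script for a motorized (or manually driven) stage. A minimal sketch with made-up field dimensions and step sizes:

```python
import random

def systematic_random_sites(width, height, dx, dy, rng=None):
    """Systematic random sampling grid: draw a uniform random start inside
    one sampling interval, then place sites at fixed steps dx, dy until the
    region of interest (width x height) is covered."""
    rng = rng or random.Random()
    x0 = rng.uniform(0, dx)
    y0 = rng.uniform(0, dy)
    xs = []
    x = x0
    while x < width:
        xs.append(x)
        x += dx
    ys = []
    y = y0
    while y < height:
        ys.append(y)
        y += dy
    return [(x, y) for x in xs for y in ys]
```

Because only the start point is random, every point of the structure has the same inclusion probability, which is what makes the design unbiased and efficient.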
Efficient logistic regression designs under an imperfect population identifier.
Albert, Paul S; Liu, Aiyi; Nansel, Tonja
2014-03-01
Motivated by actual study designs, this article considers efficient logistic regression designs where the population is identified with a binary test that is subject to diagnostic error. We consider the case where the imperfect test is obtained on all participants, while the gold standard test is measured on a small chosen subsample. Under maximum-likelihood estimation, we evaluate the optimal design in terms of sample selection as well as verification. We show that there may be substantial efficiency gains by choosing a small percentage of individuals who test negative on the imperfect test for inclusion in the sample (e.g., verifying 90% test-positive cases). We also show that a two-stage design may be a good practical alternative to a fixed design in some situations. Under optimal and nearly optimal designs, we compare maximum-likelihood and semi-parametric efficient estimators under correct and misspecified models with simulations. The methodology is illustrated with an analysis from a diabetes behavioral intervention trial. © 2013, The International Biometric Society.
Design and simulation study of the immunization Data Quality Audit (DQA).
Woodard, Stacy; Archer, Linda; Zell, Elizabeth; Ronveaux, Olivier; Birmingham, Maureen
2007-08-01
The goal of the Data Quality Audit (DQA) is to assess whether the Global Alliance for Vaccines and Immunization-funded countries are adequately reporting the number of diphtheria-tetanus-pertussis immunizations given, on which the "shares" are awarded. Given that this sampling design is a modified two-stage cluster sample (modified because a stratified, rather than a simple, random sample of health facilities is obtained from the selected clusters), the formula for the calculation of the standard error for the estimate is unknown. An approximated standard error has been proposed, and the first goal of this simulation is to assess the accuracy of the standard error. Results from the simulations based on hypothetical populations were found not to be representative of the actual DQAs that were conducted. Additional simulations were then conducted on the actual DQA data to better assess the precision of the DQA with both the original and the increased sample sizes.
Impact of ETO propellants on the aerothermodynamic analyses of propulsion components
NASA Technical Reports Server (NTRS)
Civinskas, K. C.; Boyle, R. J.; Mcconnaughey, H. V.
1988-01-01
The operating conditions and the propellant transport properties used in Earth-to-Orbit (ETO) applications affect the aerothermodynamic design of ETO turbomachinery in a number of ways. Some aerodynamic and heat transfer implications of the low molecular weight fluids and high Reynolds number operating conditions on future ETO turbomachinery are discussed. Using the current SSME high pressure fuel turbine as a baseline, the aerothermodynamic comparisons are made for two alternate fuel turbine geometries. The first is a revised first stage rotor blade designed to reduce peak heat transfer. This alternate design resulted in a 23 percent reduction in peak heat transfer. The second design concept was a single stage rotor to yield the same power output as the baseline two stage rotor. Since the rotor tip speed was held constant, the turbine work factor doubled. In this alternate design, the peak heat transfer remained the same as the baseline. While the efficiency of the single stage design was 3.1 points less than the baseline two stage turbine, the design was aerothermodynamically feasible, and may be structurally desirable.
Two-stage solar concentrators based on parabolic troughs: asymmetric versus symmetric designs.
Schmitz, Max; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo
2015-11-20
While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.
Low-Z polymer sample supports for fixed-target serial femtosecond X-ray crystallography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feld, Geoffrey K.; Heymann, Michael; Benner, W. Henry
X-ray free-electron lasers (XFELs) offer a new avenue to the structural probing of complex materials, including biomolecules. Delivery of precious sample to the XFEL beam is a key consideration, as the sample of interest must be serially replaced after each destructive pulse. The fixed-target approach to sample delivery involves depositing samples on a thin-film support and subsequent serial introduction via a translating stage. Some classes of biological materials, including two-dimensional protein crystals, must be introduced on fixed-target supports, as they require a flat surface to prevent sample wrinkling. A series of wafer and transmission electron microscopy (TEM)-style grid supports constructed of low-Z plastic have been custom-designed and produced. Aluminium TEM grid holders were engineered, capable of delivering up to 20 different conventional or plastic TEM grids using fixed-target stages available at the Linac Coherent Light Source (LCLS). As proof-of-principle, X-ray diffraction has been demonstrated from two-dimensional crystals of bacteriorhodopsin and three-dimensional crystals of anthrax toxin protective antigen mounted on these supports at the LCLS. In conclusion, the benefits and limitations of these low-Z fixed-target supports are discussed; it is the authors' belief that they represent a viable and efficient alternative to previously reported fixed-target supports for conducting diffraction studies with XFELs.
Overall, John E; Tonidandel, Scott; Starbuck, Robert R
2006-01-01
Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients fitted to the available repeated measurements for each subject separately serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size estimated or calculated to provide desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under conditions of the proposed study.
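The adjustment rule described above can be sketched in a few lines; the function name and the use of a single fixed expected-dropout proportion are illustrative assumptions, not the authors' notation:

```python
import math

def dropout_adjusted_sample_size(n_dropout_free, dropout_proportion):
    """Add to the dropout-free sample size the number of subjects who
    would be expected to drop from a sample of that original size,
    per the simple rule the abstract describes."""
    expected_dropouts = math.ceil(n_dropout_free * dropout_proportion)
    return n_dropout_free + expected_dropouts
```

For example, with 100 subjects per arm and 25% expected dropout, one would recruit 125 per arm to retain approximately the original power.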
Payne, G.A.
1983-01-01
Streamflow and suspended-sediment-transport data were collected in Garvin Brook watershed in Winona County, southeastern Minnesota, during 1982. The data collection was part of a study to determine the effectiveness of agricultural best-management practices designed to improve rural water quality. The study is part of a Rural Clean Water Program demonstration project undertaken by the U.S. Department of Agriculture. Continuous streamflow data were collected at three gaging stations during March through September 1982. Suspended-sediment samples were collected at two of the gaging stations. Samples were collected manually at weekly intervals. During periods of rapidly changing stage, samples were collected at 30-minute to 12-hour intervals by stage-activated automatic samplers. The samples were analyzed for suspended-sediment concentration and particle-size distribution. Particle-size distributions were also determined for one set of bed-material samples collected at each sediment-sampling site. The streamflow and suspended-sediment-concentration data were used to compute records of mean-daily flow, mean-daily suspended-sediment concentration, and daily suspended-sediment discharge. The daily records are documented and results of analyses for particle-size distribution and of vertical sampling in the stream cross sections are given.
Kent, Peter; Stochkendahl, Mette Jensen; Christensen, Henrik Wulff; Kongsted, Alice
2015-01-01
Recognition of homogeneous subgroups of patients can usefully improve prediction of their outcomes and the targeting of treatment. There are a number of research approaches that have been used to recognise homogeneity in such subgroups and to test their implications. One approach is to use statistical clustering techniques, such as Cluster Analysis or Latent Class Analysis, to detect latent relationships between patient characteristics. Influential patient characteristics can come from diverse domains of health, such as pain, activity limitation, physical impairment, social role participation, psychological factors, biomarkers and imaging. However, such 'whole person' research may result in data-driven subgroups that are complex, difficult to interpret and challenging to recognise clinically. This paper describes a novel approach to applying statistical clustering techniques that may improve the clinical interpretability of derived subgroups and reduce sample size requirements. This approach involves clustering in two sequential stages. The first stage involves clustering within health domains and therefore requires creating as many clustering models as there are health domains in the available data. This first stage produces scoring patterns within each domain. The second stage involves clustering using the scoring patterns from each health domain (from the first stage) to identify subgroups across all domains. We illustrate this using chest pain data from the baseline presentation of 580 patients. The new two-stage clustering resulted in two subgroups that approximated the classic textbook descriptions of musculoskeletal chest pain and atypical angina chest pain. The traditional single-stage clustering resulted in five clusters that were also clinically recognisable but displayed less distinct differences. In this paper, a new approach to using clustering techniques to identify clinically useful subgroups of patients is suggested. 
Research designs, statistical methods and outcome metrics suitable for performing that testing are also described. This approach has potential benefits but requires broad testing, in multiple patient samples, to determine its clinical value. The usefulness of the approach is likely to be context-specific, depending on the characteristics of the available data and the research question being asked of it.
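The two sequential stages (cluster within each health domain, then cluster on the per-domain scoring patterns) can be sketched with a toy k-means. All names, the deterministic farthest-point initialisation, and the use of k-means itself are illustrative simplifications; the paper discusses Cluster Analysis and Latent Class Analysis:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Tiny k-means with deterministic farthest-point initialisation,
    so the sketch is reproducible."""
    centers = [X[0].astype(float)]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)].astype(float))
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def two_stage_cluster(domains, k_domain=2, k_final=2):
    """Stage 1: cluster patients within each health domain separately,
    yielding a scoring pattern per domain. Stage 2: cluster on the
    vector of stage-1 memberships to form subgroups across domains."""
    stage1 = np.column_stack([kmeans(d, k_domain) for d in domains])
    return kmeans(stage1.astype(float), k_final)
```

The key design choice mirrored here is that stage 2 never sees the raw variables, only each patient's per-domain cluster membership, which is what makes the final subgroups easier to interpret domain by domain.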
A common variant in DRD3 receptor is associated with autism spectrum disorder.
de Krom, Mariken; Staal, Wouter G; Ophoff, Roel A; Hendriks, Judith; Buitelaar, Jan; Franke, Barbara; de Jonge, Maretha V; Bolton, Patrick; Collier, David; Curran, Sarah; van Engeland, Herman; van Ree, Jan M
2009-04-01
The presence of specific and common genetic etiologies for autism spectrum disorders (ASD) and attention-deficit/hyperactivity disorder (ADHD) was investigated for 132 candidate genes in a two-stage design-association study. 1,536 single nucleotide polymorphisms (SNPs) covering these candidate genes were tested in ASD (n = 144) and ADHD (n = 110) patients and control subjects (n = 404) from The Netherlands. A second stage was performed with those SNPs from Stage I reaching a significance threshold for association of p < .01 in an independent sample of ASD patients (n = 128) and controls (n = 124) from the United Kingdom and a Dutch ADHD (n = 150) and control (n = 149) sample. No shared association was found between ASD and ADHD. However, in the first and second ASD samples and in a joint statistical analysis, a significant association between SNP rs167771 located in the DRD3 gene was found (joint analysis uncorrected: p = 3.11 x 10(-6); corrected for multiple testing and potential stratification: p = .00162). The DRD3 gene is related to stereotyped behavior, liability to side effects of antipsychotic medication, and movement disorders and may therefore have important clinical implications for ASD.
Investigation of modification design of the fan stage in axial compressor
NASA Astrophysics Data System (ADS)
Zhou, Xun; Yan, Peigang; Han, Wanjin
2010-04-01
The S2 flow-path design method for transonic compressors is used to design a single-stage fan to replace the originally designed blade cascade, which has two-stage transonic fan rotors. In the modification design, the camber line is parameterized by a quartic polynomial curve and the thickness distribution of the blade profile is controlled by a double-cubic polynomial. The inlet flow is thus pre-compressed, and the location and intensity of the shock wave in the supersonic region are controlled so that the new blade profiles have better aerodynamic performance. The computational results show that the new single-stage fan rotor increases the efficiency by two percent at the design condition, and the total pressure ratio is slightly higher than that of the original design. At the same time, it also meets the mass flow rate and geometrical size requirements for the modification design.
Emery, Sherry; Lee, Jungwha; Curry, Susan J; Johnson, Tim; Sporer, Amy K; Mermelstein, Robin; Flay, Brian; Warnecke, Richard
2010-02-01
Surveys of community-based programs are difficult to conduct when there is virtually no information about the number or locations of the programs of interest. This article describes the methodology used by the Helping Young Smokers Quit (HYSQ) initiative to identify and profile community-based youth smoking cessation programs in the absence of a defined sample frame. We developed a two-stage sampling design, with counties as the first-stage probability sampling units. The second stage used snowball sampling to saturation, to identify individuals who administered youth smoking cessation programs across three economic sectors in each county. Multivariate analyses modeled the relationship between program screening, eligibility, and response rates and economic sector and stratification criteria. Cumulative logit models analyzed the relationship between the number of contacts in a county and the number of programs screened, eligible, or profiled in a county. The snowball process yielded 9,983 unique and traceable contacts. Urban and high-income counties yielded significantly more screened program administrators; urban counties produced significantly more eligible programs, but there was no significant association between the county characteristics and program response rate. There is a positive relationship between the number of informants initially located and the number of programs screened, eligible, and profiled in a county. Our strategy to identify youth tobacco cessation programs could be used to create a sample frame for other nonprofit organizations that are difficult to identify due to a lack of existing directories, lists, or other traditional sample frames.
NASA Technical Reports Server (NTRS)
Miser, James W; Stewart, Warner L
1957-01-01
A blade design study is presented for a two-stage air-cooled turbine suitable for flight at a Mach number of 2.5, for which velocity diagrams have been previously obtained. The detailed procedure used in the design of the blades is given. In addition, the design blade shapes, surface velocity distributions, inner and outer wall contours, and other design data are presented. Of all the blade rows, the first-stage rotor has the highest solidity, with a value of 2.289 at the mean section. The second-stage stator also has a high mean-section solidity of 1.927, mainly because of its high inlet whirl. The second-stage rotor has the highest value of the suction-surface diffusion parameter, with a value of 0.151. All other blade rows have values for this parameter under 0.100.
A stratified two-stage sampling design for digital soil mapping in a Mediterranean basin
NASA Astrophysics Data System (ADS)
Blaschek, Michael; Duttmann, Rainer
2015-04-01
The quality of environmental modelling results often depends on reliable soil information. In order to obtain soil data in an efficient manner, several sampling strategies are at hand, depending on the level of prior knowledge and the overall objective of the planned survey. This study focuses on the collection of soil samples considering available continuous secondary information in an undulating, 16 km² river catchment near Ussana in southern Sardinia (Italy). A design-based, stratified, two-stage sampling design has been applied, aiming at the spatial prediction of soil property values at individual locations. The stratification was based on quantiles from density functions of two land-surface parameters - topographic wetness index and potential incoming solar radiation - derived from a digital elevation model. Combined with four main geological units, the applied procedure led to 30 different classes in the given test site. Up to six polygons of each available class were selected randomly, excluding areas smaller than 1 ha to avoid incorrect location of the points in the field. Further exclusion rules were applied before polygon selection, masking out roads and buildings using a 20 m buffer. The selection procedure was repeated ten times and the set of polygons with the best geographical spread was chosen. Finally, exact point locations were selected randomly from inside the chosen polygon features. A second selection based on the same stratification and following the same methodology (selecting one polygon instead of six) was made in order to create an appropriate validation set. Supplementary samples were obtained during a second survey focusing on polygons that had either not been considered at all during the first phase or were not adequately represented with respect to feature size. In total, both field campaigns produced an interpolation set of 156 samples and a validation set of 41 points.
The selection of sample point locations was done using ESRI software (ArcGIS) extended by Hawth's Tools and, later, its replacement, the Geospatial Modelling Environment (GME). 88% of all desired points could actually be reached in the field and were successfully sampled. Our results indicate that the sampled calibration and validation sets are representative of each other and could be successfully used as interpolation data for spatial prediction purposes. With respect to soil textural fractions, for instance, equal multivariate means and variance homogeneity were found for the two datasets, as evidenced by non-significant (P > 0.05) Hotelling T²-test (2.3 with df1 = 3, df2 = 193) and Bartlett's test statistics (6.4 with df = 6). The multivariate prediction of clay, silt and sand content using a neural network residual cokriging approach reached explained variance levels of 56%, 47% and 63%, respectively. Thus, the presented case study is a successful example of using readily available continuous information on soil-forming factors such as geology and relief as stratifying variables for designing sampling schemes in digital soil mapping projects.
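The stratum construction described above, quantile classes of two terrain parameters crossed with geological units, followed by random selection within each stratum, can be sketched as follows. The function names, the three-class split, and the selection of raster cells rather than polygons are illustrative simplifications of the study's 30-class scheme:

```python
import numpy as np

def quantile_classes(values, n_classes):
    """Class 0..n_classes-1 according to which quantile interval of the
    parameter's distribution each cell falls into."""
    cuts = np.quantile(values, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.searchsorted(cuts, values)

def combined_strata(twi, radiation, geology, n_classes=3):
    """Cross quantile classes of topographic wetness index and potential
    incoming solar radiation with geological units into one stratum id."""
    t = quantile_classes(twi, n_classes)
    r = quantile_classes(radiation, n_classes)
    return (geology * n_classes + t) * n_classes + r

def stratified_sample(strata, n_per_stratum, seed=0):
    """Select up to n_per_stratum cells at random from every stratum."""
    rng = np.random.default_rng(seed)
    picks = []
    for s in np.unique(strata):
        idx = np.flatnonzero(strata == s)
        picks.extend(rng.choice(idx, min(n_per_stratum, idx.size), replace=False))
    return np.array(picks)
```

The exclusion rules in the study (minimum polygon size, road/building buffers) would simply mask cells out of `strata` before sampling.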
DEVELOPMENT OF COLD CLIMATE HEAT PUMP USING TWO-STAGE COMPRESSION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Bo; Rice, C Keith; Abdelaziz, Omar
2015-01-01
This paper uses a well-regarded, hardware-based heat pump system model to investigate a two-stage economizing cycle for cold climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high-stage compressor was modelled using a compressor map, and the low-stage compressor was studied experimentally using calorimeter testing. A single-stage heat pump system was modelled as the baseline. The system performance predictions are compared between the two-stage and single-stage systems. Special considerations for designing a cold climate heat pump are addressed at both the system and component levels.
Factorial versus multi-arm multi-stage designs for clinical trials with multiple treatments.
Jaki, Thomas; Vasileiou, Despina
2017-02-20
When several treatments are available for evaluation in a clinical trial, different design options are available. We compare multi-arm multi-stage with factorial designs, and in particular, we will consider a 2 × 2 factorial design, where groups of patients will either take treatments A, B, both or neither. We investigate the performance and characteristics of both types of designs under different scenarios and compare them using both theory and simulations. For the factorial designs, we construct appropriate test statistics to test the hypothesis of no treatment effect against the control group with overall control of the type I error. We study the effect of the choice of the allocation ratios on the critical value and sample size requirements for a target power. We also study how the possibility of an interaction between the two treatments A and B affects type I and type II errors when testing for significance of each of the treatment effects. We present both simulation results and a case study on an osteoarthritis clinical trial. We discover that in an optimal factorial design in terms of minimising the associated critical value, the corresponding allocation ratios differ substantially from those of a balanced design. We also find evidence of potentially big losses in power in factorial designs for moderate deviations from the study design assumptions and little gain compared with multi-arm multi-stage designs when the assumptions hold. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Batistatou, Evridiki; McNamee, Roseanne
2012-12-10
It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage, measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration method, and the simulation extrapolation method. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputation. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation of regression calibration was, however, substantially improved with the use of multiple imputation. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
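For the classical error model with two replicates per subject, the core regression-calibration idea (a textbook sketch, not the paper's or Stata's implementation) is to estimate the error variance from replicate differences, shrink the error-prone mean toward its grand mean by the estimated reliability, and refit the outcome regression:

```python
import numpy as np

def regression_calibration(y, w1, w2):
    """Deattenuate a simple linear exposure effect using two replicate
    measurements w1, w2 of the true exposure per subject."""
    wbar = (w1 + w2) / 2
    var_u = np.var(w1 - w2, ddof=1) / 2           # error variance from replicates
    var_wbar = np.var(wbar, ddof=1)               # = var(X) + var_u / 2
    lam = max(var_wbar - var_u / 2, 0) / var_wbar  # reliability of the mean
    x_hat = wbar.mean() + lam * (wbar - wbar.mean())  # calibrated exposure
    return np.cov(y, x_hat, ddof=1)[0, 1] / np.var(x_hat, ddof=1)
```

A naive regression of y on the replicate mean is attenuated by the factor `lam`; dividing it out recovers the exposure effect, which is exactly what the calibrated fit does.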
NASA Technical Reports Server (NTRS)
Brent, J. A.; Cheatham, J. G.; Nilsen, A. W.
1972-01-01
A conventional rotor and stator, two dual-airfoil tandem rotors, and one dual-airfoil tandem stator were designed. The two tandem rotors were each designed with different percentages of the overall lift produced by the front airfoil. Velocity diagrams and blade leading and trailing edge metal angles selected for the conventional rotor and stator blading were used in the design of the tandem blading. Rotor inlet hub/tip ratio was 0.8. Design values of rotor tip velocity and stage pressure ratio were 757 ft/sec and 1.30, respectively.
Isolating Gas Sensor From Pressure And Temperature Effects
NASA Technical Reports Server (NTRS)
Sprinkle, Danny R.; Chen, Tony T. D.; Chaturvedi, Sushi K.
1994-01-01
Two-stage flow system enables oxygen sensor in system to measure oxygen content of low-pressure, possibly-high-temperature atmosphere in test environment while protecting sensor against possibly high temperature and fluctuations in pressure of atmosphere. Sensor for which flow system designed is zirconium oxide oxygen sensor sampling atmospheres in high-temperature wind tunnels. Also adapted to other gas-analysis instruments that must be isolated from pressure and temperature effects of test environments.
A sampling design framework for monitoring secretive marshbirds
Johnson, D.H.; Gibbs, J.P.; Herzog, M.; Lor, S.; Niemuth, N.D.; Ribic, C.A.; Seamans, M.; Shaffer, T.L.; Shriver, W.G.; Stehman, S.V.; Thompson, W.L.
2009-01-01
A framework for a sampling plan for monitoring marshbird populations in the contiguous 48 states is proposed here. The sampling universe is the breeding habitat (i.e. wetlands) potentially used by marshbirds. Selection protocols would be implemented within large geographical strata, such as Bird Conservation Regions. Site selection will be done using a two-stage cluster sample. Primary sampling units (PSUs) would be land areas, such as legal townships, and would be selected by a procedure such as systematic sampling. Secondary sampling units (SSUs) will be wetlands or portions of wetlands in the PSUs. SSUs will be selected by a randomized, spatially balanced procedure. For analysis, using a variety of methods is encouraged as a means of increasing confidence in the conclusions reached. Additional effort will be required to work out details and implement the plan.
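The two-stage cluster selection described above can be sketched as follows. The names are illustrative, and the randomized spatially balanced SSU selection the plan proposes is replaced here by simple random sampling for brevity:

```python
import random

def systematic_sample(units, n, seed=0):
    """Systematic sample of n primary units with a random start."""
    k = len(units) // n
    start = random.Random(seed).randrange(k)
    return [units[start + i * k] for i in range(n)]

def two_stage_cluster_sample(psu_to_ssus, n_psu, n_ssu, seed=0):
    """Stage 1: systematic sample of PSUs (e.g. legal townships).
    Stage 2: random sample of SSUs (wetlands) within each selected PSU."""
    rng = random.Random(seed)
    psus = systematic_sample(sorted(psu_to_ssus), n_psu, seed)
    return {p: rng.sample(psu_to_ssus[p], min(n_ssu, len(psu_to_ssus[p])))
            for p in psus}
```

The estimator side (variance across PSUs, not across individual wetlands) is what distinguishes this design from a simple random sample of wetlands.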
Incorporating lower grade toxicity information into dose finding designs
Iasonos, Alexia; Zohar, Sarah; O’Quigley, John
2012-01-01
Background Toxicity grades underlie the definition of a dose-limiting toxicity (DLT), but in the majority of phase I designs, the information contained in the individual grades is not used. Some authors have argued that it may be more appropriate to consider a polytomous rather than dichotomous response. Purpose We investigate whether the added information on individual grades can improve the operating characteristics of the Continual Reassessment Method (CRM). Methods We compare, via simulations, the original CRM design for a binary response with two-stage CRM designs that make different use of lower-grade toxicity information. Specifically, we study a two-stage design that utilizes lower-grade toxicities in the first stage only, during the initial non-model-based escalation, and two-stage designs where lower grades are used throughout the trial via explicit models. We postulate a model relating the rates of lower-grade toxicities to the rate of DLTs, or assume the relative rates of low- to high-grade toxicities are unknown. The designs were compared in terms of accuracy, patient allocation and precision. Results Significant gains can be achieved when using grades in the first stage of a two-stage design. Otherwise, only modest improvements are seen when the information on grades is exploited via the use of explicit models whose parameters are known precisely. CRM with some use of grade information increases the number of patients treated at the MTD by approximately 5%. The additional information from lower grades can lead to a small increase in the precision of our estimate of the MTD. Limitations Our comparisons are not exhaustive and it would be worth studying other models and situations. Conclusions Although the gains in performance were not as great as we had hoped, we observed no cases where the performance of CRM was poorer. Our recommendation is that investigators might consider using graded toxicities at the design stage. PMID:21835856
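The binary-response CRM that serves as the comparator above can be sketched with a one-parameter power model updated by Bayes' rule on a grid. This is an illustrative sketch only; the graded-toxicity extensions the paper studies are not shown, and the skeleton, prior, and grid are invented:

```python
import math

def crm_next_dose(skeleton, doses_given, dlt_observed, target=0.25):
    """One-parameter power-model CRM: dose i has DLT rate
    p_i(a) = skeleton[i] ** exp(a). The posterior over a (N(0,1) prior
    on a grid) is updated after each patient, and the dose whose
    posterior-mean DLT rate is closest to the target is recommended."""
    grid = [-3 + 6 * j / 100 for j in range(101)]      # support for a
    prior = [math.exp(-a * a / 2) for a in grid]       # ~ N(0,1), unnormalised
    post = []
    for a, pr in zip(grid, prior):
        lik = pr
        for d, y in zip(doses_given, dlt_observed):
            p = skeleton[d] ** math.exp(a)
            lik *= p if y else (1 - p)
        post.append(lik)
    z = sum(post)
    p_hat = [sum(w / z * (s ** math.exp(a)) for a, w in zip(grid, post))
             for s in skeleton]
    return min(range(len(skeleton)), key=lambda i: abs(p_hat[i] - target))
```

After observing DLTs the posterior shifts toward higher toxicity and the recommendation moves down the dose ladder; clean cohorts move it up.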
Wang, Jin-song; Cao, Pin-lu; Yin, Kun
2015-07-01
Environmentally sound, economical and efficient antifoaming technology is the basis for recycling foam drilling fluid. The present study designed a novel two-stage Laval mechanical foam breaker that primarily uses the vacuum generated by the Coanda effect and the Laval principle to break foam. Numerical simulation results showed that the magnitude and extent of negative pressure in the two-stage Laval foam breaker were larger than those of the conventional foam breaker. Experimental results showed that the foam-breaking efficiency of the two-stage Laval foam breaker was higher than that of the conventional foam breaker as the gas-to-liquid ratio and liquid flow rate changed. The foam-breaking efficiency of the conventional foam breaker decreased rapidly with increasing foam stability, whereas that of the two-stage Laval foam breaker remained unchanged. The foam base fluid can be recycled using the two-stage Laval foam breaker, which would sharply reduce the cost of foam drilling and the waste disposal that adversely affects the environment.
CALiPER Report 20.3: Robustness of LED PAR38 Lamps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poplawski, Michael E.; Royer, Michael P.; Brown, Charles C.
2014-12-01
Three samples of 40 of the Series 20 PAR38 lamps underwent multi-stress testing, whereby samples were subjected to increasing levels of simultaneous thermal, humidity, electrical, and vibrational stress. The results do not explicitly predict expected lifetime or reliability, but they can be compared with one another, as well as with benchmark conventional products, to assess the relative robustness of the product designs. On average, the 32 LED lamp models tested were substantially more robust than the conventional benchmark lamps. As with other performance attributes, however, there was great variability in the robustness and design maturity of the LED lamps. Several LED lamp samples failed within the first one or two levels of the ten-level stress plan, while all three samples of some lamp models completed all ten levels. One potential area of improvement is design maturity, given that more than 25% of the lamp models demonstrated a difference in failure level for the three samples that was greater than or equal to the maximum for the benchmarks. At the same time, the fact that nearly 75% of the lamp models exhibited better design maturity than the benchmarks is noteworthy, given the relative stage of development for the technology.
Altus I aircraft taking off from lakebed runway
NASA Technical Reports Server (NTRS)
1997-01-01
The remotely-piloted Altus I aircraft takes off from Rogers Dry Lake adjacent to NASA's Dryden Flight Research Center, Edwards, Calif. The short series of test flights sponsored by the Naval Postgraduate School in early August, 1997, were designed to demonstrate the ability of the experimental craft to cruise at altitudes above 40,000 feet for sustained durations. On its final flight Aug. 15, the Altus I reached an altitude of 43,500 feet. The Altus I and its sister ship, the Altus II, are variants of the Predator surveillance drone built by General Atomics/Aeronautical Systems, Inc. They are designed for high-altitude, long-duration scientific sampling missions, and are powered by turbocharged piston engines. The Altus I incorporates a single-stage turbocharger, while the Altus II, built for NASA's Environmental Research Aircraft and Sensor Technology program, sports a two-stage turbocharger to enable the craft to fly at altitudes above 55,000 feet.
Altus I aircraft in flight, retracting landing gear after takeoff
NASA Technical Reports Server (NTRS)
1997-01-01
The landing gear of the remotely piloted Altus I aircraft retracts into the fuselage after takeoff from Rogers Dry Lake adjacent to NASA's Dryden Flight Research Center, Edwards, Calif. The short series of test flights sponsored by the Naval Postgraduate School in early August, 1997, was designed to demonstrate the ability of the experimental craft to cruise at altitudes above 40,000 feet for sustained durations. On its final flight Aug. 15, the Altus I reached an altitude of 43,500 feet. The Altus I and its sister ship, the Altus II, are variants of the Predator surveillance drone built by General Atomics/Aeronautical Systems, Inc. They are designed for high-altitude, long-duration scientific sampling missions. The Altus I incorporates a single-stage turbocharger, while the Altus II, built for NASA's Environmental Research Aircraft and Sensor Technology project, sports a two-stage turbocharger to enable the craft to fly at altitudes above 55,000 feet.
Two-Stage Series-Resonant Inverter
NASA Technical Reports Server (NTRS)
Stuart, Thomas A.
1994-01-01
Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Does not require large capacitor across ac bus, like parallel resonant designs. Particularly suitable for use in ac-power-distribution system of aircraft.
Baiyewu, Olusegun; Smith-Gamble, Valerie; Lane, Kathleen A; Gureje, Oye; Gao, Sujuan; Ogunniyi, Adesola; Unverzagt, Frederick W; Hall, Kathleen S; Hendrie, Hugh C
2007-08-01
This is a community-based longitudinal epidemiological comparative study of elderly African Americans in Indianapolis and elderly Yoruba in Ibadan, Nigeria. A two-stage study was designed in which community-based individuals were first screened using the Community Screening Interview for Dementia. The second stage was a full clinical assessment, which included use of the Geriatric Depression Scale, of a smaller sub-sample of individuals selected on the basis of their performance in the screening interview. Prevalence of depression was estimated using sampling weights according to the sampling stratification scheme for clinical assessment. Some 2627 individuals were evaluated at the first stage in Indianapolis and 2806 in Ibadan. All were aged 69 years and over. Of these, 451 (17.2%) underwent clinical assessment in Indianapolis, while 605 (21.6%) were assessed in Ibadan. The prevalence estimates of both mild and severe depression were similar for the two sites (p=0.1273 and p=0.7093): 12.3% (mild depression) and 2.2% (severe depression) in Indianapolis and 19.8% and 1.6% respectively in Ibadan. Some differences were identified in association with demographic characteristics; for example, Ibadan men had a significantly higher prevalence of mild depression than Indianapolis men (p<0.0001). Poor cognitive performance was associated with significantly higher rates of depression in Yoruba (p=0.0039). Prevalence of depression was similar for elderly African Americans and Yoruba despite considerable socioeconomic and cultural differences between these populations.
Baiyewu, Olusegun; Smith-Gamble, Valerie; Lane, Kathleen A.; Gureje, Oye; Gao, Sujuan; Ogunniyi, Adesola; Unverzagt, Frederick W.; Hall, Kathleen S.; Hendrie, Hugh C.
2010-01-01
Background This is a community-based longitudinal epidemiological comparative study of elderly African Americans in Indianapolis and elderly Yoruba in Ibadan, Nigeria. Method A two-stage study was designed in which community-based individuals were first screened using the Community Screening Interview for Dementia. The second stage was a full clinical assessment, which included use of the Geriatric Depression Scale, of a smaller sub-sample of individuals selected on the basis of their performance in the screening interview. Prevalence of depression was estimated using sampling weights according to the sampling stratification scheme for clinical assessment. Results Some 2627 individuals were evaluated at the first stage in Indianapolis and 2806 in Ibadan. All were aged 69 years and over. Of these, 451 (17.2%) underwent clinical assessment in Indianapolis, while 605 (21.6%) were assessed in Ibadan. The prevalence estimates of both mild and severe depression were similar for the two sites (p = 0.1273 and p = 0.7093): 12.3% (mild depression) and 2.2% (severe depression) in Indianapolis and 19.8% and 1.6% respectively in Ibadan. Some differences were identified in association with demographic characteristics; for example, Ibadan men had a significantly higher prevalence of mild depression than Indianapolis men (p < 0.0001). Poor cognitive performance was associated with significantly higher rates of depression in Yoruba (p = 0.0039). Conclusion Prevalence of depression was similar for elderly African Americans and Yoruba despite considerable socioeconomic and cultural differences between these populations. PMID:17506912
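The weighting scheme used in this two-stage (screen, then assess a stratified sub-sample) design is a standard inverse-sampling-fraction estimator: each clinically assessed subject is weighted by the inverse of the probability that someone in his or her screening-performance stratum was selected for assessment. A minimal sketch with invented numbers, not the study's data:

```python
def weighted_prevalence(strata):
    """strata: list of (n_screened, n_assessed, n_cases) tuples, one per
    screening-performance stratum. Each assessed subject carries the
    weight n_screened / n_assessed, so stratum case counts are scaled
    back up to the screened population."""
    weighted_cases = sum(s * c / a for s, a, c in strata)
    total_screened = sum(s for s, _, _ in strata)
    return weighted_cases / total_screened
```

For example, with a low-scoring stratum (1000 screened, 100 assessed, 10 cases) and a high-scoring stratum (500 screened, 250 assessed, 50 cases), the weighted prevalence is 200/1500 ≈ 13.3%, whereas the unweighted rate among the assessed would be misleadingly high.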
Diederich, Adele
2008-02-01
Recently, Diederich and Busemeyer (2006) evaluated three hypotheses formulated as particular versions of a sequential-sampling model to account for the effects of payoffs in a perceptual decision task with time constraints. The bound-change hypothesis states that payoffs affect the distance of the starting position of the decision process to each decision bound. The drift-rate-change hypothesis states that payoffs affect the drift rate of the decision process. The two-stage-processing hypothesis assumes two processes, one for processing payoffs and another for processing stimulus information, and that on a given trial, attention switches from one process to the other. The latter hypothesis gave the best account of their data. The present study investigated two questions: (1) Does the experimental setting influence decisions, and consequently affect the fits of the hypotheses? A task was conducted in two experimental settings--either the time limit or the payoff matrix was held constant within a given block of trials, using three different payoff matrices and four different time limits--in order to answer this question. (2) Could it be that participants neglect payoffs on some trials and stimulus information on others? To investigate this idea, a further hypothesis was considered, the mixture-of-processes hypothesis. Like the two-stage-processing hypothesis, it postulates two processes, one for payoffs and another for stimulus information. However, it differs from the previous hypothesis in assuming that on a given trial exactly one of the processes operates, never both. The present design had no effect on choice probability but may have affected choice response times (RTs). Overall, the two-stage-processing hypothesis gave the best account, with respect both to choice probabilities and to observed mean RTs and mean RT patterns within a choice pair.
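The hypotheses being compared above are all variants of one sequential-sampling (diffusion) process: the bound-change hypothesis moves the starting point between the two decision bounds, while the drift-rate-change hypothesis moves the drift. A toy Euler-Maruyama simulation of a single trial, not the authors' fitted model (parameter names are illustrative):

```python
import random

def diffusion_trial(drift, start, bound, dt=0.001, sd=1.0, seed=None):
    """Simulate one trial: evidence starts at `start` between bounds
    0 and `bound` and accumulates with the given drift until a bound
    is hit. Returns (upper_bound_hit, decision_time)."""
    rng = random.Random(seed)
    x, t = start, 0.0
    while 0.0 < x < bound:
        x += drift * dt + sd * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return x >= bound, t
```

Under this framing, payoffs that shift `start` change choice probabilities without much affecting the shape of the response-time distribution, whereas changes to `drift` affect both, which is what makes the hypotheses empirically distinguishable.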
Stages of Change in Relationship Status Questionnaire: Development and Validation
ERIC Educational Resources Information Center
Ritter, Kathrin; Handsel, Vanessa; Moore, Todd
2016-01-01
This study involved the development of the Stages of Change in Relationship Status (SOCRS) measure in 2 samples of college students. This scale is designed to measure how individuals progress through stages of change when terminating violent and nonviolent intimate relationships. Results indicated that the SOCRS is a reliable and valid tool to…
Lau, Tze Pheng; Roslani, April Camilla; Lian, Lay Hoong; Chai, Hwa Chia; Lee, Ping Chin; Hilmi, Ida; Goh, Khean Lee; Chua, Kek Heng
2014-01-01
Objectives To characterise the mRNA expression patterns of early and advanced stage colorectal adenocarcinomas of Malaysian patients. Design Comparative expression analysis. Setting and participants We performed a combination of annealing control primer (ACP)-based PCR and reverse transcription-quantitative real-time PCR for the identification of differentially expressed genes (DEGs) associated with early and advanced stage primary colorectal tumours. We recruited four paired samples from patients with colorectal cancer (CRC) of Dukes’ A and B for the preliminary differential expression study, and a total of 27 paired samples, ranging from CRC stages I to IV, for subsequent confirmatory test. The tumouric samples were obtained from the patients with CRC undergoing curative surgical resection without preoperative chemoradiotherapy. The recruited patients with CRC were newly diagnosed with CRC, and were not associated with any hereditary syndromes, previously diagnosed cancer or positive family history of CRC. The paired non-cancerous tissue specimens were excised from macroscopically normal colonic mucosa distally located from the colorectal tumours. Primary and secondary outcome measures The differential mRNA expression patterns of early and advanced stage colorectal adenocarcinomas compared with macroscopically normal colonic mucosa were characterised by ACP-based PCR and reverse transcription-quantitative real-time PCR. Results The RPL35, RPS23 and TIMP1 genes were found to be overexpressed in both early and advanced stage colorectal adenocarcinomas (p<0.05). However, the ARPC2 gene was significantly underexpressed in early colorectal adenocarcinomas, while the advanced stage primary colorectal tumours exhibited an additional overexpression of the C6orf173 gene (p<0.05). Conclusions We characterised two distinctive gene expression patterns to aid in the stratification of primary colorectal neoplasms among Malaysian patients with CRC. 
Further work can be done to assess and compare the mRNA expression levels of these identified DEGs between each CRC stage group, stages I–IV. PMID:25107436
Ergonomics intervention in an Iranian television manufacturing industry.
Motamedzade, M; Mohseni, M; Golmohammadi, R; Mahjoob, H
2011-01-01
The primary goal of this study was to use the Strain Index (SI) to assess the risk of developing upper extremity musculoskeletal disorders in a television (TV) manufacturing industry and to evaluate the effectiveness of an educational intervention. The project was designed and implemented in two stages. In the first stage, the SI score was calculated and the Nordic Musculoskeletal Questionnaire (NMQ) was completed. Hazardous jobs were then identified and the risk factors present in these jobs were studied. Based on these data, an educational intervention was designed and implemented. In the second stage, three months after implementing the intervention, the SI score was re-calculated and the NMQ was completed again. Eighty assembly workers in an Iranian TV manufacturing plant were selected using a simple random sampling approach. The results showed that the SI score correlated well with symptoms of musculoskeletal disorders. The prevalence of signs and symptoms of musculoskeletal disorders was significantly reduced after the intervention. A well-conducted intervention program with full participation of all stakeholders can reduce musculoskeletal disorders.
NASA Technical Reports Server (NTRS)
Prahst, Patricia S.; Kulkarni, Sameer; Sohn, Ki H.
2015-01-01
NASA's Environmentally Responsible Aviation (ERA) Program calls for investigation of the technology barriers associated with improved fuel efficiency for large gas turbine engines. Under ERA, the highly loaded core compressor technology program attempts to realize the fuel burn reduction goal by increasing the overall pressure ratio of the compressor to increase the thermal efficiency of the engine. Study engines with overall pressure ratios of 60 to 70 are now being investigated. This means that the high pressure compressor would have to almost double in pressure ratio while keeping a high level of efficiency. NASA and GE teamed to address this challenge by testing the first two stages of an advanced GE compressor designed to meet the requirements of a very high pressure ratio core compressor. Previous test experience with a compressor which included these front two stages indicated a performance deficit relative to design intent. Therefore, the current rig was designed to run in 1-stage and 2-stage configurations in two separate tests to assess whether the bow shock of the second rotor interacting with the upstream stage contributed to the unpredicted performance deficit, or whether the interaction of rotor 1 and stator 1 was the culprit. Thus, the goal was to fully understand the stage 1 performance under isolated and multi-stage conditions, and additionally to provide a detailed aerodynamic data set for CFD validation. Full use was made of steady and unsteady measurement methods to understand fluid dynamics loss source mechanisms due to rotor shock interaction and endwall losses. This paper will present the description of the compressor test article and its measured performance and operability, for both the single stage and two stage configurations. We focus the paper on measurements at 97% corrected speed with design intent vane setting angles.
August, Gerald J; Piehler, Timothy F; Bloomquist, Michael L
2016-01-01
The development of adaptive treatment strategies (ATS) represents the next step in innovating conduct problems prevention programs within a juvenile diversion context. Toward this goal, we present the theoretical rationale, associated methods, and anticipated challenges for a feasibility pilot study in preparation for implementing a full-scale SMART (i.e., sequential, multiple assignment, randomized trial) for conduct problems prevention. The role of a SMART design in constructing ATS is presented. The SMART feasibility pilot study includes a sample of 100 youth (13-17 years of age) identified by law enforcement as early stage offenders and referred for precourt juvenile diversion programming. Prior data on the sample population detail a high level of ethnic diversity and approximately equal representation of both genders. Within the SMART, youth and their families are first randomly assigned to one of two different brief-type evidence-based prevention programs, featuring parent-focused behavioral management or youth-focused strengths-building components. Youth who do not respond sufficiently to brief first-stage programming will be randomly assigned a second time to either extended parent-focused or extended youth-focused second-stage programming. Measures of proximal intervention response and measures of potential candidate tailoring variables for developing ATS within this sample are detailed. Results of the described pilot study will include information regarding feasibility and acceptability of the SMART design. This information will be used to refine a subsequent full-scale SMART. The use of a SMART to develop ATS for prevention will increase the efficiency and effectiveness of prevention programming for youth with developing conduct problems.
Display Considerations For Intravascular Ultrasonic Imaging
NASA Astrophysics Data System (ADS)
Gessert, James M.; Krinke, Charlie; Mallery, John A.; Zalesky, Paul J.
1989-08-01
A display has been developed for intravascular ultrasonic imaging. Design of this display has a primary goal of providing guidance information for therapeutic interventions such as balloons, lasers, and atherectomy devices. Design considerations include catheter configuration, anatomy, acoustic properties of normal and diseased tissue, the catheterization laboratory and operating room environment, acoustic and electrical safety, acoustic data sampling issues, and logistical support such as image measurement, storage, and retrieval. Intravascular imaging is in an early stage of development, so design flexibility and expandability are very important. The display which has been developed is capable of acquisition and display of grey scale images at rates varying from static B-scans to 30 frames per second. It stores images in a 640 × 480 × 8-bit format and is capable of black and white as well as color display in multiple video formats. The design is based on the industry standard PC-AT architecture and consists of two AT-style circuit cards, one for high speed sampling and the other for scan conversion, graphics, and video generation.
Optimal Bayesian Adaptive Design for Test-Item Calibration.
van der Linden, Wim J; Ren, Hao
2015-06-01
An optimal adaptive design for test-item calibration based on Bayesian optimality criteria is presented. The design adapts the choice of field-test items to the examinees taking an operational adaptive test using both the information in the posterior distributions of their ability parameters and the current posterior distributions of the field-test parameters. Different criteria of optimality based on the two types of posterior distributions are possible. The design can be implemented using an MCMC scheme with alternating stages of sampling from the posterior distributions of the test takers' ability parameters and the parameters of the field-test items while reusing samples from earlier posterior distributions of the other parameters. Results from a simulation study demonstrated the feasibility of the proposed MCMC implementation for operational item calibration. A comparison of performances for different optimality criteria showed faster calibration of substantial numbers of items for the criterion of D-optimality relative to A-optimality, a special case of c-optimality, and random assignment of items to the test takers.
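As a rough illustration of the D-optimality criterion in this setting, the sketch below scores each candidate field-test item under a 2PL model by the determinant of its accumulated information matrix plus the information expected from the current examinee, averaged over posterior draws of the examinee's ability. The function names, item values, and simplified criterion are illustrative; the paper's full MCMC scheme is not reproduced:

```python
import numpy as np

def item_info(a, b, theta):
    """Fisher information matrix for the (a, b) parameters of a 2PL item
    at ability theta (rank one for a single response)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    w = p * (1.0 - p)
    d = theta - b
    return w * np.array([[d * d, -a * d], [-a * d, a * a]])

def pick_item(field_items, accumulated_info, theta_draws):
    """Assign the field-test item with the largest D-optimality score:
    det(accumulated information + expected information from this examinee),
    averaging the expectation over posterior draws of ability."""
    def criterion(item_id):
        a, b = field_items[item_id]
        expected = np.mean([item_info(a, b, t) for t in theta_draws], axis=0)
        return np.linalg.det(accumulated_info[item_id] + expected)
    return max(field_items, key=criterion)

rng = np.random.default_rng(0)
theta_draws = rng.normal(0.0, 1.0, 200)          # posterior draws of ability
field_items = {"item1": (1.2, 0.0), "item2": (0.8, 1.5)}
accumulated = {k: np.zeros((2, 2)) for k in field_items}
chosen = pick_item(field_items, accumulated, theta_draws)
```

Averaging the information over many ability draws is what makes the summed matrix full rank; a single response contributes only a rank-one matrix with zero determinant.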
Evaluation of bias and logistics in a survey of adults at increased risk for oral health decrements.
Gilbert, G H; Duncan, R P; Kulley, A M; Coward, R T; Heft, M W
1997-01-01
Designing research to include sufficient respondents in groups at highest risk for oral health decrements can present unique challenges. Our purpose was to evaluate bias and logistics in this survey of adults at increased risk for oral health decrements. We used a telephone survey methodology that employed both listed numbers and random digit dialing to identify dentate persons 45 years old or older and to oversample blacks, poor persons, and residents of nonmetropolitan counties. At a second stage, a subsample of the respondents to the initial telephone screening was selected for further study, which consisted of a baseline in-person interview and a clinical examination. We assessed bias due to: (1) limiting the sample to households with telephones, (2) using predominantly listed numbers instead of random digit dialing, and (3) nonresponse at two stages of data collection. While this approach apparently created some biases in the sample, they were small in magnitude. Specifically, limiting the sample to households with telephones biased the sample overall toward more females, larger households, and fewer functionally impaired persons. Using predominantly listed numbers led to a modest bias toward selection of persons more likely to be younger, healthier, female, have had a recent dental visit, and reside in smaller households. Blacks who were selected randomly at a second stage were more likely to participate in baseline data gathering than their white counterparts. Comparisons of the data obtained in this survey with those from recent national surveys suggest that this methodology for sampling high-risk groups did not substantively bias the sample with respect to two important dental parameters, prevalence of edentulousness and dental care use, nor were conclusions about multivariate associations with dental care recency substantively affected. 
This method of sampling persons at high risk for oral health decrements resulted in only modest bias with respect to the population of interest.
Design and Experimental Performance of a Two Stage Partial Admission Turbine, Task B.1/B.4
NASA Technical Reports Server (NTRS)
Sutton, R. F.; Boynton, J. L.; Akian, R. A.; Shea, Dan; Roschak, Edmund; Rojas, Lou; Orr, Linsey; Davis, Linda; King, Brad; Bubel, Bill
1992-01-01
A three-inch mean diameter, two-stage turbine with partial admission in each stage was experimentally investigated over a range of admissions and angular orientations of admission arcs. Three configurations were tested in which first stage admission varied from 37.4 percent (10 of 29 passages open, 5 per side) to 6.9 percent (2 open, 1 per side). Corresponding second stage admissions were 45.2 percent (14 of 31 passages open, 7 per side) and 12.9 percent (4 open, 2 per side). Angular positions of the second stage admission arcs with respect to the first stage varied over a range of 70 degrees. Design and off-design efficiency and flow characteristics for the three configurations are presented. The results indicated that peak efficiency and the corresponding isentropic velocity ratio decreased as the arcs of admission were decreased. Both efficiency and flow characteristics were sensitive to the second stage nozzle orientation angles.
Research in the design of high-performance reconfigurable systems
NASA Technical Reports Server (NTRS)
Slotnick, D. L.; Mcewan, S. D.; Spry, A. J.
1984-01-01
An initial design for the Bit Processor (BP), referred to in prior reports as the Processing Element or PE, has been completed. Eight BPs, together with their supporting random-access memory, a 64 k x 9 ROM to perform addition, routing logic, and some additional logic, constitute the components of a single stage. An initial stage design is given. Stages may be combined to perform high-speed fixed or floating point arithmetic. Stages can be configured into a range of arithmetic modules that includes bit-serial one- or two-dimensional arrays; one- or two-dimensional arrays of fixed- or floating-point processors; and specialized uniprocessors, such as long-word arithmetic units. One to eight BPs represent a likely initial chip level. The Stage would then correspond to a first-level pluggable module. As both this project and VLSI CAD/CAM progress, however, it is expected that the chip level would migrate upward to the stage and, perhaps, ultimately the box level. The BP RAM, consisting of two banks, holds only operands and indices. Programs are at the box (high-level function) and system level. At the system level, initial effort has been concentrated on specifying the tools needed to evaluate design alternatives.
Moerbeek, Mirjam
2018-01-01
Background This article studies the design of trials that compare three treatment conditions that are delivered by two types of health professionals. The one type of health professional delivers one treatment, and the other type delivers two treatments, hence, this design is a combination of a nested and crossed design. As each health professional treats multiple patients, the data have a nested structure. This nested structure has thus far been ignored in the design of such trials, which may result in an underestimate of the required sample size. In the design stage, the sample sizes should be determined such that a desired power is achieved for each of the three pairwise comparisons, while keeping costs or sample size at a minimum. Methods The statistical model that relates outcome to treatment condition and explicitly takes the nested data structure into account is presented. Mathematical expressions that relate sample size to power are derived for each of the three pairwise comparisons on the basis of this model. The cost-efficient design achieves sufficient power for each pairwise comparison at lowest costs. Alternatively, one may minimize the total number of patients. The sample sizes are found numerically and an Internet application is available for this purpose. The design is also compared to a nested design in which each health professional delivers just one treatment. Results Mathematical expressions show that this design is more efficient than the nested design. For each pairwise comparison, power increases with the number of health professionals and the number of patients per health professional. The methodology of finding a cost-efficient design is illustrated using a trial that compares treatments for social phobia. The optimal sample sizes reflect the costs for training and supervising psychologists and psychiatrists, and the patient-level costs in the three treatment conditions. 
Conclusion This article provides the methodology for designing trials that compare three treatment conditions while taking the nesting of patients within health professionals into account. As such, it helps to avoid underpowered trials. To use the methodology, a priori estimates of the total outcome variances and intraclass correlation coefficients must be obtained from experts’ opinions or findings in the literature. PMID:29316807
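A normal-approximation sketch of the power for one pairwise comparison, inflating each arm's variance by the usual design effect for patients nested within health professionals, conveys the idea. This is a simplified stand-in with made-up inputs, not Moerbeek's exact expressions for the combined nested/crossed design (in particular it ignores that one professional type delivers two treatments):

```python
from math import sqrt
from statistics import NormalDist

def pairwise_power(delta, sigma2, icc, n_prof_a, m_a, n_prof_b, m_b,
                   alpha=0.05):
    """Approximate two-sided power for comparing two treatment arms when
    each arm's mean has variance sigma2 * (1 + (m - 1) * icc) / (n_prof * m),
    the design-effect inflation for cluster size m and intraclass
    correlation icc."""
    nd = NormalDist()
    var_a = sigma2 * (1 + (m_a - 1) * icc) / (n_prof_a * m_a)
    var_b = sigma2 * (1 + (m_b - 1) * icc) / (n_prof_b * m_b)
    se = sqrt(var_a + var_b)
    return nd.cdf(delta / se - nd.inv_cdf(1 - alpha / 2))

# Power rises with more professionals (and with more patients each):
p_small = pairwise_power(delta=0.4, sigma2=1.0, icc=0.05,
                         n_prof_a=10, m_a=8, n_prof_b=10, m_b=8)
p_large = pairwise_power(delta=0.4, sigma2=1.0, icc=0.05,
                         n_prof_a=20, m_a=8, n_prof_b=20, m_b=8)
```

Consistent with the article, ignoring the nesting (setting icc to zero here) overstates the power and hence understates the required sample size.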
Griffiths, Ronald E.; Topping, David J.; Anderson, Robert S.; Hancock, Gregory S.; Melis, Theodore S.
2014-01-01
Management of sediment in rivers downstream from dams requires knowledge of both the sediment supply and downstream sediment transport. In some dam-regulated rivers, the amount of sediment supplied by easily measured major tributaries may overwhelm the amount of sediment supplied by the more difficult to measure lesser tributaries. In this first class of rivers, managers need only know the amount of sediment supplied by these major tributaries. However, in other regulated rivers, the cumulative amount of sediment supplied by the lesser tributaries may approach the total supplied by the major tributaries. The Colorado River downstream from Glen Canyon has been hypothesized to be one such river. If this is correct, then management of sediment in the Colorado River in the part of Glen Canyon National Recreation Area downstream from the dam and in Grand Canyon National Park may require knowledge of the sediment supply from all tributaries. Although two major tributaries, the Paria and Little Colorado Rivers, are well documented as the largest two suppliers of sediment to the Colorado River downstream from Glen Canyon Dam, the contributions of sediment supplied by the ephemeral lesser tributaries of the Colorado River in the lowermost Glen Canyon, and Marble and Grand Canyons are much less constrained. Previous studies have estimated amounts of sediment supplied by these tributaries ranging from very little to almost as much as the amount supplied by the Paria River. Because none of these previous studies relied on direct measurement of sediment transport in any of the ephemeral tributaries in Glen, Marble, or Grand Canyons, there may be significant errors in the magnitudes of sediment supplies estimated during these studies. To reduce the uncertainty in the sediment supply by better constraining the sediment yield of the ephemeral lesser tributaries, the U.S. 
Geological Survey Grand Canyon Monitoring and Research Center established eight sediment-monitoring gaging stations beginning in 2000 on the larger of the previously ungaged tributaries of the Colorado River downstream from Glen Canyon Dam. The sediment-monitoring gaging stations consist of a downward-looking stage sensor and passive suspended-sediment samplers. Two stations are equipped with automatic pump samplers to collect suspended-sediment samples during flood events. Directly measuring discharge and collecting suspended-sediment samples in these remote ephemeral streams during significant sediment-transporting events is nearly impossible; most significant run-off events are short-duration events (lasting minutes to hours) associated with summer thunderstorms. As the remote locations and short duration of these floods make it prohibitively expensive, if not impossible, to directly measure the discharge of water or collect traditional depth-integrated suspended-sediment samples, a method of calculating sediment loads was developed that includes documentation of stream stages by field instrumentation, modeling of discharges associated with these stages, and automatic suspended-sediment measurements. The approach developed is as follows: (1) survey and model flood high-water marks using a two-dimensional hydrodynamic model, (2) create a stage-discharge relation for each site by combining the modeled flood flows with the measured stage record, (3) calculate the discharge record for each site using the stage-discharge relation and the measured stage record, and (4) calculate the instantaneous and cumulative sediment loads using the discharge record and suspended-sediment concentrations measured from samples collected with passive US U-59 samplers and ISCO pump samplers. This paper presents the design of the gaging network and briefly describes the methods used to calculate discharge and sediment loads. 
The design and methods herein can easily be used at other remote locations where discharge and sediment loads are required.
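Steps (3) and (4) of the load-calculation procedure can be sketched as follows; the rating curve, stage record, and concentration values are hypothetical, not data from the actual gaging network:

```python
import numpy as np

def discharge_from_stage(stage_m, rating_stage_m, rating_q_m3s):
    """Step 3: convert the measured stage record to discharge by
    interpolating the site's stage-discharge relation (built in step 2
    from modeled flood flows)."""
    return np.interp(stage_m, rating_stage_m, rating_q_m3s)

def sediment_loads(t_sec, q_m3s, conc_mg_per_l):
    """Step 4: instantaneous load in kg/s and cumulative load in kg.
    1 mg/L x 1 m^3/s = 1 g/s = 1e-3 kg/s; the cumulative load is the
    trapezoidal integral of the instantaneous load over time."""
    inst = q_m3s * conc_mg_per_l * 1e-3
    cumulative = float(np.sum(0.5 * (inst[1:] + inst[:-1]) * np.diff(t_sec)))
    return inst, cumulative

# Hypothetical two-hour flood, stage logged every 15 minutes:
t = np.arange(0, 7201, 900, dtype=float)                 # s
stage = np.array([0.2, 0.8, 1.5, 1.2, 0.9, 0.6, 0.4, 0.3, 0.2])  # m
rating_stage = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # m
rating_q = np.array([0.0, 2.0, 8.0, 20.0, 40.0])         # m^3/s
q = discharge_from_stage(stage, rating_stage, rating_q)
conc = np.full_like(q, 5000.0)                           # mg/L, pump-sampler data
inst_load, total_load_kg = sediment_loads(t, q, conc)
```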
Compound estimation procedures in reliability
NASA Technical Reports Server (NTRS)
Barnes, Ron
1990-01-01
At NASA, components and subsystems of components in the Space Shuttle and Space Station generally go through a number of redesign stages. While data on failures for various design stages are sometimes available, the classical procedures for evaluating reliability only utilize the failure data on the present design stage of the component or subsystem. Often, few or no failures have been recorded on the present design stage. Previously, Bayesian estimators for the reliability of a single component, conditioned on the failure data for the present design, were developed. These new estimators permit NASA to evaluate the reliability, even when few or no failures have been recorded. Point estimates for the latter evaluation were not possible with the classical procedures. Since different design stages of a component (or subsystem) generally have a good deal in common, the development of new statistical procedures for evaluating the reliability, which consider the entire failure record for all design stages, has great intuitive appeal. A typical subsystem consists of a number of different components and each component has evolved through a number of redesign stages. The present investigations considered compound estimation procedures and related models. Such models permit the statistical consideration of all design stages of each component and thus incorporate all the available failure data to obtain estimates for the reliability of the present version of the component (or subsystem). A number of models were considered to estimate the reliability of a component conditioned on its total failure history from two design stages. It was determined that reliability estimators for the present design stage, conditioned on the complete failure history for two design stages have lower risk than the corresponding estimators conditioned only on the most recent design failure data. 
Several models were explored and preliminary models involving the bivariate Poisson distribution and the Consael process (a bivariate Poisson process) were developed. Possible shortcomings of the models are noted. An example is given to illustrate the procedures. These investigations are ongoing with the aim of developing estimators that extend to components (and subsystems) with three or more design stages.
Assessing Compliance-Effect Bias in the Two Stage Least Squares Estimator
ERIC Educational Resources Information Center
Reardon, Sean; Unlu, Fatih; Zhu, Pei; Bloom, Howard
2011-01-01
The proposed paper studies the bias in the two-stage least squares, or 2SLS, estimator that is caused by the compliance-effect covariance (hereafter, the compliance-effect bias). It starts by deriving the formula for the bias in an infinite sample (i.e., in the absence of finite sample bias) under different circumstances. Specifically, it…
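For readers unfamiliar with the estimator itself, a textbook 2SLS with one endogenous treatment indicator and a randomized instrument looks like the following. This is a generic sketch, not the paper's bias derivation; the simulated data and variable names are illustrative:

```python
import numpy as np

def two_stage_least_squares(y, d, z):
    """Stage 1: regress treatment receipt d on instrument z (random
    assignment). Stage 2: regress outcome y on the stage-1 fitted values.
    Returns the estimated effect of d on y."""
    Z = np.column_stack([np.ones_like(z), z])
    d_hat = Z @ np.linalg.lstsq(Z, d, rcond=None)[0]   # first stage
    X = np.column_stack([np.ones_like(d_hat), d_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]     # second stage

rng = np.random.default_rng(0)
n = 50_000
z = rng.integers(0, 2, n).astype(float)                # randomized offer
u = rng.normal(size=n)                                 # unobserved confounder
d = ((0.8 * z + 0.5 * u + rng.normal(size=n)) > 0.5).astype(float)
y = 2.0 * d + u + rng.normal(size=n)                   # true effect = 2
est = two_stage_least_squares(y, d, z)
```

With a constant treatment effect, as simulated here, 2SLS is consistent; when effects vary across individuals and covary with compliance, the estimand becomes a complier-weighted average, which is the source of the compliance-effect bias the paper analyzes.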
Molins, Claudia R.; Sexton, Christopher; Young, John W.; Ashton, Laura V.; Pappert, Ryan; Beard, Charles B.
2014-01-01
Serological assays and a two-tiered test algorithm are recommended for laboratory confirmation of Lyme disease. In the United States, the sensitivity of two-tiered testing using commercially available serology-based assays is dependent on the stage of infection and ranges from 30% in the early localized disease stage to near 100% in late-stage disease. Other variables, including subjectivity in reading Western blots, compliance with two-tiered recommendations, use of different first- and second-tier test combinations, and use of different test samples, all contribute to variation in two-tiered test performance. The availability and use of sample sets from well-characterized Lyme disease patients and controls are needed to better assess the performance of existing tests and for development of improved assays. To address this need, the Centers for Disease Control and Prevention and the National Institutes of Health prospectively collected sera from patients at all stages of Lyme disease, as well as healthy donors and patients with look-alike diseases. Patients and healthy controls were recruited using strict inclusion and exclusion criteria. Samples from all included patients were retrospectively characterized by two-tiered testing. The results from two-tiered testing corroborated the need for novel and improved diagnostics, particularly for laboratory diagnosis of earlier stages of infection. Furthermore, the two-tiered results provide a baseline with samples from well-characterized patients that can be used in comparing the sensitivity and specificity of novel diagnostics. Panels of sera and accompanying clinical and laboratory testing results are now available to Lyme disease serological test users and researchers developing novel tests. PMID:25122862
A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external coefficient or the internal coefficient has a negative influence on the sampling level. The rate of change of the potential market has no significant influence on the sampling level, whereas the repeat-purchase rate has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis gives a complete picture of the interaction of all parameters, which provides a two-stage method to estimate the impact of the relevant parameters when the parameters are inaccurate and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
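The flavor of the model can be sketched with a discrete-time Bass diffusion in which free samples seed initial adopters, plus a grid search over the sampling level. All coefficients, costs, and the one-shot profit objective are illustrative stand-ins for the paper's two-stage optimization with a dynamic potential market:

```python
def bass_adopters(p, q, market, seeded, periods):
    """Cumulative adopters under a discrete-time Bass diffusion, with
    `seeded` free-sample recipients counted as initial adopters.
    p: external (innovation) coefficient; q: internal (imitation) coefficient."""
    cum = float(seeded)
    for _ in range(periods):
        remaining = max(market - cum, 0.0)
        cum += (p + q * cum / market) * remaining
    return cum

def best_sampling_level(p, q, market, periods, margin, unit_cost, grid):
    """Grid-search the free-sample level that maximizes profit: margin on
    eventual adopters minus the cost of the samples given away."""
    def profit(s):
        return margin * bass_adopters(p, q, market, s, periods) - unit_cost * s
    return max(grid, key=profit)

level = best_sampling_level(p=0.01, q=0.35, market=10_000, periods=8,
                            margin=5.0, unit_cost=2.0,
                            grid=range(0, 2001, 100))
```

Seeding adopters pays off here because imitation (the internal coefficient) compounds the head start within the planning horizon.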
Design considerations for genetic linkage and association studies.
Nsengimana, Jérémie; Bishop, D Timothy
2012-01-01
This chapter describes the main issues that genetic epidemiologists usually consider in the design of linkage and association studies. For linkage, we briefly consider the situation of rare, highly penetrant alleles showing a disease pattern consistent with Mendelian inheritance investigated through parametric methods in large pedigrees or with autozygosity mapping in inbred families, and we then turn our focus to the most common design, affected sibling pairs, of more relevance for common, complex diseases. Theoretical and more practical power and sample size calculations are provided as a function of the strength of the genetic effect being investigated. We also discuss the impact of other determinants of statistical power such as disease heterogeneity, pedigree, and genotyping errors, as well as the effect of the type and density of genetic markers. Linkage studies should be as large as possible to have sufficient power in relation to the expected genetic effect size. Segregation analysis, a formal statistical technique for describing the underlying genetic susceptibility, may assist in estimating the relevant parameters. However, segregation analyses estimate the total genetic component rather than a single-locus effect. Locus heterogeneity should be considered when power is estimated and at the analysis stage, i.e. assuming a smaller locus effect than the total genetic component estimated in segregation studies. Disease heterogeneity should be minimised by considering subtypes if they are well defined or by otherwise collecting known sources of heterogeneity and adjusting for them as covariates; the power will depend upon the relationship between the disease subtype and the underlying genotypes. Ultimately, identifying susceptibility alleles of modest effects (e.g. RR≤1.5) requires a number of families that seems unfeasible in a single study. 
Meta-analysis and data pooling between different research groups can provide a sizeable study, but both approaches require an even higher level of vigilance about locus and disease heterogeneity when data come from different populations. All necessary steps should be taken to minimise pedigree and genotyping errors at the study design stage as they are, for the most part, due to human factors. A two-stage design is more cost-effective than one stage when using short tandem repeats (STRs). However, dense single-nucleotide polymorphism (SNP) arrays offer a more robust alternative, and due to their lower cost per unit, the total cost of studies using SNPs may in the future become comparable to that of studies using STRs in one or two stages. For association studies, we consider the popular case-control design for dichotomous phenotypes, and we provide power and sample size calculations for one-stage and multistage designs. For candidate genes, guidelines are given on the prioritisation of genetic variants, and for genome-wide association studies (GWAS), the issue of choosing an appropriate SNP array is discussed. A warning is issued regarding the danger of designing an underpowered replication study following an initial GWAS. The risk of finding spurious association due to population stratification, cryptic relatedness, and differential bias is underlined. GWAS have a high power to detect common variants of high or moderate effect. For weaker effects (e.g. relative risk<1.2), the power is greatly reduced, particularly for recessive loci. While sample sizes of 10,000 or 20,000 cases are not beyond reach for most common diseases, only meta-analyses and data pooling make it possible to attain a study size of this magnitude for many other diseases. It is acknowledged that detecting the effects from rare alleles (i.e. frequency<5%) is not feasible in GWAS, and it is expected that novel methods and technology, such as next-generation resequencing, will fill this gap. 
At the current stage, the choice of which GWAS SNP array to use does not influence the power in populations of European ancestry. A multistage design reduces the study cost but has less power than the standard one-stage design. If one opts for a multistage design, the power can be improved by jointly analysing the data from different stages for the SNPs they share. The estimates of locus contribution to disease risk from genome-wide scans are often biased, and relying on them might result in an underpowered replication study. Population structure has so far caused less spurious associations than initially feared, thanks to systematic ethnicity matching and application of standard quality control measures. Differential bias could be a more serious threat and must be minimised by strictly controlling all the aspects of DNA acquisition, storage, and processing.
Optics of two-stage photovoltaic concentrators with dielectric second stages.
Ning, X; O'Gallagher, J; Winston, R
1987-04-01
Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.
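As a rough check on why a dielectric secondary raises the achievable concentration, the étendue (thermodynamic) bound on 3D concentration can be computed. The formula and numbers below are textbook nonimaging optics, not design figures from this record.

```python
import math

def max_concentration_3d(accept_half_angle_deg, n_exit):
    """Etendue upper bound on the geometric concentration of a 3D
    concentrator with acceptance half-angle theta whose exit aperture is
    immersed in a dielectric of refractive index n_exit:
    C_max = (n_exit / sin(theta))**2. Values are illustrative only."""
    theta = math.radians(accept_half_angle_deg)
    return (n_exit / math.sin(theta)) ** 2

# An air-filled concentrator with a 30-degree acceptance half-angle:
print(max_concentration_3d(30.0, 1.0))  # ≈ 4.0
```

The n_exit² factor is the reason a totally internally reflecting dielectric second stage can push a Fresnel-lens system closer to the concentration limit than either element alone.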
Bowyer, A E; Hillarp, A; Ezban, M; Persson, P; Kitchen, S
2016-07-01
Essentials Validated assays are required to precisely measure factor IX (FIX) activity in FIX products. N9-GP and two other FIX products were assessed in various coagulation assay systems at two sites. Large variations in FIX activity measurements were observed for N9-GP using some assays. One-stage and chromogenic assays accurately measuring FIX activity for N9-GP were identified. Background Measurement of factor IX activity (FIX:C) with activated partial thromboplastin time-based one-stage clotting assays is associated with a large degree of interlaboratory variation in samples containing glycoPEGylated recombinant FIX (rFIX), i.e. nonacog beta pegol (N9-GP). Validation and qualification of specific assays and conditions are necessary for the accurate assessment of FIX:C in samples containing N9-GP. Objectives To assess the accuracy of various one-stage clotting and chromogenic assays for measuring FIX:C in samples containing N9-GP as compared with samples containing rFIX or plasma-derived FIX (pdFIX) across two laboratory sites. Methods FIX:C, in severe hemophilia B plasma spiked with a range of concentrations (from very low, i.e. 0.03 IU mL(-1), to high, i.e. 0.90 IU mL(-1)) of N9-GP, rFIX (BeneFIX), and pdFIX (Mononine), was determined at two laboratory sites with 10 commercially available one-stage clotting assays and two chromogenic FIX:C assays. Assays were performed with a plasma calibrator and different analyzers. Results A high degree of variation in FIX:C measurement was observed for one-stage clotting assays for N9-GP as compared with rFIX or pdFIX. Acceptable N9-GP recovery was observed in the low-concentration to high-concentration samples tested with one-stage clotting assays using SynthAFax or DG Synth, or with chromogenic FIX:C assays. Similar patterns of FIX:C measurement were observed at both laboratory sites, with minor differences probably being attributable to the use of different analyzers.
Conclusions These results suggest that, of the reagents tested, FIX:C in N9-GP-containing plasma samples can be most accurately measured with one-stage clotting assays using SynthAFax or DG Synth, or with chromogenic FIX:C assays. © 2016 International Society on Thrombosis and Haemostasis.
2016 Workplace and Gender Relations Survey of Active Duty Members: Frequently Asked Questions
2017-05-01
active duty population both at the sample design stage as well as during the statistical weighting process to account for survey non-response and post...used the OPA sampling design, won the 2011 Policy Impact Award from The American Association for Public Opinion Research (AAPOR), which “recognizes
The Malemute development program. [rocket upper stage engine design]
NASA Technical Reports Server (NTRS)
Bolster, W. J.; Hoekstra, P. W.
1976-01-01
The Malemute vehicle systems are two-stage systems based on a new high-performance upper-stage motor combined with two existing military boosters. The Malemute development program is described relative to program structure, preliminary design, vehicle subsystems, and the Malemute motor. Two vehicle systems, the Nike-Malemute and Terrier-Malemute, were developed which are capable of transporting comparatively large diameter (16 in.) 200-lb payloads to altitudes of 500 and 700 km, respectively. These vehicles provide relatively low-cost transportation with two-stage reliability and launch simplicity. Flight tests of both vehicle systems demonstrated their performance capabilities; the Terrier-Malemute system exhibited a unique Malemute motor spin-sensitivity problem. It is suggested that the vehicles can be flown successfully by lowering the burnout spin rate.
Performance of two-stage fan having low-aspect-ratio first-stage rotor blading
NASA Technical Reports Server (NTRS)
Urasek, D. C.; Gorrell, W. T.; Cunnan, W. S.
1979-01-01
The NASA two-stage fan was tested with a low-aspect-ratio first-stage rotor having no midspan dampers. At design speed the fan achieved an adiabatic design efficiency of 0.846, and peak efficiencies for the first stage and rotor of 0.870 and 0.906, respectively. Peak efficiency occurred very close to the stall line. In an attempt to improve stall margin, the fan was retested with circumferentially grooved casing treatment and with a series of stator blade resets. Results showed no improvement in stall margin with casing treatment, but stall margin increased to 8 percent with stator blade reset.
Design, analysis and presentation of factorial randomised controlled trials
Montgomery, Alan A; Peters, Tim J; Little, Paul
2003-01-01
Background The evaluation of more than one intervention in the same randomised controlled trial can be achieved using a parallel group design. However, this requires increased sample size and can be inefficient, especially if there is also interest in considering combinations of the interventions. An alternative may be a factorial trial, where for two interventions participants are allocated to receive neither intervention, one or the other, or both. Factorial trials require special considerations, however, particularly at the design and analysis stages. Discussion Using a 2 × 2 factorial trial as an example, we present a number of issues that should be considered when planning a factorial trial. The main design issue is that of sample size. Factorial trials are most often powered to detect the main effects of interventions, since adequate power to detect plausible interactions requires greatly increased sample sizes. The main analytical issues relate to the investigation of main effects and the interaction between the interventions in appropriate regression models. Presentation of results should reflect the analytical strategy with an emphasis on the principal research questions. We also give an example of how baseline and follow-up data should be presented. Lastly, we discuss the implications of the design, analytical and presentational issues covered. Summary The difficulty of interpreting the results of factorial trials when an influential interaction is observed is the cost of the potential for efficient, simultaneous consideration of two or more interventions. Factorial trials can in principle be designed to have adequate power to detect realistic interactions, and in any case they are the only design that allows such effects to be investigated. PMID:14633287
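The regression analysis described above (main effects plus interaction) can be sketched as a single least-squares fit. The data and effect sizes below are invented for illustration; they are not from the trial literature being summarised.

```python
import numpy as np

def factorial_effects(a, b, y):
    """Fit y = b0 + b1*A + b2*B + b3*A*B by least squares, where a and b
    are 0/1 allocation indicators for the two interventions of a 2x2
    factorial trial. Returns intercept, two main effects, interaction."""
    X = np.column_stack([np.ones(len(y)), a, b, a * b])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Illustrative noiseless outcome with main effects 2 and 3, interaction 1:
a = np.array([0, 0, 1, 1, 0, 0, 1, 1], dtype=float)
b = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=float)
y = 10 + 2 * a + 3 * b + 1 * a * b
print(np.round(factorial_effects(a, b, y), 6))  # ≈ [10, 2, 3, 1]
```

With a nonzero interaction term, the "main effect" of each intervention no longer summarises its effect in all arms, which is exactly the interpretability cost the abstract mentions.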
Lai, Lei-Jie; Gu, Guo-Ying; Zhu, Li-Min
2012-04-01
This paper presents a novel decoupled two degrees of freedom (2-DOF) translational parallel micro-positioning stage. The stage consists of a monolithic compliant mechanism driven by two piezoelectric actuators. The end-effector of the stage is connected to the base by four independent kinematic limbs. Two types of compound flexure module are serially connected to provide 2-DOF for each limb. The compound flexure modules and mirror symmetric distribution of the four limbs significantly reduce the input and output cross couplings and the parasitic motions. Based on the stiffness matrix method, static and dynamic models are constructed and optimal design is performed under certain constraints. The finite element analysis results are then given to validate the design model and a prototype of the XY stage is fabricated for performance tests. Open-loop tests show that maximum static and dynamic cross couplings between the two linear motions are below 0.5% and -45 dB, which are low enough to utilize single-input-single-output control strategies. Finally, according to the identified dynamic model, an inversion-based feedforward controller in conjunction with a proportional-integral-derivative controller is applied to compensate for the nonlinearities and uncertainties. The experimental results show that good positioning and tracking performances are achieved, which verifies the effectiveness of the proposed mechanism and controller design. The resonant frequencies of the loaded stage at 2 kg and 5 kg are 105 Hz and 68 Hz, respectively. Therefore, the performance of the stage is reasonably good in terms of its 200 N load capacity. © 2012 American Institute of Physics.
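The control scheme (inversion-based feedforward plus PID feedback) can be sketched in miniature. The static plant gain, the PID gains, and the discrete-time form below are illustrative assumptions, not the stage's identified dynamic model.

```python
class PID:
    """Minimal discrete PID controller (textbook form; gains illustrative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def control_step(pid, reference, measured, static_gain=2.0):
    """One step of inversion-based feedforward plus PID feedback. The
    feedforward term inverts an assumed static plant gain, a crude
    stand-in for inverting the identified dynamic model."""
    feedforward = reference / static_gain
    feedback = pid.update(reference - measured)
    return feedforward + feedback
```

The point of the split is that the feedforward term does the bulk of the tracking from the model inverse, leaving the PID loop to correct only residual nonlinearities and uncertainties.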
Conceptual design of two-stage-to-orbit hybrid launch vehicle
NASA Technical Reports Server (NTRS)
1991-01-01
The object of this design class was to design an earth-to-orbit vehicle to replace the present NASA space shuttle. The major motivations for designing a new vehicle were to reduce the cost of putting payloads into orbit and to design a vehicle that could better service the space station with a faster turn-around time. Another factor considered in the design was that near-term technology was to be used. Materials, engines and other important technologies were to be realized in the next 10 to 15 years. The first concept put forth by NASA to meet these objectives was the National Aerospace Plane (NASP). The NASP is a single-stage earth-to-orbit air-breathing vehicle. This concept ran into problems with the air-breathing engine providing enough thrust in the upper atmosphere, among other things. The solution of this design class is a two-stage-to-orbit vehicle. The first stage is air-breathing and the second stage is rocket-powered, similar to the space shuttle. The second stage is mounted on top of the first stage in a piggy-back style. The vehicle takes off horizontally using only air-breathing engines, flies to Mach six at 100,000 feet, and launches the second stage towards its orbital path. The first stage, or booster, will weigh approximately 800,000 pounds and the second stage, or orbiter, will weigh approximately 300,000 pounds. The major advantage of this design is the full recoverability of the first stage, compared with the present solid rocket boosters, which are only partially recoverable and reusable only a few times. This reduces cost and provides a more reliable and more readily available design for servicing the space station. The booster can fly an orbiter up, turn around, land, refuel, and be ready to launch another orbiter in a matter of hours.
Optimising cluster survey design for planning schistosomiasis preventive chemotherapy.
Knowles, Sarah C L; Sturrock, Hugh J W; Turner, Hugo; Whitton, Jane M; Gower, Charlotte M; Jemu, Samuel; Phillips, Anna E; Meite, Aboulaye; Thomas, Brent; Kollie, Karsor; Thomas, Catherine; Rebollo, Maria P; Styles, Ben; Clements, Michelle; Fenwick, Alan; Harrison, Wendy E; Fleming, Fiona M
2017-05-01
The cornerstone of current schistosomiasis control programmes is delivery of praziquantel to at-risk populations. Such preventive chemotherapy requires accurate information on the geographic distribution of infection, yet the performance of alternative survey designs for estimating prevalence and converting this into treatment decisions has not been thoroughly evaluated. We used baseline schistosomiasis mapping surveys from three countries (Malawi, Côte d'Ivoire and Liberia) to generate spatially realistic gold standard datasets, against which we tested alternative two-stage cluster survey designs. We assessed how sampling different numbers of schools per district (2-20) and children per school (10-50) influences the accuracy of prevalence estimates and treatment class assignment, and we compared survey cost-efficiency using data from Malawi. Due to the focal nature of schistosomiasis, up to 53% of simulated surveys involving 2-5 schools per district failed to detect schistosomiasis in low endemicity areas (1-10% prevalence). Increasing the number of schools surveyed per district improved treatment class assignment far more than increasing the number of children sampled per school. For Malawi, surveys of 15 schools per district and 20-30 children per school reliably detected endemic schistosomiasis and maximised cost-efficiency. In sensitivity analyses where treatment costs and the country considered were varied, optimal survey size was remarkably consistent, with cost-efficiency maximised at 15-20 schools per district. Among two-stage cluster surveys for schistosomiasis, our simulations indicated that surveying 15-20 schools per district and 20-30 children per school optimised cost-efficiency and minimised the risk of under-treatment, with surveys involving more schools becoming more cost-efficient as treatment costs rose.
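The kind of simulation used in the study can be sketched as follows. The district composition and sampling sizes below are invented for illustration; they are not the gold standard datasets from Malawi, Côte d'Ivoire or Liberia.

```python
import random

def simulate_survey(school_prevs, n_schools, n_children, rng):
    """One simulated two-stage cluster survey: draw n_schools schools
    without replacement, test n_children children per sampled school, and
    return the estimated district prevalence. school_prevs lists assumed
    true per-school prevalences."""
    sampled = rng.sample(school_prevs, n_schools)
    positives = sum(
        sum(rng.random() < p for _ in range(n_children)) for p in sampled
    )
    return positives / (n_schools * n_children)

rng = random.Random(42)
# A focal low-endemicity district: infection concentrated in 10 of 50 schools.
prevs = [0.0] * 40 + [0.3] * 10
missed = sum(simulate_survey(prevs, 3, 25, rng) == 0 for _ in range(1000))
print(f"surveys of 3 schools that missed infection entirely: {missed}/1000")
```

Even this toy version reproduces the qualitative finding: with only three schools per district, roughly half the simulated surveys sample no infected school at all, so adding schools helps far more than adding children per school.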
Shiu, A T
1998-08-01
The study aimed to investigate the significance of sense of coherence (SOC) for the perceptions of task characteristics and for stress perceptions during interruptions of public health nurses (PHNs) with children in Hong Kong. The research design employed the experience sampling method. Convenience sampling was used to recruit 20 subjects. During stage one of the study a watch was worn that gave a signal at six random times each day for seven days, prompting completion of an experience sampling diary. PHNs on average responded to 34 signals (80%) to complete the diaries, which collected data on work and family juggling, task characteristics, and their effects on mood states. At stage two respondents completed the SOC scale, which measured confidence in life as comprehensible, manageable, and meaningful. Two major findings provide the focus for this paper. First, results indicate that there was a positive correlation between SOC and perceived task characteristics. Second, results reveal that when interruptions occurred, PHNs with high SOC had higher positive affect and lower negative affect than PHNs with low SOC. These results suggest that SOC as a salutogenic model helps PHNs cope with juggling family and work as well as with occupational stress. Implications for nursing management on strengthening the SOC of PHNs are discussed.
Martínez-Ferrer, María Teresa; Ripollés, José Luís; Garcia-Marí, Ferran
2006-06-01
The spatial distribution of the citrus mealybug, Planococcus citri (Risso) (Homoptera: Pseudococcidae), was studied in citrus groves in northeastern Spain. Constant precision sampling plans were designed for all developmental stages of citrus mealybug under the fruit calyx, for late stages on fruit, and for females on trunks and main branches; more than 66, 286, and 101 data sets, respectively, were collected from nine commercial fields during 1992-1998. Dispersion parameters were determined using Taylor's power law, giving aggregated spatial patterns for citrus mealybug populations in the three locations of the tree sampled. A significant relationship between the number of insects per organ and the percentage of occupied organs was established using either Wilson and Room's binomial model or Kono and Sugino's empirical formula. Constant precision (E = 0.25) sampling plans (i.e., enumerative plans) for estimating mean densities were developed using Green's equation and the two binomial models. For making management decisions, enumerative counts may be less labor-intensive than binomial sampling. Therefore, we recommend enumerative sampling plans for use in an integrated pest management program in citrus. Required sample sizes for the range of population densities near current management thresholds in the three plant locations (calyx, fruit, and trunk) were 50, 110-330, and 30, respectively. Binomial sampling, especially the empirical model, required a higher sample size to achieve equivalent levels of precision.
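A fixed-precision plan follows directly from Taylor's power law. The sketch below uses the usual form of Green's equation; the parameter values are illustrative, not the dispersion estimates from this study.

```python
def green_sample_size(mean_density, a, b, precision):
    """Sample size for a fixed-precision enumerative plan derived from
    Taylor's power law (variance = a * mean**b):
        n = a * mean**(b - 2) / D**2,
    where D is the target standard-error-to-mean ratio. Parameter values
    in the example are illustrative only."""
    return a * mean_density ** (b - 2) / precision ** 2

# Aggregated population (b > 1): required n rises as density falls.
print(green_sample_size(1.0, a=2.0, b=1.4, precision=0.25))  # 32.0
```

The derivation is one line: with variance a·m^b and n samples, the standard error of the mean is sqrt(a·m^b / n), so setting SE/m = D and solving for n gives the formula above.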
Semi-automated high-efficiency reflectivity chamber for vacuum UV measurements
NASA Astrophysics Data System (ADS)
Wiley, James; Fleming, Brian; Renninger, Nicholas; Egan, Arika
2017-08-01
This paper presents the design and theory of operation for a semi-automated reflectivity chamber for ultraviolet optimized optics. A graphical user interface designed in LabVIEW controls the stages, interfaces with the detector system, takes semi-autonomous measurements, and monitors the system in case of error. Samples and an optical photodiode sit on an optics plate mounted to a rotation stage in the middle of the vacuum chamber. The optics plate rotates the samples and diode between an incident and reflected position to measure the absolute reflectivity of the samples at wavelengths limited by the monochromator operational bandpass of 70 nm to 550 nm. A collimating parabolic mirror on a fine steering tip-tilt motor enables beam steering for detector peak-ups. This chamber is designed to take measurements rapidly and with minimal oversight, increasing lab efficiency for high cadence and high accuracy vacuum UV reflectivity measurements.
Translating Climate Projections for Bridge Engineering
NASA Astrophysics Data System (ADS)
Anderson, C.; Takle, E. S.; Krajewski, W.; Mantilla, R.; Quintero, F.
2015-12-01
A bridge vulnerability pilot study was conducted by Iowa Department of Transportation (IADOT) as one of nineteen pilots supported by the Federal Highway Administration Climate Change Resilience Pilots. Our pilot study team consisted of the IADOT senior bridge engineer who is the preliminary design section leader as well as climate and hydrological scientists. The pilot project culminated in a visual graphic designed by the bridge engineer (Figure 1), and an evaluation framework for bridge engineering design. The framework has four stages. The first two stages evaluate the spatial and temporal resolution needed in climate projection data in order to be suitable for input to a hydrology model. The framework separates streamflow simulation error into errors from the streamflow model and from the coarseness of input weather data series. In the final two stages, the framework evaluates credibility of climate projection streamflow simulations. Using an empirically downscaled data set, projection streamflow is generated. Error is computed in two time frames: the training period of the empirical downscaling methodology, and an out-of-sample period. If large errors in projection streamflow were observed during the training period, it would indicate low accuracy and, therefore, low credibility. If large errors in streamflow were observed during the out-of-sample period, it would mean the approach may not include some causes of change and, therefore, the climate projections would have limited credibility for setting expectations for changes. We address uncertainty with confidence intervals on quantiles of streamflow discharge. The results show the 95% confidence intervals have significant overlap. Nevertheless, the use of confidence intervals enabled engineering judgement. 
In our discussions, we noted the consistency in direction of change across basins, though the flood mechanism was different across basins, and the high bound of bridge lifetime period quantiles exceeded that of the historical period. This suggested the change was not isolated, and it systemically altered the risk profile. One suggestion to incorporate engineering judgement was to consider degrees of vulnerability using the median discharge of the historical period and the upper bound discharge for the bridge lifetime period.
Core compressor exit stage study. Volume 1: Blading design. [turbofan engines]
NASA Technical Reports Server (NTRS)
Wisler, D. C.
1977-01-01
A baseline compressor test stage was designed as well as a candidate rotor and two candidate stators that have the potential of reducing endwall losses relative to the baseline stage. These test stages are typical of those required in the rear stages of advanced, highly-loaded core compressors. The baseline Stage A is a low-speed model of Stage 7 of the 10 stage AMAC compressor. Candidate Rotor B uses a type of meanline in the tip region that unloads the leading edge and loads the trailing edge relative to the baseline Rotor A design. Candidate Stator B embodies twist gradients in the endwall region. Candidate Stator C embodies airfoil sections near the endwalls that have reduced trailing edge loading relative to Stator A. Tests will be conducted using four identical stages of blading so that the designs described will operate in a true multistage environment.
NASA Astrophysics Data System (ADS)
Farhat; Asnir, R. A.; Yudhistira, A.; Daulay, E. R.; Muzakkir, M. M.; Yulius, S.
2018-03-01
Molecular biological research on nasopharyngeal carcinoma has been widely practiced, covering VEGF, EGFR, and COX-2 expression, among others. MAPK plays a role in cell growth processes such as proliferation, differentiation, and apoptosis, primarily contributing to gene expression; the p38 MAPK pathway is mostly associated with anti-apoptosis and causes cell transformation. The aim of this study is to determine the expression of p38 MAPK across clinical stages of nasopharyngeal carcinoma so that the result can be helpful for prognosis and adjunctive therapy. The research design is descriptive. It was done in the THT-KL Department of FK USU/RSUP Haji Adam Malik, Medan, and the Pathology Anatomical Department of FK USU. The study was conducted from December 2011 to May 2012. The samples were all patients diagnosed with nasopharyngeal carcinoma in the oncology division of the Otorhinolaryngology Department. p38 MAPK overexpression was found in 21 samples (70%) of 30 nasopharyngeal carcinoma samples. Elevated p38 MAPK expression was most often found in T4 (eight samples, 38.1%), in the N3 lymph node group (nine samples, 42.9%), and in clinical stage IV (15 samples, 71.4%). p38 MAPK was most expressed in patients with stage IV nasopharyngeal carcinoma.
BLIND ordering of large-scale transcriptomic developmental timecourses.
Anavy, Leon; Levin, Michal; Khair, Sally; Nakanishi, Nagayasu; Fernandez-Valverde, Selene L; Degnan, Bernard M; Yanai, Itai
2014-03-01
RNA-Seq enables the efficient transcriptome sequencing of many samples from small amounts of material, but the analysis of these data remains challenging. In particular, in developmental studies, RNA-Seq is challenged by the morphological staging of samples, such as embryos, since these often lack clear markers at any particular stage. In such cases, the automatic identification of the stage of a sample would enable previously infeasible experimental designs. Here we present the 'basic linear index determination of transcriptomes' (BLIND) method for ordering samples comprising different developmental stages. The method is an implementation of a traveling salesman algorithm to order the transcriptomes according to their inter-relationships as defined by principal components analysis. To establish the direction of the ordered samples, we show that an appropriate indicator is the entropy of transcriptomic gene expression levels, which increases over developmental time. Using BLIND, we correctly recover the annotated order of previously published embryonic transcriptomic timecourses for frog, mosquito, fly and zebrafish. We further demonstrate the efficacy of BLIND by collecting 59 embryos of the sponge Amphimedon queenslandica and ordering their transcriptomes according to developmental stage. BLIND is thus useful in establishing the temporal order of samples within large datasets and is of particular relevance to the study of organisms with asynchronous development and when morphological staging is difficult.
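A toy version of the ordering step might look like this. The greedy nearest-neighbour chain below is a crude stand-in for the travelling-salesman solver used by BLIND, the starting sample is fixed arbitrarily at row 0, and the entropy-based orientation step is omitted.

```python
import numpy as np

def blind_order(expr):
    """Order samples (rows of a samples-by-genes matrix) along a putative
    developmental trajectory: project onto the top two principal
    components, then chain samples greedily by nearest neighbour.
    A sketch of the BLIND idea, not the published algorithm."""
    x = expr - expr.mean(axis=0)
    u, s, _ = np.linalg.svd(x, full_matrices=False)
    pcs = u[:, :2] * s[:2]                  # PC scores of each sample
    order = [0]
    remaining = list(range(1, len(pcs)))
    while remaining:
        last = pcs[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(pcs[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Toy data: four samples along a smooth expression gradient.
expr = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
print(blind_order(expr))  # [0, 1, 2, 3]
```

As in the paper, the chain only fixes the order up to reversal; BLIND resolves the direction by requiring transcriptomic entropy to increase over developmental time.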
ERIC Educational Resources Information Center
Plotnikoff, Ronald C.; Lippke, Sonia; Reinbold-Matthews, Melissa; Courneya, Kerry S.; Karunamuni, Nandini; Sigal, Ronald J.; Birkett, Nicholas
2007-01-01
This study was designed to test the validity of a transtheoretical model's physical activity (PA) stage measure with intention and different intensities of behavior in a large population-based sample of adults living with diabetes (Type 1 diabetes, n = 697; Type 2 diabetes, n = 1,614) and examine different age groups. The overall…
Raina, Sunil Kumar; Mengi, Vijay; Singh, Gurdeep
2012-07-01
Breast feeding is universally and traditionally practised in India. Experts advocate breast feeding as the best method of feeding young infants. The aim was to assess the role of various factors in determining colostrum feeding in block R. S. Pura of district Jammu. A stratified two-stage design was used, with villages as the primary sampling units and lactating mothers as the secondary sampling units. Villages were divided into different clusters on the basis of population, and sampling units were selected by a simple random technique. Breastfeeding is almost universal in R. S. Pura. Differentials in discarding the first milk were not found to be important among various socioeconomic groups, and the phenomenon appeared more general than specific.
NASA Technical Reports Server (NTRS)
Burger, G. D.; Hodges, T. R.; Keenan, M. J.
1975-01-01
A two-stage fan with a first-stage rotor design tip speed of 1450 ft/sec, a design pressure ratio of 2.8, and corrected flow of 184.2 lbm/sec was tested with axial skewed slots in the casings over the tips of both rotors. The variable stagger stators were set in the nominal positions. Casing treatment improved stall margin by nine percentage points at 70 percent speed but decreased stall margin, efficiency, and flow by small amounts at design speed. Treatment improved first-stage performance at low speed only and decreased second-stage performance at all operating conditions. Casing treatment did not affect the stall line with tip radially distorted flow but improved stall margin with circumferentially distorted flow. Casing treatment increased the attenuation for both types of inlet flow distortion.
Morgan, A R; Turic, D; Jehu, L; Hamilton, G; Hollingworth, P; Moskvina, V; Jones, L; Lovestone, S; Brayne, C; Rubinsztein, D C; Lawlor, B; Gill, M; O'Donovan, M C; Owen, M J; Williams, J
2007-09-05
Late-onset Alzheimer's disease (LOAD) is a common neurodegenerative disorder, with a complex etiology. APOE is the only confirmed susceptibility gene for LOAD. Others remain yet to be found. Evidence from linkage studies suggests that a gene (or genes) conferring susceptibility for LOAD resides on chromosome 10. We studied 23 positional/functional candidate genes from our linkage region on chromosome 10 (APBB1IP, ALOX5, AD037, SLC18A3, DKK1, ZWINT, ANK3, UBE2D1, CDC2, SIRT1, JDP1, NET7, SUPV3L1, NEN3, SAR1, SGPL1, SEC24C, CAMK2G, PP3CB, SNCG, CH25H, PLCE1, ANXV111) in the MRC genetic resource for LOAD. These candidates were screened for sequence polymorphisms in a sample of 14 LOAD subjects, and the detected polymorphisms were tested for association with LOAD in a three-stage design involving two stages of genotyping pooled DNA samples followed by a third stage in which markers showing evidence for association in the first stages were subjected to individual genotyping. One hundred and twenty polymorphisms were identified and tested in stage 1 (4 case + 4 control pools totaling 366 case and 366 control individuals). Single nucleotide polymorphisms (SNPs) showing evidence of association with LOAD were then studied in stage 2 (8 case + 4 control pools totaling 1,001 case and 1,001 control individuals). Five SNPs, in four genes, showed evidence for association (P < 0.1) at stage 2 and were individually genotyped in the complete dataset, comprising 1,160 LOAD cases and 1,389 normal controls. Two SNPs in SGPL1 demonstrated marginal evidence of association, with uncorrected P values of 0.042 and 0.056, suggesting that variation in SGPL1 may confer susceptibility to LOAD. Copyright 2007 Wiley-Liss, Inc.
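Each genotyping stage of such a screen reduces to single-marker tests on allele counts. A sketch of the usual 2x2 Pearson chi-square follows; the counts are invented for illustration and are not from this study.

```python
def allele_chi2(case_a, case_b, ctrl_a, ctrl_b):
    """Pearson chi-square (1 df) for a 2x2 table of allele counts in
    cases versus controls, the kind of single-marker test applied at
    each stage of a multistage association screen."""
    n = case_a + case_b + ctrl_a + ctrl_b
    num = n * (case_a * ctrl_b - case_b * ctrl_a) ** 2
    den = ((case_a + case_b) * (ctrl_a + ctrl_b)
           * (case_a + ctrl_a) * (case_b + ctrl_b))
    return num / den

# Illustrative counts: minor-allele frequency 0.30 in cases, 0.25 in controls.
print(allele_chi2(300, 700, 250, 750))  # ≈ 6.27
```

In a staged design, only markers exceeding a lenient threshold (here, P < 0.1 at stage 2) advance to individual genotyping, which is what keeps the pooled-DNA stages cheap.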
Ivanova, Anastasia; Zhang, Zhiwei; Thompson, Laura; Yang, Ying; Kotz, Richard M; Fang, Xin
2016-01-01
Sequential parallel comparison design (SPCD) was proposed for trials with high placebo response. In the first stage of SPCD, subjects are randomized between placebo and active treatment. In the second stage, placebo nonresponders are re-randomized between placebo and active treatment. Data from the population of "all comers" and the subpopulations of placebo nonresponders are then combined to yield a single p-value for treatment comparison. Two-way enriched design (TED) is an extension of SPCD in which active-treatment responders are also re-randomized between placebo and active treatment in Stage 2. This article investigates the potential uses of SPCD and TED in medical device trials.
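A common way to turn the two stages into a single p-value is a weighted z-combination. The form and the weight below are a sketch of that idea, not necessarily the exact SPCD test statistic used by these authors.

```python
import math

def spcd_z(z1, z2, w=0.6):
    """Weighted combination of the stage-1 (all randomized subjects) and
    stage-2 (re-randomized placebo nonresponders) test statistics:
        Z = w*Z1 + sqrt(1 - w**2)*Z2.
    If Z1 and Z2 are independent standard normals under the null, Z is
    too. The weight w = 0.6 is an illustrative design choice."""
    return w * z1 + math.sqrt(1.0 - w * w) * z2

print(spcd_z(1.8, 1.5))  # 0.6*1.8 + 0.8*1.5 ≈ 2.28
```

Choosing the stage weight is itself a design decision: it trades off the contribution of the all-comers comparison against the placebo-nonresponder comparison, which is where the design's resistance to placebo response comes from.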
Ferrera, Isabel; Mas, Jordi; Taberna, Elisenda; Sanz, Joan; Sánchez, Olga
2015-01-01
The diversity of the bacterial community developed in different stages of two reverse osmosis (RO) water reclamation demonstration plants designed in a wastewater treatment plant (WWTP) in Tarragona (Spain) was characterized by applying 454-pyrosequencing of the 16S rRNA gene. The plants were fed by secondary treated effluent to a conventional pretreatment train prior to the two-pass RO system. Plants differed in the material used in the filtration process, which was sand in one demonstration plant and Scandinavian schists in the second plant. The results showed the presence of a highly diverse and complex community in the biofilms, mainly composed of members of the Betaproteobacteria and Bacteroidetes in all stages, with the presence of some typical wastewater bacteria, suggesting a feed water origin. Community similarities analyses revealed that samples clustered according to filter type, highlighting the critical influence of the biological supporting medium in biofilm community structure.
Study on a cascade pulse tube cooler with energy recovery: new method for approaching Carnot
NASA Astrophysics Data System (ADS)
Wang, L. Y.; Wu, M.; Zhu, J. K.; Jin, Z. Y.; Sun, X.; Gan, Z. H.
2015-12-01
A pulse tube cryocooler (PTC) cannot achieve Carnot efficiency because the expansion work must be dissipated at the warm end of the pulse tube. How to recover this dissipated work is key to improving PTC efficiency. A cascade PTC consists of PTCs staged in series by transmission tubes; it can have two or even more stages, each driven by the work recovered from the previous stage through a well-designed long transmission tube. It is shown that the more stages it has, the closer the efficiency will approach the Carnot efficiency. A two-stage cascade pulse tube cooler consisting of a primary stage and a secondary stage working at 233 K was designed, fabricated and tested in our lab. Experimental results show that the efficiency is improved by 33% compared with the single-stage PTC.
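Cooler efficiencies of this kind are normally quoted relative to the Carnot limit. A minimal helper for that benchmark follows; the 300 K warm-end temperature in the example is an assumption, not a figure from this record.

```python
def carnot_cop(t_cold, t_warm):
    """Ideal (Carnot) coefficient of performance of a refrigerator
    operating between t_cold and t_warm (both in kelvin):
        COP = Tc / (Th - Tc)."""
    return t_cold / (t_warm - t_cold)

# E.g. a stage absorbing heat at 233 K and rejecting it at an assumed 300 K:
print(round(carnot_cop(233.0, 300.0), 2))  # 3.48
```

The "percent of Carnot" figure of merit is then simply the measured COP divided by this ideal value, which is the sense in which recovering the expansion work moves a cascade PTC closer to Carnot.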
A Comparison of IRT Proficiency Estimation Methods under Adaptive Multistage Testing
ERIC Educational Resources Information Center
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook
2015-01-01
This inquiry is an investigation of item response theory (IRT) proficiency estimators' accuracy under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
Transcriptional analysis of late ripening stages of grapevine berry
2011-01-01
Background The composition of grapevine berry at harvest is a major determinant of wine quality. Optimal oenological maturity of berries is characterized by a high sugar/acidity ratio, high anthocyanin content in the skin, and low astringency. However, harvest time is still mostly determined empirically, based on crude biochemical composition and berry tasting. In this context, it is interesting to identify genes that are expressed/repressed specifically at the late stages of ripening and which may be used as indicators of maturity. Results Whole bunches and berries sorted by density were collected in the vineyard from Chardonnay (white cultivar) grapevines for two consecutive years at three stages of ripening (7 days before harvest (TH-7), harvest (TH), and 10 days after harvest (TH+10)). Microvinification and sensory analysis indicated that the quality of the wines made from the whole bunches collected at TH-7, TH and TH+10 differed, TH providing the highest quality wines. In parallel, gene expression was studied with Qiagen/Operon microarrays using two types of samples, i.e. whole bunches and berries sorted by density. Only 12 genes were consistently up- or down-regulated in whole bunches and density-sorted berries for the two years studied in Chardonnay. 52 genes were differentially expressed between the TH-7 and TH samples. In order to determine whether these genes followed a similar pattern of expression during the late stages of berry ripening in a red cultivar, nine genes were selected for RT-PCR analysis with Cabernet Sauvignon grown under two different temperature regimes affecting the precocity of ripening. 
The expression profiles and their relationship to ripening were confirmed in Cabernet Sauvignon for seven genes, encoding a carotenoid cleavage dioxygenase, a galactinol synthase, a late embryogenesis abundant protein, a dirigent-like protein, a histidine kinase receptor, a valencene synthase and a putative S-adenosyl-L-methionine:salicylic acid carboxyl methyltransferase. Conclusions This set of up- and down-regulated genes characterize the late stages of berry ripening in the two cultivars studied, and are indirectly linked to wine quality. They might be used directly or indirectly to design immunological, biochemical or molecular tools aimed at the determination of optimal ripening in these cultivars. PMID:22098939
Technical Report and Data File User's Manual for the 1992 National Adult Literacy Survey.
ERIC Educational Resources Information Center
Kirsch, Irwin; Yamamoto, Kentaro; Norris, Norma; Rock, Donald; Jungeblut, Ann; O'Reilly, Patricia; Berlin, Martha; Mohadjer, Leyla; Waksberg, Joseph; Goksel, Huseyin; Burke, John; Rieger, Susan; Green, James; Klein, Merle; Campbell, Anne; Jenkins, Lynn; Kolstad, Andrew; Mosenthal, Peter; Baldi, Stephane
Chapter 1 of this report and user's manual describes design and implementation of the 1992 National Adult Literacy Survey (NALS). Chapter 2 reviews stages of sampling for national and state survey components; presents weighted and unweighted response rates for the household component; and describes non-incentive and prison sample designs. Chapter…
Cryogenic Propellant Management Device: Conceptual Design Study
NASA Technical Reports Server (NTRS)
Wollen, Mark; Merino, Fred; Schuster, John; Newton, Christopher
2010-01-01
Concepts of Propellant Management Devices (PMDs) were designed for lunar descent stage reaction control system (RCS) and lunar ascent stage (main and RCS propulsion) missions using liquid oxygen (LO2) and liquid methane (LCH4). Study ground rules set a maximum of 19 days from launch to lunar touchdown, and an additional 210 days on the lunar surface before liftoff. Two PMDs were conceptually designed for each of the descent stage RCS propellant tanks, and two designs for each of the ascent stage main propellant tanks. One of the two PMD types is a traditional partial four-screen channel device. The other type is a novel, expanding volume device which uses a stretched, flexing screen. It was found that several unique design features simplified the PMD designs. These features are (1) high propellant tank operating pressures, (2) aluminum tanks for propellant storage, and (3) stringent insulation requirements. Consequently, it was possible to treat LO2 and LCH4 as if they were equivalent to Earth-storable propellants because they would remain substantially subcooled during the lunar mission. In fact, prelaunch procedures are simplified with cryogens, because any trapped vapor will condense once the propellant tanks are pressurized in space.
Point estimation following two-stage adaptive threshold enrichment clinical trials.
Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel
2018-05-31
Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages, where in stage 1, patients are recruited in the full population. Stage 1 outcome data are then used to perform interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We have recommended one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator where the estimator used depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
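The selection bias these estimators correct for can be shown with a minimal Monte-Carlo sketch (all sample sizes and effect values below are hypothetical, not from the paper): when the subpopulation with the larger interim effect is carried forward, the naive pooled estimate overstates the true effect.

```python
import random

def selection_bias(n1=50, n2=50, sims=2000, seed=0):
    """Monte-Carlo sketch: pick the subgroup with the larger interim mean,
    then pool its stage-1 data with stage-2 data. With both true effects
    equal to 0, the naive pooled estimate is biased upward by selection."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(sims):
        g0 = [rng.gauss(0.0, 1.0) for _ in range(n1)]
        g1 = [rng.gauss(0.0, 1.0) for _ in range(n1)]
        sel = g0 if sum(g0) >= sum(g1) else g1   # interim selection on stage-1 data
        stage2 = [rng.gauss(0.0, 1.0) for _ in range(n2)]
        total += (sum(sel) + sum(stage2)) / (n1 + n2)
    return total / sims                          # average naive estimate = bias

print(selection_bias())  # positive, although the true effect is 0
```

With these settings the bias is roughly +0.04 standard deviations; the unbiased and shrinkage estimators discussed in the abstract are designed to remove or reduce exactly this term.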
Evaluation Study for Secondary Stage EFL Textbook: EFL Teachers' Perspectives
ERIC Educational Resources Information Center
Al Harbi, Abdullah Abdul Muhsen
2017-01-01
This study aimed at evaluating an EFL textbook for the secondary stage in Saudi public schools. Participants consisted of (100) male teachers and (73) female teachers teaching secondary stage students in two cities: Madinah and Dowadmi. The tool of the study was designed to cover five dimensions: layout and design, the objectives of the textbook, teaching…
Preliminary Design Optimization For A Supersonic Turbine For Rocket Propulsion
NASA Technical Reports Server (NTRS)
Papila, Nilay; Shyy, Wei; Griffin, Lisa; Huber, Frank; Tran, Ken; McConnaughey, Helen (Technical Monitor)
2000-01-01
In this study, we present a method for optimizing, at the preliminary design level, a supersonic turbine for rocket propulsion system application. Single-, two- and three-stage turbines are considered, with the number of design variables increasing from 6 to 11 and then to 15, in accordance with the number of stages. Due to its global nature and flexibility in handling different types of information, the response surface methodology (RSM) is applied in the present study. A major goal of the present optimization effort is to balance the desire to maximize aerodynamic performance and minimize weight. To ascertain the required predictive capability of the RSM, a two-level domain refinement approach has been adopted. The accuracy of the predicted optimal design points based on this strategy is shown to be satisfactory. Our investigation indicates that the efficiency rises quickly from a single stage to 2 stages but that the increase is much less pronounced with 3 stages. A 1-stage turbine performs poorly under the engine balance boundary condition. A portion of fluid kinetic energy is lost at the turbine discharge of the 1-stage design due to the high stage pressure ratio and the high energy content, mostly hydrogen, of the working fluid. Regarding the optimization technique, issues related to the design of experiments (DOE) have also been investigated. It is demonstrated that the criteria for selecting the database exhibit a significant impact on the efficiency and effectiveness of the construction of the response surface.
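The response-surface idea can be sketched in miniature: fit a low-order polynomial to a few design points, then optimize the cheap fitted surface instead of the expensive model. A one-variable toy version (all numbers hypothetical, not the turbine study's variables):

```python
def fit_quadratic(x, y):
    """Interpolate a second-order response surface y = b0 + b1*x + b2*x^2
    through three design points (the simplest possible RSM fit)."""
    (x0, x1, x2), (y0, y1, y2) = x, y
    # divided differences for the Newton form, expanded to monomial coefficients
    b2 = ((y2 - y1) / (x2 - x1) - (y1 - y0) / (x1 - x0)) / (x2 - x0)
    b1 = (y1 - y0) / (x1 - x0) - b2 * (x0 + x1)
    b0 = y0 - b1 * x0 - b2 * x0 * x0
    return b0, b1, b2

def surface_optimum(b0, b1, b2):
    # stationary point of the fitted quadratic: b1 + 2*b2*x = 0
    return -b1 / (2.0 * b2)

# hypothetical design points around an unknown optimum at x = 3
xs = (1.0, 2.0, 5.0)
ys = tuple(-(x - 3.0) ** 2 + 10.0 for x in xs)   # true response, maximum at x = 3
b0, b1, b2 = fit_quadratic(xs, ys)
print(surface_optimum(b0, b1, b2))  # → 3.0
```

In the actual study the surface is multivariate (6 to 15 variables) and fitted by least squares over many CFD evaluations, but the optimize-the-surrogate step is the same.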
Computer Design Technology of the Small Thrust Rocket Engines Using CAE / CAD Systems
NASA Astrophysics Data System (ADS)
Ryzhkov, V.; Lapshin, E.
2018-01-01
The paper presents an algorithm for designing a liquid small thrust rocket engine, a process that consists of five aggregated stages with feedback. Three stages of the algorithm provide engineering support for the design, and two stages cover the actual engine design. A distinctive feature of the proposed approach is a deep study of the main technical solutions at the engineering analysis stage and interaction with the created knowledge (data) base, which accelerates the process and provides enhanced design quality. Using the multifunctional graphics package Siemens NX makes it possible to obtain the final product, a rocket engine, and a set of design documentation in a fairly short time; the engine design does not require lengthy experimental development.
Influence of the implant-abutment connection design and diameter on the screw joint stability.
Shin, Hyon-Mo; Huh, Jung-Bo; Yun, Mi-Jeong; Jeon, Young-Chan; Chang, Brian Myung; Jeong, Chang-Mo
2014-04-01
This study was conducted to evaluate the influence of the implant-abutment connection design and diameter on the screw joint stability. Regular and wide-diameter implant systems with three different joint connection designs: an external butt joint, a one-stage internal cone, and a two-stage internal cone were divided into seven groups (n=5, in each group). The initial removal torque values of the abutment screw were measured with a digital torque gauge. The postload removal torque values were measured after 100,000 cycles of a 150 N and a 10 Hz cyclic load had been applied. Subsequently, the rates of the initial and postload removal torque losses were calculated to evaluate the effect of the joint connection design and diameter on the screw joint stability. Each group was compared using Kruskal-Wallis test and Mann-Whitney U test as post-hoc test (α=0.05). The postload removal torque value was high in the following order with regard to magnitude: two-stage internal cone, one-stage internal cone, and external butt joint systems. In the regular-diameter group, the external butt joint and one-stage internal cone systems showed lower postload removal torque loss rates than the two-stage internal cone system. In the wide-diameter group, the external butt joint system showed a lower loss rate than the one-stage internal cone and two-stage internal cone systems. In the two-stage internal cone system, the wide-diameter group showed a significantly lower loss rate than the regular-diameter group (P<.05). The results of this study showed that the external butt joint was more advantageous than the internal cone in terms of the postload removal torque loss. For the difference in the implant diameter, a wide diameter was more advantageous in terms of the torque loss rate.
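The loss rate compared between groups is the percentage drop from initial to postload removal torque. A sketch of that calculation (the example torque values are hypothetical, not the study's measurements):

```python
def removal_torque_loss_rate(initial_ncm, postload_ncm):
    """Percentage loss of removal torque after cyclic loading,
    used to compare screw joint stability between connection designs."""
    return (initial_ncm - postload_ncm) / initial_ncm * 100.0

# hypothetical example: 30 N*cm initial removal torque, 27 N*cm after loading
print(removal_torque_loss_rate(30.0, 27.0))  # → 10.0
```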
Microlenses and microcameras for biomedical imaging
NASA Astrophysics Data System (ADS)
Kanhere, Aditi
Liquid lens technology is a rapidly progressing field driven by the promise of low cost fabrication, faster response, fewer mechanical elements, versatility and ease of customization for different applications. Here we present the use of liquid lenses for biomedical optics and medical imaging. I will specifically focus on our approaches towards the development of two liquid-lens optical systems -- laparoscopic cameras and 3D microscopy. The first part of this work is based on the development of a multi-camera laparoscopic imaging system with tunable focusing capability. The work attempts to find a solution to overcome many of the fundamental challenges faced by current laparoscopic imaging systems. The system is developed upon the key idea that multiple, widely spread tunable microcameras can cover a large range of vantage points and field of view (FoV) for intra-abdominal visualization. Our design features multiple tunable-focus microcameras integrated with a surgical port to provide panoramic intra-abdominal visualization with enhanced depth perception. Our system can be optically tuned to focus in on objects within a range of 5 mm to infinity, with a FoV adjustable between 36 degrees and 130 degrees. This unique approach also eliminates the requirement of an exclusive imaging port and the need for navigation of cameras between ports during surgery. The second part of this report focuses on the application of tunable lenses in microscopy. Conventional wide-field microscopy is one of the most widely used optical microscopy techniques. This technique typically captures a two dimensional image of a specimen. For a volumetric visualization of the sample or to enable depth scanning along the axial direction, it is necessary to move the sample relative to the fixed focal plane of the microscope objective. For this purpose, a mechanical z-scanning stage is typically employed. The stage enables the focal plane to move through the sample. 
Typical approaches used to achieve axial scanning are a motorized stepper stage or a piezoelectric stage. While stepper motors offer the advantage of unlimited travel distance, they suffer from hysteresis. Piezoelectric stages, on the other hand, help eliminate hysteresis at the cost of the travel distance, which is reduced to 100-200 μm. Both types of stage, however, are bulky and cause vibrations and wobble in the sample due to high inertia. Additional care is required to avoid mechanical overshoots and backlash from the tip touching the sample. Additionally, for water or oil-immersion lenses, vibration of the sample stage can cause disturbance or ripples in the immersion media that can lead to significant distortion in the images. A robust alternative to the use of mechanical scanning stages is a remote focusing system that allows both the objective and the sample to be stationary. One such solution is the employment of a tunable-focus liquid lens in conjunction with a microscope objective to achieve axial scanning through a sample being imaged. Our work demonstrates the implementation of a robust, cost-effective and energy-efficient axial tuning solution for 3D microscopy based on thermo-responsive hydrogel-based tunable liquid lenses.
Two-stage fan. 4: Performance data for stator setting angle optimization
NASA Technical Reports Server (NTRS)
Burger, G. D.; Keenan, M. J.
1975-01-01
Stator setting angle optimization tests were conducted on a two-stage fan to improve efficiency at overspeed, stall margin at design speed, and both efficiency and stall margin at part speed. The fan has a design pressure ratio of 2.8, a flow rate of 184.2 lb/sec (83.55 kg/sec), and a 1st-stage rotor tip speed of 1450 ft/sec (441.96 m/sec). Performance was obtained at 70, 100, and 105 percent of design speed with different combinations of 1st-stage and 2nd-stage stator settings. One combination of settings, other than design, was common to all three speeds. At design speed, a 2.0 percentage point increase in stall margin was obtained at the expense of a 1.3 percentage point efficiency decrease. At 105 percent speed, efficiency was improved by 1.8 percentage points but stall margin decreased 4.7 percentage points. At 70 percent speed, no change in stall margin or operating line efficiency was obtained with stator resets, although considerable speed-flow regulation occurred.
Overview of the Beta II Two-Stage-To-Orbit vehicle design
NASA Technical Reports Server (NTRS)
Plencner, Robert M.
1991-01-01
A study of a near-term, low risk two-stage-to-orbit (TSTO) vehicle was undertaken. The goal of the study was to assess a fully reusable TSTO vehicle with horizontal takeoff and landing capability that could deliver 10,000 pounds to a 120 nm polar orbit. The configuration analysis was based on the Beta vehicle design. A cooperative study was performed to redesign and refine the Beta concept to meet the mission requirements. The vehicle resulting from this study was named Beta II. It has an all-airbreathing first stage and a staging Mach number of 6.5. The second stage is a conventional wing-body configuration with a single SSME.
Developmental Flight Instrumentation System for the Crew Launch Vehicle
NASA Technical Reports Server (NTRS)
Crawford, Kevin; Thomas, John
2006-01-01
The National Aeronautics and Space Administration is developing a new launch vehicle to replace the Space Shuttle. The Crew Launch Vehicle (CLV) will be a combination of new design hardware and heritage Apollo and Space Shuttle hardware. The current CLV configuration is a 5-segment solid rocket booster first stage and a new upper stage design with a modified Apollo era J-2 engine. The current schedule has two test flights with a first stage and a structurally identical upper stage without an engine. Then there will be two more test flights with a full complement of flight hardware. After the completion of the test flights, the first manned flight to the International Space Station is scheduled for late 2012. To verify the CLV's design margins, a developmental flight instrumentation (DFI) system is needed. The DFI system will collect environmental and health data from the various CLV subsystems and either transmit it to the ground or store it onboard for later evaluation on the ground. The CLV consists of 4 major elements: the first stage, the upper stage, the upper stage engine and the integration of the first stage, upper stage and upper stage engine. It is anticipated that each of the CLV's elements will have some version of DFI. This paper will discuss a conceptual DFI design for each element and also of an integrated CLV DFI system.
NASA Astrophysics Data System (ADS)
Lu, Xin-Ming
Shallow junction formation by low energy ion implantation and rapid thermal annealing faces a major challenge for ULSI (ultra large scale integration) as the line width decreases down to the sub-micrometer region. The issues include low beam current, the channeling effect in low energy ion implantation, and TED (transient enhanced diffusion) during annealing after ion implantation. In this work, boron-containing small cluster ions, such as GeB, SiB and SiB2, were generated by using the SNICS (source of negative ions by cesium sputtering) ion source and implanted into Si substrates to form shallow junctions. The use of boron-containing cluster ions effectively reduces the boron energy while keeping the energy of the cluster ion beam at a high level. At the same time, it reduces the channeling effect due to amorphization by co-implanted heavy atoms like Ge and Si. Cluster ions have been used to produce 0.65-2 keV boron for low energy ion implantation. Two-stage annealing, which is a combination of low temperature (550°C) preannealing and high temperature annealing (1000°C), was carried out to anneal the Si samples implanted with GeB and SiBn clusters. The key concept of two-stage annealing, that is, the separation of crystal regrowth and point-defect removal with dopant activation from dopant diffusion, is discussed in detail. The advantages of two-stage annealing include better lattice structure, better dopant activation and retarded boron diffusion. The junction depth of the two-stage annealed GeB sample was only half that of the one-step annealed sample, indicating that TED was suppressed by two-stage annealing. Junction depths as small as 30 nm have been achieved by two-stage annealing of a sample implanted with 5 x 10^14/cm^2 of 5 keV GeB at 1000°C for 1 second. The samples were evaluated by SIMS (secondary ion mass spectrometry) profiling, TEM (transmission electron microscopy) and RBS (Rutherford backscattering spectrometry)/channeling. 
Cluster ion implantation in combination with two-step annealing is effective in fabricating ultra-shallow junctions.
The assessment of health policy changes using the time-reversed crossover design.
Sollecito, W A; Gillings, D B
1986-01-01
The time-reversed crossover design is a quasi-experimental design which can be applied to evaluate the impact of a change in health policy on a large population. This design makes use of separate sampling and analysis strategies to improve the validity of conclusions drawn from such an evaluation. The properties of the time-reversed crossover design are presented including the use of stratification on outcome in the sampling stage, which is intended to improve external validity. It is demonstrated that, although this feature of the design introduces internal validity threats due to regression toward the mean in extreme-outcome strata, these effects can be measured and eliminated from the test of significance of treatment effects. Methods for within- and across-stratum estimation and hypothesis-testing are presented which are similar to those which have been developed for the traditional two-period crossover design widely used in clinical trials. The procedures are illustrated using data derived from a study conducted by the United Mine Workers of America Health and Retirement Funds to measure the impact of cost-sharing on health care utilization among members of its health plan. PMID:3081465
NASA Technical Reports Server (NTRS)
Cunnan, W. S.; Stevans, W.; Urasek, D. C.
1978-01-01
The aerodynamic design and the overall and blade-element performances of a 427-meter-per-second-tip-speed two-stage fan, designed with axially spaced blade rows to reduce noise transmitted upstream of the fan, are presented. At design speed the highest recorded adiabatic efficiency was 0.796 at a pressure ratio of 2.30. Peak efficiency was not established at design speed because of a damper failure which terminated testing prematurely. The overall efficiencies, at 60 and 80 percent of design speed, peaked at approximately 0.83.
Design of impact-resistant boron/aluminum large fan blade
NASA Technical Reports Server (NTRS)
Salemme, C. T.; Yokel, S. A.
1978-01-01
The technical program was comprised of two technical tasks. Task 1 encompassed the preliminary boron/aluminum fan blade design effort. Two preliminary designs were evolved. An initial design consisted of 32 blades per stage and was based on material properties extracted from manufactured blades. A final design of 36 blades per stage was based on rule-of-mixture material properties. In Task 2, the selected preliminary blade design was refined via more sophisticated analytical tools. Detailed finite element stress analysis and aero performance analysis were carried out to determine blade material frequencies and directional stresses.
Sun, Jing; Li, Zhanjiang; Buys, Nicholas; Storch, Eric A
2015-03-15
Youth with obsessive-compulsive disorder (OCD) are at risk of experiencing comorbid psychiatric conditions, such as depression and anxiety. Studies of Chinese adolescents with OCD are limited. The aim of this study was to investigate the association of depression, anxiety, and helplessness with the occurrence of OCD in Chinese adolescents. This study consisted of two stages. The first stage used a cross-sectional design involving a stratified clustered non-clinical sample of 3174 secondary school students. A clinical interview procedure was then employed to diagnose OCD in students who had a Leyton 'yes' score of 15 or above. The second stage used a case-control study design to examine the relationship of OCD to depression, anxiety and helplessness in a matched sample of 288 adolescents with clinically diagnosed OCD and 246 students without OCD. Helplessness, depression and anxiety scores were directly associated with the probability of OCD caseness. Canonical correlation analysis indicated that OCD correlated significantly with depression, anxiety, and helplessness. Cluster analysis further indicated that the severity of OCD is also associated with the severity of depression and anxiety and with the level of helplessness. These findings suggest that depression, anxiety and helplessness are important correlates of OCD in Chinese adolescents. Future studies using longitudinal and prospective designs are required to confirm these relationships as causal. Copyright © 2014 Elsevier B.V. All rights reserved.
The Sampling Design of the China Family Panel Studies (CFPS)
Xie, Yu; Lu, Ping
2018-01-01
The China Family Panel Studies (CFPS) is an on-going, nearly nationwide, comprehensive, longitudinal social survey that is intended to serve research needs on a large variety of social phenomena in contemporary China. In this paper, we describe the sampling design of the CFPS sample for its 2010 baseline survey and methods for constructing weights to adjust for sampling design and survey nonresponses. Specifically, the CFPS used a multi-stage probability strategy to reduce operation costs and implicit stratification to increase efficiency. Respondents were oversampled in five provinces or administrative equivalents for regional comparisons. We provide operation details for both sampling and weights construction. PMID:29854418
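Weight construction of this general kind (a standard textbook recipe, not the CFPS's exact procedure) can be sketched as: the design weight is the inverse of the overall inclusion probability across sampling stages, then inflated for nonresponse within an adjustment cell. The stage probabilities and response rate below are hypothetical.

```python
def base_weight(stage_probs):
    """Design weight for a multi-stage sample: the inverse of the product
    of the selection probabilities at each stage."""
    p = 1.0
    for prob in stage_probs:
        p *= prob
    return 1.0 / p

def nonresponse_adjusted(weight, response_rate):
    # inflate the design weight by the inverse response rate in the cell
    return weight / response_rate

# hypothetical three-stage selection: county, community, household
w = base_weight([0.01, 0.1, 0.5])
print(nonresponse_adjusted(w, 0.8))  # → 2500.0
```

Each responding household then represents 2500 households in the population; oversampled provinces would receive correspondingly smaller weights.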
Design analysis of a Helium re-condenser
NASA Astrophysics Data System (ADS)
Muley, P. K.; Bapat, S. L.; Atrey, M. D.
2017-02-01
Modern helium cryostats deploy a cryocooler with a re-condenser at its second stage for in-situ re-condensation of boil-off vapor. The present work is a vital step in the ongoing design of a cryocooler-based 100-litre helium cryostat with in-situ re-condensation. The cryostat incorporates a two-stage Gifford-McMahon cryocooler with a specified refrigerating capacity of 40 W at 43 K for the first stage and 1 W at 4.2 K for the second stage. Although the cryostat design ensures that the thermal load on the cryocooler stays below its specified second-stage refrigerating capacity, successful in-situ re-condensation depends on proper design of the re-condenser, which forms the objective of this work. The present work proposes a design of a helium re-condenser with straight rectangular fins. The fins are analyzed to optimize thermal performance parameters such as the condensation heat transfer coefficient, surface area for heat transfer, re-condensing capacity, efficiency and effectiveness. The resulting re-condenser design has 19 integral fins, each 10 mm high and 1.5 mm thick with a 1.5 mm gap between adjacent fins, chosen with manufacturing feasibility in mind, and achieves an efficiency of 80.96% and an effectiveness of 10.34.
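For a straight rectangular fin of the kind described, the standard textbook efficiency model (an adiabatic-tip approximation, not necessarily the paper's exact analysis) is eta = tanh(mL)/(mL) with m = sqrt(2h/(k t)). A sketch with hypothetical property values, not the paper's design numbers:

```python
import math

def fin_efficiency(h, k, t, L):
    """Efficiency of a straight rectangular fin, adiabatic-tip model:
    m = sqrt(2*h / (k*t)), eta = tanh(m*L) / (m*L).
    h: convective/condensation coefficient [W/m^2 K], k: fin conductivity
    [W/m K], t: fin thickness [m], L: fin height [m]."""
    m = math.sqrt(2.0 * h / (k * t))
    mL = m * L
    return math.tanh(mL) / mL

# hypothetical values: h = 500 W/m^2 K, copper-like k = 400 W/m K,
# 1.5 mm thick, 10 mm high (the geometry matches the abstract; h and k do not)
eta = fin_efficiency(500.0, 400.0, 0.0015, 0.010)
print(round(eta, 3))
```

Shorter or thicker fins drive mL toward zero and efficiency toward 1, which is the trade-off being optimized against surface area in the abstract.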
Architecture for a 1-GHz Digital RADAR
NASA Technical Reports Server (NTRS)
Mallik, Udayan
2011-01-01
An architecture for a Direct RF-digitization Type Digital Mode RADAR was developed at GSFC in 2008. Two variations of a basic architecture were developed for use on RADAR imaging missions using aircraft and spacecraft. Both systems can operate with a pulse repetition rate up to 10 MHz with 8 received RF samples per pulse repetition interval, or at up to 19 kHz with 4K received RF samples per pulse repetition interval. The first design describes a computer architecture for a Continuous Mode RADAR transceiver with a real-time signal processing and display architecture. The architecture can operate at a high pulse repetition rate without interruption for an infinite amount of time. The second design describes a smaller and less costly burst mode RADAR that can transceive high pulse repetition rate RF signals without interruption for up to 37 seconds. The burst-mode RADAR was designed to operate on an off-line signal processing paradigm. The temporal distribution of RF samples acquired and reported to the RADAR processor remains uniform and free of distortion in both proposed architectures. The majority of the RADAR's electronics is implemented in digital CMOS (complementary metal oxide semiconductor), and analog circuits are restricted to signal amplification operations and analog to digital conversion. An implementation of the proposed systems will create a 1-GHz, Direct RF-digitization Type, L-Band Digital RADAR--the highest band achievable for Nyquist Rate, Direct RF-digitization Systems that do not implement an electronic IF downsample stage (after the receiver signal amplification stage), using commercially available off-the-shelf integrated circuits.
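Two of the figures in the description can be checked directly: the Nyquist condition for direct RF digitization (no IF downconversion stage) and the sample throughput implied by the pulse timing. A sketch, taking "4K" as 4096 samples (an assumption):

```python
def nyquist_ok(sample_rate_hz, max_signal_freq_hz):
    """Nyquist-rate direct RF digitization: the ADC must sample at more
    than twice the highest signal frequency, with no IF downsample stage."""
    return sample_rate_hz > 2.0 * max_signal_freq_hz

def samples_per_second(prf_hz, samples_per_pri):
    # throughput implied by pulse repetition frequency x samples per interval
    return prf_hz * samples_per_pri

# figures from the abstract: 10 MHz PRF with 8 samples per PRI,
# or 19 kHz PRF with 4096 samples per PRI
print(samples_per_second(10e6, 8))     # → 80000000.0
print(samples_per_second(19e3, 4096))  # → 77824000.0
```

Both operating modes imply comparable raw sample throughput (~80 MS/s of received samples), which is what lets one processing backend serve both pulse timings.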
August, Gerald J.; Piehler, Timothy F.; Bloomquist, Michael L.
2014-01-01
OBJECTIVE The development of adaptive treatment strategies (ATS) represents the next step in innovating conduct problems prevention programs within a juvenile diversion context. Towards this goal, we present the theoretical rationale, associated methods, and anticipated challenges for a feasibility pilot study in preparation for implementing a full-scale SMART (i.e., sequential, multiple assignment, randomized trial) for conduct problems prevention. The role of a SMART design in constructing ATS is presented. METHOD The SMART feasibility pilot study includes a sample of 100 youth (13–17 years of age) identified by law enforcement as early stage offenders and referred for pre-court juvenile diversion programming. Prior data on the sample population detail a high level of ethnic diversity and approximately equal representations of both genders. Within the SMART, youth and their families are first randomly assigned to one of two different brief-type evidence-based prevention programs, featuring parent-focused behavioral management or youth-focused strengths-building components. Youth who do not respond sufficiently to brief first-stage programming will be randomly assigned a second time to either an extended parent- or youth-focused second-stage programming. Measures of proximal intervention response and measures of potential candidate tailoring variables for developing ATS within this sample are detailed. RESULTS Results of the described pilot study will include information regarding feasibility and acceptability of the SMART design. This information will be used to refine a subsequent full-scale SMART. CONCLUSIONS The use of a SMART to develop ATS for prevention will increase the efficiency and effectiveness of prevention programming for youth with developing conduct problems. PMID:25256135
Understanding the Lunar System Architecture Design Space
NASA Technical Reports Server (NTRS)
Arney, Dale C.; Wilhite, Alan W.; Reeves, David M.
2013-01-01
Based on the flexible path strategy and the desire of the international community, the lunar surface remains a destination for future human exploration. This paper explores options within the lunar system architecture design space, identifying performance requirements placed on the propulsive system that performs Earth departure within that architecture based on existing and/or near-term capabilities. The lander crew module and ascent stage propellant mass fraction are primary drivers for feasibility in multiple lander configurations. As the aggregation location moves further out of the lunar gravity well, the lunar lander is required to perform larger burns, increasing the sensitivity to these two factors. Adding an orbit transfer stage to a two-stage lunar lander and using a large storable stage for braking with a one-stage lunar lander enable higher aggregation locations than Low Lunar Orbit. Finally, while using larger vehicles enables a larger feasible design space, there are still feasible scenarios that use three launches of smaller vehicles.
NASA Technical Reports Server (NTRS)
Albritton, L. M.; Redmon, J. W.; Tyler, T. R.
1993-01-01
Seven extravehicular activity (EVA) tools and a tool carrier have been designed and developed by MSFC in order to provide a two-fault-tolerant system for the transfer orbit stage (TOS) shuttle mission. The TOS is an upper-stage booster for delivering payloads to orbits higher than the shuttle can achieve. Payloads are required not to endanger the shuttle even after two failures have occurred. The Airborne Support Equipment (ASE), used in restraining and deploying the TOS, does not meet this criterion. The seven EVA tools designed will provide the required redundancy with no impact to the TOS hardware.
Study of CFB Simulation Model with Coincidence at Multi-Working Condition
NASA Astrophysics Data System (ADS)
Wang, Z.; He, F.; Yang, Z. W.; Li, Z.; Ni, W. D.
A circulating fluidized bed (CFB) two-stage simulation model was developed. To make the model results coincide with the design or actual operating values at specified working conditions, while retaining real-time calculation capability, only the main key processes were taken into account, and the dominant factors were further abstracted from these key processes. The simulation results showed sound agreement across the working conditions and confirmed the advantage of the two-stage model over the original single-stage simulation model. The combustion-supporting effect of secondary air was investigated using the two-stage model. This model provides a solid platform for investigating the pant-leg structured CFB furnace, which is now under design for a supercritical power plant.
ERIC Educational Resources Information Center
Suprihatin, Krebet; Bin Mohamad Yusof, Hj. Abdul Raheem
2015-01-01
This study aims to evaluate the practice of academic quality assurance in design model based on seven aspects of quality are: curriculum design, teaching and learning, student assessment, student selection, support services, learning resources, and continuous improvement. The design study was conducted in two stages. The first stage is to obtain…
Computer program for preliminary design analysis of axial-flow turbines
NASA Technical Reports Server (NTRS)
Glassman, A. J.
1972-01-01
The program method is based on a mean-diameter flow analysis. Input design requirements include power or pressure ratio, flow, temperature, pressure, and speed. Turbine designs are generated for any specified number of stages and for any of three types of velocity diagrams (symmetrical, zero exit swirl, or impulse). Exit turning vanes can be included in the design. Program output includes inlet and exit annulus dimensions, exit temperature and pressure, total and static efficiencies, blading angles, and last-stage critical velocity ratios. The report presents the analysis method, a description of input and output with sample cases, and the program listing.
Strutjet-powered reusable launch vehicles
NASA Technical Reports Server (NTRS)
Siebenhaar, A.; Bulman, M. J.; Sasso, S. E.; Schnackel, J. A.
1994-01-01
Martin Marietta and Aerojet are co-investigating the feasibility and viability of reusable launch vehicle designs. We are assessing two vehicle concepts, each delivering 8000 lb to a geosynchronous transfer orbit (GTO). Both accomplish this task as a two-stage system. The major difference between the two concepts is staging. The first concept, the two-stage-to-orbit (TSTO) system, stages at about 16 kft/sec, allowing immediate return of the first stage to the launch site using its airbreathing propulsion system for a powered cruise flight. The second concept, the single-stage-to-orbit (SSTO) system, accomplishes stage separation in a stable low earth orbit (LEO).
Two stage low noise advanced technology fan. 1: Aerodynamic, structural, and acoustic design
NASA Technical Reports Server (NTRS)
Messenger, H. E.; Ruschak, J. T.; Sofrin, T. G.
1974-01-01
A two-stage fan was designed to reduce noise 20 dB below current requirements. The first-stage rotor has a design tip speed of 365.8 m/sec and a hub/tip ratio of 0.4. The fan was designed to deliver a pressure ratio of 1.9 with an adiabatic efficiency of 85.3 percent at a specific inlet corrected flow of 209.2 kg/sec/sq m. Noise reduction devices include acoustically treated casing walls, a flowpath exit acoustic splitter, a translating centerbody sonic inlet device, widely spaced blade rows, and the proper ratio of blades and vanes. Multiple-circular-arc rotor airfoils, resettable stators, split outer casings, and capability to go to close blade-row spacing are also included.
Dols, W. Stuart; Persily, Andrew K.; Morrow, Jayne B.; Matzke, Brett D.; Sego, Landon H.; Nuffer, Lisa L.; Pulsipher, Brent A.
2010-01-01
In an effort to validate and demonstrate response and recovery sampling approaches and technologies, the U.S. Department of Homeland Security (DHS), along with several other agencies, have simulated a biothreat agent release within a facility at Idaho National Laboratory (INL) on two separate occasions in the fall of 2007 and the fall of 2008. Because these events constitute only two realizations of many possible scenarios, increased understanding of sampling strategies can be obtained by virtually examining a wide variety of release and dispersion scenarios using computer simulations. This research effort demonstrates the use of two software tools, CONTAM, developed by the National Institute of Standards and Technology (NIST), and Visual Sample Plan (VSP), developed by Pacific Northwest National Laboratory (PNNL). The CONTAM modeling software was used to virtually contaminate a model of the INL test building under various release and dissemination scenarios as well as a range of building design and operation parameters. The results of these CONTAM simulations were then used to investigate the relevance and performance of various sampling strategies using VSP. One of the fundamental outcomes of this project was the demonstration of how CONTAM and VSP can be used together to effectively develop sampling plans to support the various stages of response to an airborne chemical, biological, radiological, or nuclear event. Following such an event (or prior to an event), incident details and the conceptual site model could be used to create an ensemble of CONTAM simulations which model contaminant dispersion within a building. These predictions could then be used to identify priority area zones within the building and then sampling designs and strategies could be developed based on those zones. PMID:27134782
Fuster-RuizdeApodaca, Maria Jose; Laguia, Ana; Molero, Fernando; Toledo, Javier; Arrillaga, Arantxa; Jaen, Angeles
2017-03-07
The goal of this research is to study the psychosocial determinants of HIV testing as a function of the decision or change stage concerning this health behavior. The determinants considered in the major ongoing health models and the stages contemplated in the Precaution Adoption Process Model are analysed. A cross-sectional survey was administered to 1,554 people over 16 years of age living in Spain by computer-assisted telephone interview (CATI). The sample design was randomised, with quotas for sex and age. The survey measured various psychosocial determinants of health behaviors considered in the main cognitive theories, the interviewees' stage of change concerning HIV testing (lack of awareness, decision not to act, decision to act, action, maintenance, and abandonment), and the cues to action for getting tested or the perceived barriers to being tested. Approximately two thirds of the population had never had an HIV test. The predominant stage was lack of awareness. The most frequently perceived barriers to testing were related to the health system and to stigma. We also found that the psychosocial determinants studied differed depending on the respondents' stage of change. Perception of risk, perceived self-efficacy, proximity to people who had been tested, perceived benefits of knowing the diagnosis, and a positive instrumental and emotional attitude were positively associated with the decision to be tested and maintenance of testing behavior. However, unrealistic underestimation of the risk of HIV infection, stereotypes about the infection, and the perceived severity of HIV were associated with the decision not to be tested. There are various sociocognitive and motivational profiles depending on people's decision stage concerning HIV testing. Knowing these profiles may allow us to design interventions to influence the psychosocial determinants that characterise each stage of change.
Twinn, S
1997-08-01
Although the complexity of undertaking qualitative research with non-English speaking informants has become increasingly recognized, few empirical studies exist which explore the influence of translation on the findings of the study. The aim of this exploratory study was therefore to examine the influence of translation on the reliability and validity of the findings of a qualitative research study. In-depth interviews were undertaken in Cantonese with a convenience sample of six women to explore their perceptions of factors influencing their uptake of Pap smears. Data analysis involved three stages. The first stage involved the translation and transcription of all the interviews into English independently by two translators as well as transcription into Chinese by a third researcher. The second stage involved content analysis of the three data sets to develop categories and themes and the third stage involved a comparison of the categories and themes generated from the Chinese and English data sets. Despite no significant differences in the major categories generated from the Chinese and English data, some minor differences were identified in the themes generated from the data. More significantly the results of the study demonstrated some important issues to consider when using translation in qualitative research, in particular the complexity of managing data when no equivalent word exists in the target language and the influence of the grammatical style on the analysis. In addition the findings raise questions about the significance of the conceptual framework of the research design and sampling to the validity of the study. The importance of using only one translator to maximize the reliability of the study was also demonstrated. In addition the author suggests the findings demonstrate particular problems in using translation in phenomenological research designs.
H.E. Anderson; J. Breidenbach
2007-01-01
Airborne laser scanning (LIDAR) can be a valuable tool in double-sampling forest survey designs. LIDAR-derived forest structure metrics are often highly correlated with important forest inventory variables, such as mean stand biomass, and LIDAR-based synthetic regression estimators have the potential to be highly efficient compared to single-stage estimators, which...
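The regression estimator alluded to above can be sketched as a minimal two-phase ("double") sampling illustration, not the authors' exact estimator: a cheap auxiliary variable x (a LIDAR structure metric) is observed on a large phase-1 sample, and the expensive variable y (field-measured biomass) on a small phase-2 subsample; the fitted slope then adjusts the phase-2 mean toward the phase-1 information.

```python
def double_sampling_regression(x_phase1, x_phase2, y_phase2):
    """Two-phase regression estimator of mean(y):
    ybar2 + b * (xbar1 - xbar2), with the slope b fitted by ordinary
    least squares on the phase-2 (x, y) pairs."""
    n = len(x_phase2)
    xbar2 = sum(x_phase2) / n
    ybar2 = sum(y_phase2) / n
    sxx = sum((x - xbar2) ** 2 for x in x_phase2)
    sxy = sum((x - xbar2) * (y - ybar2)
              for x, y in zip(x_phase2, y_phase2))
    b = sxy / sxx
    xbar1 = sum(x_phase1) / len(x_phase1)
    return ybar2 + b * (xbar1 - xbar2)

# With a perfectly linear relation y = 2x + 1, the estimator recovers
# 2 * mean(phase-1 x) + 1 from only three field plots:
est = double_sampling_regression([1, 2, 3, 4], [1, 2, 3], [3, 5, 7])
print(est)  # 6.0
```

The efficiency gain over a single-stage estimator grows with the correlation between the LIDAR metric and the field variable, which is the property the abstract highlights.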
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aizikov, Konstantin; Lin, Tzu-Yung; Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts 02215
The high mass accuracy and resolving power of Fourier transform ion cyclotron resonance mass spectrometers (FT-ICR MS) make them ideal mass detectors for mass spectrometry imaging (MSI), promising to provide unmatched molecular resolution capabilities. The intrinsic low tolerance of FT-ICR MS to RF interference, however, along with the typically vertical positioning of the sample and MSI acquisition speed requirements, presents numerous engineering challenges in creating robotics capable of achieving the spatial resolution to match. This work discusses a two-dimensional positioning stage designed to address these issues. The stage is capable of operating in approximately 1 x 10^-8 mbar vacuum. The range of motion is set to 100 mm x 100 mm to accommodate large samples, while the positioning accuracy is demonstrated to be less than 0.4 micron in both directions under vertical load over the entire range. This device was integrated into three different matrix-assisted laser desorption/ionization (MALDI) FT-ICR instruments and showed no detectable RF noise. The "oversampling" MALDI-MSI experiments, in which the sample is completely ablated at each position, followed by target movement of a distance smaller than the laser beam, conducted on the custom-built 7T FT-ICR MS demonstrate the stability and positional accuracy of the stage robotics, which delivers high spatial resolution mass spectral images at a fraction of the laser spot diameter.
NASA Technical Reports Server (NTRS)
Brent, J. A.; Clemmons, D. R.
1974-01-01
An experimental investigation was conducted with a 0.8 hub/tip ratio, single-stage, axial-flow compressor to determine the potential of tandem-airfoil blading for improving the efficiency and stable operating range of compressor stages. The investigation included testing of a baseline stage with single-airfoil blading and two tandem-blade stages. The overall performance of the baseline stage and the tandem-blade stage with a 20-80% loading split was considerably below the design prediction. The other tandem-blade stage, which had a rotor with a 50-50% loading split, came within 4.5% of the design pressure rise (ΔP̄/P̄₁) and matched the design stage efficiency. The baseline stage with single-airfoil blading, which was designed to account for the actual rotor inlet velocity profile and the effects of axial velocity ratio and secondary flow, achieved the design-predicted performance. The corresponding tandem-blade stage (50-50% loading split in both blade rows) slightly exceeded the design pressure rise but was 1.5 percentage points low in efficiency. The tandem rotors tested during both phases demonstrated higher pressure rise and efficiency than the corresponding single-airfoil rotor, with identical inlet and exit airfoil angles.
Diniz, Daniel G.; Silva, Geane O.; Naves, Thaís B.; Fernandes, Taiany N.; Araújo, Sanderson C.; Diniz, José A. P.; de Farias, Luis H. S.; Sosthenes, Marcia C. K.; Diniz, Cristovam G.; Anthony, Daniel C.; da Costa Vasconcelos, Pedro F.; Picanço Diniz, Cristovam W.
2016-01-01
It is known that microglial morphology and function are related, but few studies have explored the subtleties of microglial morphological changes in response to specific pathogens. In the present report we quantitated microglia morphological changes in a monkey model of dengue disease with virus CNS invasion. To mimic multiple infections that usually occur in endemic areas, where higher dengue infection incidence and abundant mosquito vectors carrying different serotypes coexist, subjects received once a week subcutaneous injections of DENV3 (genotype III)-infected culture supernatant followed 24 h later by an injection of anti-DENV2 antibody. Control animals received either weekly anti-DENV2 antibodies, or no injections. Brain sections were immunolabeled for DENV3 antigens and IBA-1. Random and systematic microglial samples were taken from the polymorphic layer of dentate gyrus for 3-D reconstructions, where we found intense immunostaining for TNFα and DENV3 virus antigens. We submitted all bi- or multimodal morphological parameters of microglia to hierarchical cluster analysis and found two major morphological phenotypes designated types I and II. Compared to type I (stage 1), type II microglia were more complex; displaying higher number of nodes, processes and trees and larger surface area and volumes (stage 2). Type II microglia were found only in infected monkeys, whereas type I microglia was found in both control and infected subjects. Hierarchical cluster analysis of morphological parameters of 3-D reconstructions of random and systematic selected samples in control and ADE dengue infected monkeys suggests that microglia morphological changes from stage 1 to stage 2 may not be continuous. PMID:27047345
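The hierarchical clustering step that separated the two phenotypes can be illustrated with a minimal single-linkage sketch; the feature vectors below (e.g. number of nodes and a size measure) are toy values standing in for the study's 3-D morphological parameters, not its measurements:

```python
from math import dist

def single_linkage(points, k):
    """Agglomerative clustering: repeatedly merge the two clusters whose
    closest members are nearest (single linkage) until k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)  # merge cluster j into cluster i
    return clusters

# Two morphological "phenotypes": simple (type I) vs complex (type II) cells.
cells = [(5, 1.0), (6, 1.2), (5, 0.9), (14, 3.0), (15, 3.2), (13, 2.8)]
groups = single_linkage(cells, 2)
print(sorted(len(g) for g in groups))  # [3, 3]
```

Cutting the resulting hierarchy at two clusters mirrors the type I / type II split reported above; in practice the cut level is chosen from the dendrogram rather than fixed in advance.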
NASA Technical Reports Server (NTRS)
Brent, J. A.
1972-01-01
Stage A, comprised of a conventional rotor and stator, was designed and tested to establish a performance baseline for comparison with the results of subsequent tests planned for two tandem-blade stages. The rotor had an inlet hub/tip ratio of 0.8 and a design tip velocity of 757 ft/sec. At design equivalent rotor speed, rotor A achieved a maximum adiabatic efficiency of 85.1 percent at a pressure ratio of 1.29. The stage maximum adiabatic efficiency was 78.6 percent at a pressure ratio of 1.27.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirron, P. J.; Kimball, M. O.; Wegel, D. C.
2010-04-09
NASA/Goddard Space Flight Center has begun developing the Soft X-ray Spectrometer (SXS) instrument that will be flown on the Japanese Astro-H mission. The SXS's 36-pixel detector array will be cooled to 50 mK using a two-stage adiabatic demagnetization refrigerator (ADR). A complicating factor for its design is that the ADR will be integrated into a superfluid helium dewar at 1.3 K that will be coupled to a 1.8 K Joule-Thomson (JT) stage through a heat switch. When liquid helium is present, the coupling will be weak, and the JT stage will act primarily as a shield to reduce parasitic heat loads. When the liquid is depleted, the heat switch will couple more strongly so that the ADR can continue to operate using the JT stage as its heat sink. A two-stage ADR is the most mass-efficient option and it has the operational flexibility to work well with a stored cryogen and a cryocooler. The stages are operated independently, and this opens up a very large parameter space for optimizing the design. This paper discusses the optimization process and the most relevant trades considered in the design of the SXS ADR, and its expected performance.
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
Pojić, Milica; Rakić, Dušan; Lazić, Zivorad
2015-01-01
A chemometric approach was applied to optimize the robustness of the NIRS method for wheat quality control. Due to the high number of experimental variables (n=6) and response variables (n=7) to be studied, the optimization experiment was divided into two stages: a screening stage, to evaluate which of the considered variables were significant, and an optimization stage, to optimize the identified factors in the previously selected experimental domain. The significant variables were identified using a fractional factorial experimental design, whilst a Box-Wilson rotatable central composite design (CCRD) was run to obtain the optimal values for the significant variables. The measured responses included moisture, protein and wet gluten content, Zeleny sedimentation value, and deformation energy. In order to achieve minimal variation in the responses, the optimal factor settings were found by minimizing the propagation of error (POE). The simultaneous optimization of factors was conducted with a desirability function. The highest desirability of 87.63% was accomplished by setting up the experimental conditions as follows: 19.9°C sample temperature, 19.3°C ambient temperature, and 240 V instrument voltage. Copyright © 2014 Elsevier B.V. All rights reserved.
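The desirability-function step can be sketched as follows, in the common Derringer-Suich form; the response values and limits below are illustrative, not the study's data:

```python
from math import prod

def d_smaller_is_better(y: float, lo: float, hi: float, s: float = 1.0) -> float:
    """Desirability of a response to be minimized: 1 at or below lo,
    0 at or above hi, with a power-law ramp (exponent s) in between."""
    if y <= lo:
        return 1.0
    if y >= hi:
        return 0.0
    return ((hi - y) / (hi - lo)) ** s

def overall_desirability(ds):
    """Simultaneous optimization score: geometric mean of the individual
    desirabilities, so any fully unacceptable response drives it to zero."""
    return prod(ds) ** (1.0 / len(ds))

# Two responses (e.g. propagation-of-error values) scored jointly:
D = overall_desirability([d_smaller_is_better(0.3, 0.0, 1.0),
                          d_smaller_is_better(0.5, 0.0, 1.0)])
print(f"{D:.4f}")  # 0.5916 = sqrt(0.7 * 0.5)
```

An optimizer then searches the factor space (sample temperature, ambient temperature, voltage) for the settings that maximize this overall score, which is how a single figure such as 87.63% arises.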
Use of leaning vanes in a two stage fan
NASA Technical Reports Server (NTRS)
Rao, G. V. R.; Digumarthi, R. V.
1975-01-01
The use of leaning vanes for tone noise reduction was examined in terms of their application in a typical two-stage high pressure ratio fan. In particular for stages designed with outlet guide vanes and zero swirl between stages, leaning the vanes of the first stage stator was studied, since increasing the number of vanes and the gap between stages do not provide the desired advantage. It was shown that noise reduction at higher harmonics of blade passing frequency can be obtained by leaning the vanes.
Mixture-based gatekeeping procedures in adaptive clinical trials.
Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji
2018-01-01
Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
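The abstract does not spell out the combination function; a common choice in adaptive multistage designs is the weighted inverse-normal rule, shown here as an assumption-labelled sketch of how stage-wise p-values are pooled at a decision point:

```python
from math import sqrt
from statistics import NormalDist

def inverse_normal_p(p1: float, p2: float, w1: float = sqrt(0.5)) -> float:
    """Combined one-sided p-value across two stages via the weighted
    inverse-normal rule: z = w1*z1 + w2*z2, where zk = Phi^-1(1 - pk)
    and the weights satisfy w1^2 + w2^2 = 1."""
    nd = NormalDist()
    w2 = sqrt(1.0 - w1 * w1)
    z = w1 * nd.inv_cdf(1.0 - p1) + w2 * nd.inv_cdf(1.0 - p2)
    return 1.0 - nd.cdf(z)

# Two moderate stage-wise p-values yield stronger combined evidence:
p = inverse_normal_p(0.04, 0.03)
print(f"{p:.4f}")
```

Because the stage weights are fixed in advance, the combined test keeps its Type I error even when stage-2 data-driven changes (such as a sample size increase) are made, which is the flexibility the mixture-based gatekeeping procedure exploits.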
Samanipour, Saer; Baz-Lomba, Jose A; Alygizakis, Nikiforos A; Reid, Malcolm J; Thomaidis, Nikolaos S; Thomas, Kevin V
2017-06-09
LC-HR-QTOF-MS has recently become a commonly used approach for the analysis of complex samples. However, identification of small organic molecules in complex samples with the highest level of confidence is a challenging task. Here we report on the implementation of a two-stage algorithm for LC-HR-QTOF-MS datasets. We compared the performance of the two-stage algorithm, implemented via NIVA_MZ_Analyzer™, with two commonly used approaches (i.e. feature detection and XIC peak picking, implemented via UNIFI by Waters and TASQ by Bruker, respectively) for the suspect analysis of four influent wastewater samples. We first evaluated the cross-platform compatibility of LC-HR-QTOF-MS datasets generated via instruments from two different manufacturers (i.e. Waters and Bruker). Our data showed that, with an appropriate spectral weighting function, the spectra recorded by the two tested instruments are comparable for our analytes. As a consequence, we were able to perform full spectral comparison between the data generated via the two studied instruments. Four extracts of wastewater influent were analyzed for 89 analytes, giving 356 detection cases. The analytes were divided into 158 detection cases of artificial suspect analytes (i.e. verified by target analysis) and 198 true suspects. The two-stage algorithm resulted in a zero rate of false positive detection, based on the artificial suspect analytes, while producing a rate of false negative detection of 0.12. For the conventional approaches, the rates of false positive detection varied between 0.06 for UNIFI and 0.15 for TASQ. The rates of false negative detection for these methods ranged between 0.07 for TASQ and 0.09 for UNIFI. The effect of background signal complexity on the two-stage algorithm was evaluated through the generation of a synthetic signal. We further discuss the boundaries of applicability of the two-stage algorithm. The importance of background knowledge and experience in evaluating the reliability of results during suspect screening was also evaluated. Copyright © 2017 Elsevier B.V. All rights reserved.
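The false-positive and false-negative rates quoted above can be computed as in this sketch; the truth/detection lists are toy values, not the study's 356 detection cases:

```python
def detection_error_rates(truth, detected):
    """False-positive rate among true absences and false-negative rate
    among verified presences, for paired truth/detection call lists."""
    fp = sum((not t) and d for t, d in zip(truth, detected))
    fn = sum(t and (not d) for t, d in zip(truth, detected))
    absences = sum(not t for t in truth)
    presences = sum(1 for t in truth if t)
    return fp / absences, fn / presences

truth    = [True, True, True, False, False]   # verified analyte presence
detected = [True, True, False, True, False]   # screening algorithm's calls
fpr, fnr = detection_error_rates(truth, detected)
print(f"false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
# false positive rate 0.50, false negative rate 0.33
```

The "artificial suspect" analytes play the role of the `truth` list here: because they were verified by target analysis, misses and spurious hits can be counted directly.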
Stage Structure of Moral Development: A Comparison of Alternative Models.
ERIC Educational Resources Information Center
Hau, Kit-Tai
This study evaluated the stage structure of several quasi-simplex and non-simplex models of moral development in two domains of moral development in a British and a Chinese sample. Analyses were based on data reported by Sachs (1992): the Chinese sample consisted of 1,005 students from grade 9 to post-college, and the British sample consisted of…
The application of welat latino for creating paes in Solo wedding bride
NASA Astrophysics Data System (ADS)
Ihsani, Ade Novi Nurul; Krisnawati, Maria; Prasetyaningtyas, Wulansari; Anggraeni, Puput; Bela, Herlina Tria; Zunaedah, Putri Wahyu
2018-03-01
The purposes of this research were: 1) to find out the process of creating the innovative welat, and 2) to find out how to use the innovative welat for creating the paes of the Solo wedding bride. The method used was research and development (R&D). The sampling technique was purposive sampling, with 13 people as models. Data were collected through observation and documentation and analysed using a descriptive technique. The results showed that 1) the welat design was changed twice during validation, with each product passing through several stages: designing, forming, determining the material, and printing; and 2) in use, the welat first determines the dot spacing between the cengkorongan of the two forms, following the existing mold. In conclusion, the innovative welat can produce paes that conform to the standard and shorten the process.
Optimising cluster survey design for planning schistosomiasis preventive chemotherapy
Sturrock, Hugh J. W.; Turner, Hugo; Whitton, Jane M.; Gower, Charlotte M.; Jemu, Samuel; Phillips, Anna E.; Meite, Aboulaye; Thomas, Brent; Kollie, Karsor; Thomas, Catherine; Rebollo, Maria P.; Styles, Ben; Clements, Michelle; Fenwick, Alan; Harrison, Wendy E.; Fleming, Fiona M.
2017-01-01
Background: The cornerstone of current schistosomiasis control programmes is delivery of praziquantel to at-risk populations. Such preventive chemotherapy requires accurate information on the geographic distribution of infection, yet the performance of alternative survey designs for estimating prevalence and converting this into treatment decisions has not been thoroughly evaluated. Methodology/Principal findings: We used baseline schistosomiasis mapping surveys from three countries (Malawi, Côte d’Ivoire and Liberia) to generate spatially realistic gold standard datasets, against which we tested alternative two-stage cluster survey designs. We assessed how sampling different numbers of schools per district (2–20) and children per school (10–50) influences the accuracy of prevalence estimates and treatment class assignment, and we compared survey cost-efficiency using data from Malawi. Due to the focal nature of schistosomiasis, up to 53% of simulated surveys involving 2–5 schools per district failed to detect schistosomiasis in low endemicity areas (1–10% prevalence). Increasing the number of schools surveyed per district improved treatment class assignment far more than increasing the number of children sampled per school. For Malawi, surveys of 15 schools per district and 20–30 children per school reliably detected endemic schistosomiasis and maximised cost-efficiency. In sensitivity analyses where treatment costs and the country considered were varied, optimal survey size was remarkably consistent, with cost-efficiency maximised at 15–20 schools per district. Conclusions/Significance: Among two-stage cluster surveys for schistosomiasis, our simulations indicated that surveying 15–20 schools per district and 20–30 children per school optimised cost-efficiency and minimised the risk of under-treatment, with surveys involving more schools becoming more cost-efficient as treatment costs rose. PMID:28552961
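The simulation approach described, drawing schools per district and then children per school against a known gold-standard prevalence surface, can be sketched as follows; the prevalence values, district size and survey sizes below are invented for illustration, not the study's data:

```python
import random

def simulate_two_stage_survey(school_prevs, n_schools, n_children, rng):
    """One simulated two-stage cluster survey: draw n_schools schools
    (first stage), test n_children per sampled school (second stage),
    and return the estimated district prevalence."""
    chosen = rng.sample(school_prevs, n_schools)
    positives = sum(
        sum(rng.random() < p for _ in range(n_children)) for p in chosen
    )
    return positives / (n_schools * n_children)

rng = random.Random(1)
# A focal, low-endemicity district: most schools have ~zero prevalence.
district = [0.0, 0.0, 0.02, 0.05, 0.2] * 4
estimates = [simulate_two_stage_survey(district, 5, 30, rng) for _ in range(200)]
# Fraction of surveys that detect no infection at all in the district:
missed = sum(e == 0.0 for e in estimates) / len(estimates)
```

Repeating this over grids of (schools per district, children per school) and tallying how often surveys miss endemic districts, or misclassify the treatment class, is the essence of the comparison reported above.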
Development of a two-stage membrane-based wash-water reclamation subsystem
NASA Technical Reports Server (NTRS)
Mccray, S. B.
1988-01-01
A two-stage membrane-based subsystem was designed and constructed to enable the recycle of wash waters generated in space. The first stage is a fouling-resistant tube-side-feed hollow-fiber ultrafiltration module, and the second stage is a spiral-wound reverse-osmosis module. Throughout long-term tests, the subsystem consistently produced high-quality permeate, processing actual wash water to 95 percent recovery.
A Portable Solid-State Moisture Meter For Agricultural And Food Products
NASA Astrophysics Data System (ADS)
Bull, C. R.; Stafford, J. V.; Weaving, G. S.
1988-10-01
This paper reports on the development of a small, robust, battery-operated near infra-red (NIR) reflectance device, designed for rapid on-farm measurement of the moisture content of forage crops without prior sample preparation. It has potential application to other agricultural or food materials. The instrument is based on two light emitting diodes (LEDs), a germanium detector and a controlling CMOS single-chip microcomputer. The meter has been calibrated to give a direct read-out of moisture content for 4 common grass varieties at 3 stages of development. The accuracy of a single-point measurement on a grass sample is approximately ±6% over a range of 40-80% (wet basis). However, the potential accuracy on a homogeneous sample may be as good as 0.15%.
Stages of condom use and decisional balance among college students.
Tung, W-C; Farmer, S; Ding, K; Tung, W-K; Hsu, C-H
2009-09-01
To explore sexual behaviours, condom use, and differences in specific items of perceived benefits of and barriers to condoms across the Transtheoretical Model (TTM) stages among college students in southern Taiwan. The TTM suggests that individuals in the action or maintenance stage exhibit higher levels of perceived benefits and lower levels of perceived barriers related to condom use than people in the precontemplation, contemplation or preparation stage. This was a descriptive, cross-sectional design with cluster sampling among college students from two universities in southern Taiwan. Participants completed self-administered questionnaires, including demographic data, the Sexual History and Condom Use Scale and the Condom Use Decisional Balance Scale. Of the 279 participants, 57% were sexually active; of these, only 11.9% used condoms consistently. Respondents in the TTM stage of action/maintenance perceived greater benefits in relation to feeling more responsible (P = 0.031) and protecting their partners as well as themselves (P = 0.028), and perceived more barriers in believing that condom use relies on a partner's cooperation (P = 0.046) than participants in precontemplation. Participants in precontemplation and contemplation perceived more barriers related to worry about making their partner angry if condoms were used than those in action/maintenance (P = 0.008). Low levels of condom use among Taiwanese college students remain a significant public health concern. HIV prevention programmes for college students in Taiwan may be enhanced if they incorporate readiness to change and perceived benefits and barriers. Future research should include a larger sample with diverse groups.
Design and optimization of a single stage centrifugal compressor for a solar dish-Brayton system
NASA Astrophysics Data System (ADS)
Wang, Yongsheng; Wang, Kai; Tong, Zhiting; Lin, Feng; Nie, Chaoqun; Engeda, Abraham
2013-10-01
According to the requirements of a solar dish-Brayton system, a centrifugal compressor stage with a minimum total pressure ratio of 5, an adiabatic efficiency above 75% and a surge margin more than 12% needs to be designed. A single stage, which consists of impeller, radial vaned diffuser, 90° crossover and two rows of axial stators, was chosen to satisfy this system. To achieve the stage performance, an impeller with a 6:1 total pressure ratio and an adiabatic efficiency of 90% was designed and its preliminary geometry came from an in-house one-dimensional program. Radial vaned diffuser was applied downstream of the impeller. Two rows of axial stators after 90° crossover were added to guide the flow into axial direction. Since jet-wake flow, shockwave and boundary layer separation coexisted in the impeller-diffuser region, optimization on the radius ratio of radial diffuser vane inlet to impeller exit, diffuser vane inlet blade angle and number of diffuser vanes was carried out at design point. Finally, an optimized centrifugal compressor stage fulfilled the high expectations and presented proper performance. Numerical simulation showed that at design point the stage adiabatic efficiency was 79.93% and the total pressure ratio was 5.6. The surge margin was 15%. The performance map including 80%, 90% and 100% design speed was also presented.
Thiol-vinyl systems as shape memory polymers and novel two-stage reactive polymer systems
NASA Astrophysics Data System (ADS)
Nair, Devatha P.
2011-12-01
The focus of this research was to formulate, characterize and tailor the reaction methodologies and material properties of thiol-vinyl systems to develop novel polymer platforms for a range of engineering applications. Thiol-ene photopolymers were demonstrated to exhibit several advantageous characteristics for shape memory polymer systems for a range of biomedical applications. The thiol-ene shape memory polymer systems were tough and flexible compared to the acrylic control systems, with glass transition temperatures between 30 and 40 °C, ideal for actuation at body temperature. The thiol-ene polymers also exhibited excellent shape fixity and a rapid and distinct shape memory actuation response, along with free strain recoveries of greater than 96% and constrained stress recoveries of 100%. Additionally, two-stage reactive thiol-acrylate systems were engineered as a polymer platform technology enabling two independent sets of polymer processing and material properties. There are distinct advantages to designing polymer systems that afford two distinct sets of material properties: an intermediate polymer that enables optimal handling and processing of the material (stage 1), while maintaining the ability to tune in different, final properties that enable the optimal functioning of the polymeric material (stage 2). To demonstrate the range of applicability of the two-stage reactive systems, three specific applications were demonstrated: shape memory polymers, lithographic impression materials, and optical materials. The thiol-acrylate reactions exhibit a wide range of application versatility due to the range of available thiol and acrylate monomers as well as reaction mechanisms such as Michael addition reactions and free radical polymerizations. By designing a series of non-stoichiometric thiol-acrylate systems, a polymer network is initially formed via a base-catalyzed 'click' Michael addition reaction.
This self-limiting reaction results in a Stage 1 polymer with excess acrylic functional groups within the network. At a later point in time, the photoinitiated free radical polymerization of the excess acrylic functional groups results in a highly crosslinked, robust material system. By varying the monomers within the system as well as the stoichiometry of thiol to acrylate functional groups, the ability of the two-stage reactive systems to encompass a wide range of properties at the end of both the Stage 1 and Stage 2 polymerizations was demonstrated. The thiol-acrylate networks exhibited intermediate Stage 1 rubbery moduli and glass transition temperatures ranging from 0.5 MPa and -10 °C to 22 MPa and 22 °C, respectively. The same polymer networks can then attain glass transition temperatures ranging from 5 °C to 195 °C and rubbery moduli of up to 200 MPa after the subsequent photocure stage. Two-stage reactive polymer composite systems were also formulated and characterized for thermomechanical and mechanical properties. Thermomechanical analysis showed that the fillers resulted in a significant increase in the modulus after both the Stage 1 and Stage 2 polymerizations without a significant change in the glass transition temperatures (Tg). The two-stage reactive matrix composite formed with a hexafunctional acrylate matrix and 20 volume % silica particles showed a 125% increase in Stage 1 modulus and a 101% increase in Stage 2 modulus compared with the modulus of the neat matrix. Finally, two-stage reactive polymeric devices were formulated, designed and mechanically characterized as orthopedic suture anchors for arthroscopic surgeries. The Stage 1 device was designed to exhibit properties ideal for arthroscopic delivery and device placement, with glass transition temperatures of 25-30 °C and rubbery moduli of ~95 MPa.
The subsequent photopolymerization generated Stage 2 polymers designed to match the local bone environment with moduli ranging up to 2 GPa. Additionally, pull-out strengths of 140 N were demonstrated and are equivalent to the pull-strengths achieved by other commercially available suture anchors.
Two-stage solar power tower cavity-receiver design and thermal performance analysis
NASA Astrophysics Data System (ADS)
Pang, Liping; Wang, Ting; Li, Ruihua; Yang, Yongping
2017-06-01
A new type of two-stage solar power tower cavity-receiver is designed, and a calculation procedure for radiation, convection and flow under a Gaussian heat flux is established to determine the piping layout and geometries of receivers I and II; the heat flux distribution at different positions is obtained. The main thermal performance measures (water/steam temperature, steam quality, wall temperature along the typical tubes, and pressure drop) are then specified according to the heat transfer and flow characteristics of two-phase flow. A systematic design process is also proposed, and the thermal performance of the two receivers is analyzed. Results show that this type of two-stage cavity-receiver can minimize the size and reduce the mean temperature of receiver I while raising the average heat flux, thus increasing the thermal efficiency of the two receivers; in addition, the multiple serpentine tubes from the header produce a more uniform distribution of outlet parameters, preventing wall overheating.
Han, Yang; Hou, Shao-Yang; Ji, Shang-Zhi; Cheng, Juan; Zhang, Meng-Yue; He, Li-Juan; Ye, Xiang-Zhong; Li, Yi-Min; Zhang, Yi-Xuan
2017-11-15
A novel method, real-time reverse transcription PCR (real-time RT-PCR) coupled with probe-melting curve analysis, has been established to detect two kinds of samples within one fluorescence channel. Besides a conventional TaqMan probe, this method employs a specially designed melting-probe with a 5' terminus modification, carrying the same fluorescent label as the TaqMan probe. By using an asymmetric PCR method, the melting-probe can detect an extra sample in the melting stage while having little influence on the amplification detection. Thus, this method allows both the amplification stage and the melting stage to be employed for detecting samples in one reaction. A demonstration by simultaneous detection of human immunodeficiency virus (HIV) and hepatitis C virus (HCV) in one channel as a model system is presented here. The sensitivity of detection by real-time RT-PCR coupled with probe-melting analysis was shown to be equal to that of conventional real-time RT-PCR. Because real-time RT-PCR coupled with probe-melting analysis can double the detection throughput within one fluorescence channel, it is expected to be a good solution to the low-throughput problem of current real-time PCR. Copyright © 2017 Elsevier Inc. All rights reserved.
Lightdrum—Portable Light Stage for Accurate BTF Measurement on Site
Havran, Vlastimil; Hošek, Jan; Němcová, Šárka; Čáp, Jiří; Bittner, Jiří
2017-01-01
We propose a miniaturised light stage for measuring the bidirectional reflectance distribution function (BRDF) and the bidirectional texture function (BTF) of surfaces on site in real-world application scenarios. The main principle of our lightweight BTF acquisition gantry is a compact hemispherical skeleton with cameras along the meridian and with light emitting diode (LED) modules shining light onto a sample surface. The proposed device is portable and achieves a high speed of measurement while maintaining a high degree of accuracy. While the positions of the LEDs are fixed on the hemisphere, the cameras allow us to cover the range of the zenith angle from 0° to 75°, and by rotating the cameras along the axis of the hemisphere we can cover all possible camera directions. This allows us to take measurements with almost the same quality as existing stationary BTF gantries. Two degrees of freedom can be set arbitrarily for measurements and the other two degrees of freedom are fixed, which provides a tradeoff between accuracy of measurements and practical applicability. Assuming that a measured sample is locally flat and spatially accessible, we can set the correct perpendicular direction against the measured sample by means of an auto-collimator prior to measuring. Further, we have designed and used a marker sticker method to allow for the easy rectification and alignment of acquired images during data processing. We show the results of our approach by images rendered for 36 measured material samples. PMID:28241466
Planning and processing multistage samples with a computer program, MUST.
John W. Hazard; Larry E. Stewart
1974-01-01
A computer program was written to handle multistage sampling designs in insect populations. It is, however, general enough to be used for any population where the number of stages does not exceed three. The program handles three types of sampling situations, all of which assume equal probability sampling. Option 1 takes estimates of sample variances, costs, and either...
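A program like MUST rests on the standard multistage estimators under equal-probability sampling. A minimal sketch for the two-stage case with equal-sized units follows; the formulas are the textbook two-stage SRS estimators, not necessarily the program's exact options, and the example data are invented:

```python
from statistics import mean, variance

def two_stage_estimate(psu_samples, N, M):
    """Estimate the population mean and its variance under two-stage
    simple random sampling: n of N primary sampling units (PSUs) are
    sampled, and m of M secondary units are sampled within each PSU
    (equal PSU sizes assumed for simplicity)."""
    n = len(psu_samples)
    m = len(psu_samples[0])
    psu_means = [mean(s) for s in psu_samples]
    y_bar = mean(psu_means)
    s_b2 = variance(psu_means)                     # between-PSU variance
    s_w2 = mean(variance(s) for s in psu_samples)  # mean within-PSU variance
    v = (1 - n / N) * s_b2 / n + (n / N) * (1 - m / M) * s_w2 / (n * m)
    return y_bar, v

# Invented counts from 2 of 10 sample plots, 3 of 5 branches each:
y_bar, v = two_stage_estimate([[1, 2, 3], [2, 3, 4]], N=10, M=5)
```

Balancing the two variance components against per-stage costs is what drives the optimal allocation of effort between stages.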
NASA/GE Energy Efficient Engine low pressure turbine scaled test vehicle performance report
NASA Technical Reports Server (NTRS)
Bridgeman, M. J.; Cherry, D. G.; Pedersen, J.
1983-01-01
The low pressure turbine for the NASA/General Electric Energy Efficient Engine is a highly loaded five-stage design featuring high outer wall slope, controlled vortex aerodynamics, low stage flow coefficient, and reduced clearances. An assessment of the performance of the LPT has been made based on a series of scaled air-turbine tests divided into two phases: Block 1 and Block 2. The transition duct and the first two stages of the turbine were evaluated during the Block 1 phase from March through August 1979. The full five-stage scale model, representing the final integrated core/low spool (ICLS) design and incorporating redesigns of stages 1 and 2 based on Block 1 data analysis, was tested as Block 2 in June through September 1981. Results from the scaled air-turbine tests, reviewed herein, indicate that the five-stage turbine designed for the ICLS application will attain an efficiency level of 91.5 percent at the Mach 0.8/10.67-km (35,000-ft), max-climb design point. This is relative to program goals of 91.1 percent for the ICLS and 91.7 percent for the flight propulsion system (FPS).
The design and evolution of the beta two-stage-to-orbit horizontal takeoff and landing launch system
NASA Technical Reports Server (NTRS)
Burkardt, Leo A.; Norris, Rick B.
1992-01-01
The Beta launch system was originally conceived in 1986 as a horizontal takeoff and landing, fully reusable, two-stage-to-orbit, manned launch vehicle to replace the Shuttle. It was to be capable of delivering a 50,000 lb. payload to low polar orbit. The booster propulsion system consisted of JP fueled turbojets and LH fueled ramjets mounted in pods in an over/under arrangement, and a single LOX/LH fueled SSME rocket. The second stage orbiter, which staged at Mach 8, was powered by an SSME rocket. A major goal was to develop a vehicle design consistent with near term technology. The vehicle design was completed with a GLOW of approximately 2,000,000 lbs. All design goals were met. Since then, interest has shifted to the 10,000 lbs. to low polar orbit payload class. The original Beta was down-sized to meet this payload class. The GLOW of the down-sized vehicle was approximately 1,000,000 lbs. The booster was converted to exclusively air-breathing operation. Because the booster depends on conventional air-breathing propulsion only, the staging Mach number was reduced to 5.5. The orbiter remains an SSME rocket-powered stage.
A Compact Two-Stage 120 W GaN High Power Amplifier for SweepSAR Radar Systems
NASA Technical Reports Server (NTRS)
Thrivikraman, Tushar; Horst, Stephen; Price, Douglas; Hoffman, James; Veilleux, Louise
2014-01-01
This work presents the design and measured results of a fully integrated switched-power two-stage GaN HEMT high-power amplifier (HPA) achieving 60% power-added efficiency at over 120 W output power. This high-efficiency GaN HEMT HPA is an enabling technology for L-band SweepSAR interferometric instruments that enable frequent repeat intervals and high-resolution imagery. The L-band HPA was designed using space-qualified state-of-the-art GaN HEMT technology. The amplifier exhibits over 34 dB of power gain at 51 dBm of output power across an 80 MHz bandwidth. The HPA is divided into two stages, an 8 W driver stage and a 120 W output stage. The amplifier is designed for pulsed operation, with a high-speed DC drain switch operating at the pulse-repetition interval and settling within 200 ns. In addition to the electrical design, a thermally optimized package was designed that allows for direct thermal radiation to maintain low junction temperatures for the GaN parts, maximizing long-term reliability. Lastly, real radar waveforms are characterized, and analysis of amplitude and phase stability over temperature demonstrates ultra-stable operation using integrated bias compensation circuitry, with less than 0.2 dB amplitude variation and 2 deg phase variation over a 70 °C range.
Koniczek, Martin; Antonuk, Larry E; El-Mohri, Youcef; Liang, Albert K; Zhao, Qihua
2017-07-01
Active matrix flat-panel imagers, which typically incorporate a pixelated array with one a-Si:H thin-film transistor (TFT) per pixel, have become ubiquitous by virtue of many advantages, including large monolithic construction, radiation tolerance, and high DQE. However, at low exposures such as those encountered in fluoroscopy, digital breast tomosynthesis and breast computed tomography, DQE is degraded due to the modest average signal generated per interacting x-ray relative to electronic additive noise levels of ~1000 e or greater. A promising strategy for overcoming this limitation is to introduce an amplifier into each pixel, referred to as the active pixel (AP) concept. Such circuits provide in-pixel amplification prior to readout and facilitate correlated multiple sampling, enhancing signal-to-noise and restoring DQE at low exposures. In this study, a methodology for theoretically investigating the signal and noise performance of imaging array designs is introduced and applied to the case of AP circuits based on low-temperature polycrystalline silicon (poly-Si), a semiconductor suited to the manufacture of large-area, radiation-tolerant arrays. Computer simulations employing an analog circuit simulator and performed in the temporal domain were used to investigate signal characteristics and major sources of electronic additive noise for various pixel amplifier designs. The noise sources include photodiode shot noise and resistor thermal noise, as well as TFT thermal and flicker noise. TFT signal behavior and flicker noise were parameterized from fits to measurements performed on individual poly-Si test TFTs. The performance of three single-stage and three two-stage pixel amplifier designs was investigated under conditions relevant to fluoroscopy. The study assumes a 20 × 20 cm², 150 μm pitch array operated at 30 fps and coupled to a CsI:Tl x-ray converter.
Noise simulations were performed as a function of operating conditions, including sampling mode, of the designs. The total electronic additive noise included noise contributions from each circuit component. The total noise results were found to exhibit a strong dependence on circuit design and operating conditions, with TFT flicker noise generally found to be the dominant noise contributor. For the single-stage designs, significantly increasing the size of the source-follower TFT substantially reduced flicker noise - with the lowest total noise found to be ~574 e [rms]. For the two-stage designs, in addition to tuning TFT sizes and introducing a low-pass filter, replacing a p-type TFT with a resistor (under the assumption in the study that resistors make no flicker noise contribution) resulted in significant noise reduction - with the lowest total noise found to be ~336 e [rms]. A methodology based on circuit simulations which facilitates comprehensive explorations of signal and noise characteristics has been developed and applied to the case of poly-Si AP arrays. The encouraging results suggest that the electronic additive noise of such devices can be substantially reduced through judicious circuit design, signal amplification, and multiple sampling. This methodology could be extended to explore the noise performance of arrays employing other pixel circuitry such as that for photon counting as well as other semiconductor materials such as a-Si:H and a-IGZO. © 2017 American Association of Physicists in Medicine.
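Since the noise contributors listed (photodiode shot, resistor thermal, TFT thermal and flicker noise) are treated as independent, the total electronic additive noise combines in quadrature. A sketch of that combination; the contributor values below are invented, not the paper's breakdown:

```python
from math import sqrt

def total_rms_noise(contributions_e_rms):
    """Independent noise sources add in quadrature (root-sum-square):
    total = sqrt(sum of squared contributions), in electrons rms."""
    return sqrt(sum(c * c for c in contributions_e_rms))

# Invented per-source values in electrons rms for one hypothetical design:
total = total_rms_noise([300.0, 120.0, 80.0])
```

This is why the dominant contributor (flicker noise, per the study) sets the floor: shrinking a minor source barely moves the root-sum-square total.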
Cotruta, Bogdan; Gheorghe, Cristian; Iacob, Razvan; Dumbrava, Mona; Radu, Cristina; Bancila, Ion; Becheanu, Gabriel
2017-12-01
Evaluation of severity and extension of gastric atrophy and intestinal metaplasia is recommended to identify subjects with a high risk for gastric cancer. The inter-observer agreement for the assessment of gastric atrophy is reported to be low. The aim of the study was to evaluate the inter-observer agreement for the assessment of severity and extension of gastric atrophy using oriented and unoriented gastric biopsy samples. Furthermore, the quality of biopsy specimens in oriented and unoriented samples was analyzed. A total of 35 subjects with dyspeptic symptoms addressed for gastrointestinal endoscopy who agreed to enter the study were prospectively enrolled. The OLGA/OLGIM gastric biopsies protocol was used. From each subject two sets of biopsies were obtained (four from the antrum, two oriented and two unoriented; two from the gastric incisure, one oriented and one unoriented; four from the gastric body, two oriented and two unoriented). The orientation of the biopsy samples was completed using nitrocellulose filters (Endokit®, BioOptica, Milan, Italy). The samples were blindly examined by two experienced pathologists. Inter-observer agreement was evaluated using the kappa statistic for inter-rater agreement. The quality of histopathology specimens, taking into account the identification of the lamina propria, was analyzed in oriented vs. unoriented samples. Samples with detectable lamina propria mucosae were defined as good quality specimens. Categorical data were analyzed using the chi-square test, and a two-sided p value <0.05 was considered statistically significant. A total of 350 biopsy samples were analyzed (175 oriented / 175 unoriented). The kappa index values for oriented/unoriented OLGA 0/I/II/III and IV stages were 0.62/0.13, 0.70/0.20, 0.61/0.06, 0.62/0.46, and 0.77/0.50, respectively. For OLGIM 0/I/II/III stages the kappa index values for oriented/unoriented samples were 0.83/0.83, 0.88/0.89, 0.70/0.88 and 0.83/1, respectively.
No case of OLGIM IV stage was found in the present case series. Good quality histopathology specimens were obtained in 95.43% of the oriented biopsy samples and in 89.14% of the unoriented biopsy samples (p=0.0275). The orientation of gastric biopsy specimens improves the inter-observer agreement for the assessment of gastric atrophy.
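The kappa statistic reported above measures chance-corrected agreement between the two pathologists. A minimal sketch of Cohen's kappa from a cross-tabulation of the two raters' stage assignments; the table values are illustrative, not the study's data:

```python
def cohens_kappa(table):
    """Cohen's kappa for a k x k inter-rater table, where table[i][j]
    counts samples scored stage i by rater 1 and stage j by rater 2."""
    k = len(table)
    total = sum(sum(row) for row in table)
    p_obs = sum(table[i][i] for i in range(k)) / total      # observed agreement
    row_m = [sum(table[i]) / total for i in range(k)]        # rater-1 marginals
    col_m = [sum(table[i][j] for i in range(k)) / total for j in range(k)]
    p_exp = sum(r * c for r, c in zip(row_m, col_m))         # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

# Invented 2x2 example (e.g. atrophy present/absent calls by two raters):
kappa = cohens_kappa([[20, 5], [10, 15]])
```

Kappa of 1 means perfect agreement and 0 means agreement no better than chance, which is why the low unoriented-sample values (e.g. 0.06, 0.13) indicate near-chance concordance.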
Japan's launch vehicle program update
NASA Astrophysics Data System (ADS)
Tadakawa, Tsuguo
1987-06-01
NASDA is actively engaged in the development of H-I and H-II launch vehicle performance capabilities in anticipation of future mission requirements. The H-I has both two-stage and three-stage versions for medium-altitude and geosynchronous orbits, respectively; the restart capability of the second stage affords considerable mission planning flexibility. The H-II vehicle is a two-stage liquid rocket primary propulsion design employing two solid rocket boosters for secondary power; it is capable of launching two-ton satellites into geosynchronous orbit, and reduces manufacture and launch costs by extensively employing off-the-shelf technology.
Experimental early-stage coalification of a peat sample and a peatified wood sample from Indonesia
Orem, W.H.; Neuzil, S.G.; Lerch, H.E.; Cecil, C.B.
1996-01-01
Experimental coalification of a peat sample and a buried wood sample from domed peat deposits in Indonesia was carried out to examine chemical structural changes in organic matter during early-stage coalification. The experiment (125 °C, 408 atm lithostatic pressure, and 177 atm fluid pressure for 75 days) was designed to maintain both lithostatic and fluid pressure on the sample, but allow by-products that may retard coalification to escape. We refer to this design as a geologically open system. Changes in the elemental composition, and 13C NMR and FTIR spectra of the peat and wood after experimental coalification suggest preferential thermal decomposition of O-containing aliphatic organic compounds (probably cellulose) during early-stage coalification. The elemental compositions and 13C NMR spectra of the experimentally coalified peat and wood were generally similar to those of Miocene coal and coalified wood samples from Indonesia. Yields of lignin phenols in the peat and wood samples decreased following experimental coalification; the wood sample exhibited a larger change. Lignin phenol yields from the experimentally coalified peat and wood were comparable to yields of lignin phenols from Miocene Indonesian lignite and coalified wood. Changes in syringyl/vanillyl and p-hydroxy/vanillyl ratios suggest direct demethoxylation as a secondary process to demethylation of methoxyl groups during early coalification, and changes in lignin phenol yields and acid/aldehyde ratios point to a coupling between demethoxylation processes and reactions in the alkyl side chain bonds of the α-carbon in lignin phenols.
Rafieenia, Razieh; Girotto, Francesca; Peng, Wei; Cossu, Raffaello; Pivato, Alberto; Raga, Roberto; Lavagnolo, Maria Cristina
2017-01-01
Aerobic pre-treatment was applied prior to a two-stage anaerobic digestion process. Three different food waste samples, namely carbohydrate rich, protein rich and lipid rich, were prepared as substrates. The effect of aerobic pre-treatment on hydrogen and methane production was studied. Pre-aeration of substrates showed no positive impact on hydrogen production in the first stage. All three categories of pre-aerated food wastes produced less hydrogen compared to samples without pre-aeration. In the second stage, methane production increased for aerated protein rich and carbohydrate rich samples. In addition, the lag phase for the carbohydrate rich substrate was shorter for aerated samples. The aerated protein rich substrate yielded the best results among substrates for methane production, with a cumulative production of approximately 351 ml/gVS. With regard to non-aerated substrates, lipid rich was the best substrate for CH4 production (263 ml/gVS). The pre-aerated protein rich substrate was the best in terms of total energy generation, which amounted to 9.64 kJ/gVS. This study revealed aerobic pre-treatment to be a promising option for achieving enhanced substrate conversion efficiencies and CH4 production in a two-stage AD process, particularly when the substrate contains high amounts of proteins. Copyright © 2016 Elsevier Ltd. All rights reserved.
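Converting a cumulative methane yield into an energy yield, as underlies kJ/gVS figures like the one above, is a unit conversion against methane's heating value. A sketch with an assumed lower heating value; note the abstract's energy total also includes the hydrogen stage, so this conversion alone will not reproduce it:

```python
def methane_energy(ml_ch4_per_gvs, lhv_kj_per_litre=35.8):
    """Energy yield (kJ per g volatile solids) from a methane yield
    (ml CH4 per g volatile solids), using an assumed lower heating
    value of ~35.8 kJ/L for methane at standard conditions."""
    return ml_ch4_per_gvs / 1000.0 * lhv_kj_per_litre

# e.g. the non-aerated lipid rich substrate's methane yield:
e = methane_energy(263)
```

Summing such terms over the hydrogen and methane stages (each with its own heating value) gives the total energy generation used to rank substrates.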
Development of a Two-Stage Mars Ascent Vehicle Using In-Situ Propellant Production
NASA Technical Reports Server (NTRS)
Paxton, Laurel; Vaughan, David
2014-01-01
Mars Sample Return (MSR) and Mars In-Situ Resource Utilization (ISRU) present two main challenges for the advancement of Mars science. MSR would demonstrate Mars lift-off capability, while ISRU would test the ability to produce fuel and oxidizer using Martian resources, a crucial step for future human missions. A two-stage Mars Ascent Vehicle (MAV) concept was developed to support sample return as well as in-situ propellant production. The MAV would be powered by a solid rocket first stage and a LOX-propane second stage. A liquid second-stage provides higher orbit insertion reliability than a solid second stage as well as a degree of complexity eventually required for manned missions. Propane in particular offers comparable performance to methane without requiring cryogenic storage. The total MAV mass would be 119.9 kg to carry an 11 kg payload to orbit. The feasibility of in-situ fuel and oxidizer production was also examined. Two potential schemes were evaluated for production capability, size and power requirements. The schemes examined utilize CO2 and water as starting blocks to produce LOX and a propane blend. The infrastructure required to fuel and launch the MAV was also explored.
Grabber arm mechanism for the Italian Research Interim Stage (IRIS)
NASA Technical Reports Server (NTRS)
Turci, Edmondo
1987-01-01
Two deployable arms, named grabbers, were designed and manufactured to provide lateral stability of the perigee spinning stage which will be deployed from the Space Shuttle cargo bay. The spinning stage is supported by a spin table on a cradle at its base. The Italian Research Interim Stage (IRIS) is designed to carry satellites of intermediate mass up to 900 kg. The requirements are defined and the mechanism is described. Functional test results are presented.
Homogeneity tests of clustered diagnostic markers with applications to the BioCycle Study
Tang, Liansheng Larry; Liu, Aiyi; Schisterman, Enrique F.; Zhou, Xiao-Hua; Liu, Catherine Chun-ling
2014-01-01
Diagnostic trials often require the use of a homogeneity test among several markers. Such a test may be necessary to determine the power both during the design phase and in the initial analysis stage. However, no formal method is available for the power and sample size calculation when the number of markers is greater than two and marker measurements are clustered in subjects. This article presents two procedures for testing the accuracy among clustered diagnostic markers. The first procedure is a test of homogeneity among continuous markers based on a global null hypothesis of the same accuracy. The result under the alternative provides the explicit distribution for the power and sample size calculation. The second procedure is a simultaneous pairwise comparison test based on weighted areas under the receiver operating characteristic curves. This test is particularly useful if a global difference among markers is found by the homogeneity test. We apply our procedures to the BioCycle Study designed to assess and compare the accuracy of hormone and oxidative stress markers in distinguishing women with ovulatory menstrual cycles from those without. PMID:22733707
Note: A wide temperature range MOKE system with annealing capability.
Chahil, Narpinder Singh; Mankey, G J
2017-07-01
A novel sample stage integrated with a longitudinal MOKE system has been developed for wide temperature range measurements and annealing capabilities in the temperature range 65 K < T < 760 K. The sample stage incorporates a removable platen and copper block with inserted cartridge heater and two thermocouple sensors. It is supported and thermally coupled to a cold finger with two sapphire bars. The sapphire based thermal coupling enables the system to perform at higher temperatures without adversely affecting the cryostat and minimizes thermal drift in position. In this system the hysteresis loops of magnetic samples can be measured simultaneously while annealing the sample in a magnetic field.
Impedance-matched Marx generators
NASA Astrophysics Data System (ADS)
Stygar, W. A.; LeChien, K. R.; Mazarakis, M. G.; Savage, M. E.; Stoltzfus, B. S.; Austin, K. N.; Breden, E. W.; Cuneo, M. E.; Hutsel, B. T.; Lewis, S. A.; McKee, G. R.; Moore, J. K.; Mulville, T. D.; Muron, D. J.; Reisman, D. B.; Sceiford, M. E.; Wisher, M. L.
2017-04-01
We have conceived a new class of prime-power sources for pulsed-power accelerators: impedance-matched Marx generators (IMGs). The fundamental building block of an IMG is a brick, which consists of two capacitors connected electrically in series with a single switch. An IMG comprises a single stage or several stages distributed axially and connected in series. Each stage is powered by a single brick or several bricks distributed azimuthally within the stage and connected in parallel. The stages of a multistage IMG drive an impedance-matched coaxial transmission line with a conical center conductor. When the stages are triggered sequentially to launch a coherent traveling wave along the coaxial line, the IMG achieves electromagnetic-power amplification by triggered emission of radiation. Hence a multistage IMG is a pulsed-power analogue of a laser. To illustrate the IMG approach to prime power, we have developed conceptual designs of two ten-stage IMGs with LC time constants on the order of 100 ns. One design includes 20 bricks per stage, and delivers a peak electrical power of 1.05 TW to a matched-impedance 1.22-Ω load. The design generates 113 kV per stage and has a maximum energy efficiency of 89%. The other design includes a single brick per stage, delivers 68 GW to a matched-impedance 19-Ω load, generates 113 kV per stage, and has a maximum energy efficiency of 90%. For a given electrical-power-output time history, an IMG is less expensive and slightly more efficient than a linear transformer driver, since an IMG does not use ferromagnetic cores.
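The quoted peak-power figures can be sanity-checked with a back-of-the-envelope calculation, assuming the erected voltage is simply ten times the 113 kV per-stage voltage and that peak power into the matched load is V²/Z. Both assumptions are simplifications for illustration, not the authors' circuit model:

```python
# Back-of-the-envelope check of the quoted IMG peak powers.
# Assumptions (not from the paper's full circuit model): the erected
# voltage is stages * per-stage voltage, and peak power delivered to
# the matched load is V**2 / Z.
def peak_power_w(stages, v_per_stage, z_load_ohm):
    v = stages * v_per_stage
    return v ** 2 / z_load_ohm

p20 = peak_power_w(10, 113e3, 1.22)  # ~1.05e12 W, matching the 1.05 TW design
p1 = peak_power_w(10, 113e3, 19.0)   # ~6.7e10 W, close to the 68 GW design
```

Under these assumptions the 20-brick and single-brick designs come out within a couple of percent of the quoted 1.05 TW and 68 GW figures.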
Dzul, Maria C.; Dixon, Philip M.; Quist, Michael C.; Dinsomore, Stephen J.; Bower, Michael R.; Wilson, Kevin P.; Gaines, D. Bailey
2013-01-01
We used variance components to assess allocation of sampling effort in a hierarchically nested sampling design for ongoing monitoring of early life history stages of the federally endangered Devils Hole pupfish (DHP) (Cyprinodon diabolis). Sampling design for larval DHP included surveys (5 days each spring 2007–2009), events, and plots. Each survey was comprised of three counting events, where DHP larvae on nine plots were counted plot by plot. Statistical analysis of larval abundance included three components: (1) evaluation of power from various sample size combinations, (2) comparison of power in fixed and random plot designs, and (3) assessment of yearly differences in the power of the survey. Results indicated that increasing the sample size at the lowest level of sampling represented the most realistic option to increase the survey's power, fixed plot designs had greater power than random plot designs, and the power of the larval survey varied by year. This study provides an example of how monitoring efforts may benefit from coupling variance components estimation with power analysis to assess sampling design.
[Role of creative discussion in the learning of critical reading of scientific articles].
Cobos-Aguilar, Héctor; Viniegra-Velázquez, Leonardo; Pérez-Cortés, Patricia
2011-01-01
To compare two active educational strategies for critical reading (two-stage and three-stage) for research learning in medical students. Four groups were formed in a quasi-experimental design. The three-stage strategy (critical reading guide resolution, creative discussion, group discussion) was applied to two groups of medical students: g1, n = 9, with school marks > 90, and g2, n = 19, with marks < 90. The two-stage groups (guide resolution and group discussion) comprised pre-graduate interns, g3, n = 17 and g4, n = 12, who attended social security general hospitals. A validated and consistent survey with 144 items was applied to the four groups before and after the educational strategies. Critical reading and its subcomponents, interpretation, judgment and proposal, were evaluated with 47, 49 and 48 items, respectively. Case-control, cohort, diagnostic test and clinical trial designs were evaluated. Nonparametric significance tests were performed to compare the groups and their results, and a bias calculation was performed for each group. The three-stage groups (g1 and g2) obtained the highest overall medians, as well as the highest medians in interpretation, judgment and proposal. Results across the several research designs were also higher in these groups. An active three-stage educational strategy is superior to a two-stage one in medical students. Performing these activities is advisable for better learning in our students.
NASA Astrophysics Data System (ADS)
Irfiana, D.; Utami, R.; Khasanah, L. U.; Manuhara, G. J.
2017-04-01
The purpose of this study was to determine the effect of two-stage cinnamon bark oleoresin microcapsules (0%, 0.5% and 1%) on the TPC (total plate count), TBA (thiobarbituric acid), pH, and RGB color (red, green, and blue) of vacuum-packed ground beef during refrigerated storage (at 0, 4, 8, 12, and 16 days). This study showed that the addition of two-stage cinnamon bark oleoresin microcapsules affected the quality of vacuum-packed ground beef during 16 days of refrigerated storage. The TPC values of the samples with 0.5% and 1% microcapsules added were lower than that of the control sample: 5.94, 5.46, and 5.16 log CFU/g for the control, 0.5%, and 1% samples, respectively. The corresponding TBA values were 0.055, 0.041, and 0.044 mg malonaldehyde/kg on the 16th day of storage. The addition of two-stage cinnamon bark oleoresin microcapsules could thus inhibit microbial growth and slow the oxidation of vacuum-packed ground beef. Moreover, the changes in pH and RGB color with 0.5% and 1% microcapsules added were smaller than those of the control sample. The addition of 1% microcapsules showed the best preserving effect on the vacuum-packed ground beef.
Computer code for preliminary sizing analysis of axial-flow turbines
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1992-01-01
This mean diameter flow analysis uses a stage average velocity diagram as the basis for the computational efficiency. Input design requirements include power or pressure ratio, flow rate, temperature, pressure, and rotative speed. Turbine designs are generated for any specified number of stages and for any of three types of velocity diagrams (symmetrical, zero exit swirl, or impulse) or for any specified stage swirl split. Exit turning vanes can be included in the design. The program output includes inlet and exit annulus dimensions, exit temperature and pressure, total and static efficiencies, flow angles, and last stage absolute and relative Mach numbers. An analysis is presented along with a description of the computer program input and output with sample cases. The analysis and code presented herein are modifications of those described in NASA-TN-D-6702. These modifications improve modeling rigor and extend code applicability.
Yap, Christina; Pettitt, Andrew; Billingham, Lucinda
2013-07-03
Background As there are limited patients for chronic lymphocytic leukaemia trials, it is important that statistical methodologies in Phase II efficiently select regimens for subsequent evaluation in larger-scale Phase III trials. Methods We propose the screened selection design (SSD), which is a practical multi-stage, randomised Phase II design for two experimental arms. Activity is first evaluated by applying Simon’s two-stage design (1989) on each arm. If both are active, the play-the-winner selection strategy proposed by Simon, Wittes and Ellenberg (SWE) (1985) is applied to select the superior arm. A variant of the design, Modified SSD, also allows the arm with the higher response rates to be recommended only if its activity rate is greater by a clinically-relevant value. The operating characteristics are explored via a simulation study and compared to a Bayesian Selection approach. Results Simulations showed that with the proposed SSD, it is possible to retain the sample size as required in SWE and obtain similar probabilities of selecting the correct superior arm of at least 90%; with the additional attractive benefit of reducing the probability of selecting ineffective arms. This approach is comparable to a Bayesian Selection Strategy. The Modified SSD performs substantially better than the other designs in selecting neither arm if the underlying rates for both arms are desirable but equivalent, allowing for other factors to be considered in the decision making process. Though its probability of correctly selecting a superior arm might be reduced, it still performs reasonably well. It also reduces the probability of selecting an inferior arm. Conclusions SSD provides an easy to implement randomised Phase II design that selects the most promising treatment that has shown sufficient evidence of activity, with available R codes to evaluate its operating characteristics. PMID:23819695
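The per-arm Simon two-stage rule that the SSD builds on can be sketched as below; the boundary values (first-stage response cutoff r1 and overall cutoff r, with assumed stage sizes n1 = 10 and n = 29) are illustrative placeholders, not the design parameters used in the SSD paper:

```python
# Minimal sketch of a Simon two-stage futility/activity rule for one
# arm. The boundaries (r1=1 out of an assumed n1=10 first-stage
# patients; r=5 out of an assumed n=29 total) are illustrative
# assumptions only, not taken from the SSD paper.
def simon_two_stage(responses_stage1, responses_total, r1=1, r=5):
    """Classify one arm: stop at stage 1 if responses_stage1 <= r1;
    otherwise declare the arm active iff responses_total > r."""
    if responses_stage1 <= r1:
        return "stop-for-futility"
    return "active" if responses_total > r else "not-active"
```

In the SSD, only arms classified as active proceed to the play-the-winner comparison between the two arms.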
Gallium Electromagnetic (GEM) Thruster Concept and Design
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Markusic, Thomas E.
2005-01-01
We describe the design of a new type of two-stage pulsed electromagnetic accelerator, the gallium electromagnetic (GEM) thruster. A schematic illustration of the GEM thruster concept is given. In this concept, liquid gallium propellant is pumped into the first stage through a porous metal electrode using an electromagnetic pump. At a designated time, a pulsed discharge (approx. 10-50 J) is initiated in the first stage, ablating the liquid gallium from the porous electrode surface and ejecting a dense thermal gallium plasma into the second stage. The presence of the gallium plasma in the second stage serves to trigger the high-energy (approx. 500 J) second-stage pulse, which provides the primary electromagnetic (j x B) acceleration.
NASA Technical Reports Server (NTRS)
Whitney, W. J.; Behning, F. P.; Moffitt, T. P.; Hotz, G. M.
1980-01-01
The stage group performance of a 4 1/2 stage turbine with an average stage loading factor of 4.66 and high specific work output was determined in cold air at design equivalent speed. The four-stage turbine configuration produced design equivalent work output with an efficiency of 0.856, a barely discernible difference from the 0.855 obtained for the complete 4 1/2 stage turbine in a previous investigation. The turbine design procedure embodied the following features: (1) controlled vortex flow, (2) tailored radial work distribution, and (3) control of the location of the boundary-layer transition point on the airfoil suction surface. The efficiency forecast for the 4 1/2 stage turbine was 0.886, and the value predicted using a reference method was 0.862. The stage group performance results were used to determine the individual stage efficiencies for the condition at which the design 4 1/2 stage work output was obtained. The efficiencies of stages one and four were about 0.020 lower than the predicted value, that of stage two was 0.014 lower, and that of stage three was about equal to the predicted value. Thus all the stages operated reasonably close to their expected performance levels, and the overall (4 1/2 stage) performance was not degraded by any particularly inefficient component.
Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.
Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham
2017-12-01
During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at two stages: the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for the process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success-run theorem appeared to be the most appropriate of the various methods considered in this work for computing the sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate, the confidence to detect out-of-specification units decreases, which must be compensated by an increase in sample size to enhance the confidence of the estimate. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to understand the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
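The quoted sample sizes are consistent with the zero-failure success-run formula n = ceil(ln(1 − C) / ln(R)); a 95% confidence level C is assumed here, since the abstract states only the reliability levels:

```python
import math

# Success-run (zero-failure) sample size: n = ceil(ln(1 - C) / ln(R)),
# where C is the confidence level and R the required reliability.
# C = 0.95 is an assumption; the abstract quotes only the reliability
# levels (0.99 / 0.95 / 0.90 for high / medium / low risk).
def success_run_n(reliability, confidence=0.95):
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

sizes = [success_run_n(r) for r in (0.99, 0.95, 0.90)]  # [299, 59, 29]
```

With C = 0.95 this reproduces the 299 / 59 / 29 figures quoted in the abstract exactly.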
Schmit, Stephanie L.; Stadler, Zsofia K.; Joseph, Vijai; Zhang, Lu; Willis, Joseph E.; Scacheri, Peter; Veigl, Martina; Adams, Mark D.; Raskin, Leon; Sullivan, John F.; Stratton, Kelly; Shia, Jinru; Ellis, Nathan; Rennert, Hedy S.; Manschreck, Christopher; Li, Li; Offit, Kenneth; Elston, Robert C.; Rennert, Gadi; Gruber, Stephen B.
2016-01-01
We tested for germline variants showing association with colon cancer metastasis using a genome-wide association study that compared Ashkenazi Jewish individuals with stage IV metastatic colon cancers versus those with stage I or II non-metastatic colon cancers. In a two-stage study design, we demonstrated significant association with developing metastatic disease for rs60745952, which showed an odds ratio (OR) = 2.3 (P = 2.73E-06) in the Ashkenazi discovery cohort and OR = 1.89 (P = 8.05E-04) in the validation cohort (exceeding the validation threshold of 0.0044). Significant association with metastatic colon cancer was further confirmed by a meta-analysis of rs60745952 in these datasets plus an additional Ashkenazi validation cohort (OR = 1.92; 95% CI: 1.28–2.87), and by a permutation test that demonstrated a significantly longer haplotype surrounding rs60745952 in the stage IV samples. rs60745952, located in an intergenic region on chromosome 4q31.1 and not previously associated with cancer, is thus a germline genetic marker for susceptibility to developing colon cancer metastases among Ashkenazi Jews. PMID:26751797
Continuous xylose fermentation by Candida shehatae in a two-stage reactor
M. A. Alexander; T. W. Chapman; T. W. Jeffries
1988-01-01
Recent work has identified ethanol toxicity as a major factor preventing continuous production of ethanol at the concentrations obtainable in batch culture. In this paper we investigate the use of a continuous two-stage bioreactor design to circumvent toxic effects of ethanol. Biomass is produced via continuous culture in the first stage reactor in which ethanol...
Montasser, Mona A; Viana, Grace; Evans, Carla A
2017-04-01
To investigate the presence of secular trends in skeletal maturation of girls and boys as assessed by the use of cervical vertebrae bones. The study compared two main groups: the first included data collected from the Denver growth study (1930s to 1960s) and the second included data collected from recent pretreatment records (1980s to 2010s) of patients from the orthodontic clinic of a North American University. The records from the two groups were all for Caucasian subjects. The sample for each group included 78 lateral cephalographs for girls and the same number for boys. The age of the subjects ranged from 7 to 18 years. Cervical vertebrae maturation (CVM) stages were directly assessed from the radiographs according to the method described by Hassel and Farman in which six CVM stages were designated from cervical vertebrae 2, 3, and 4. The mean age of girls from the Denver growth study and girls from the university clinic in each of the six CVM stages was not different at P ≤0.05. However, the mean age of boys from the two groups was not different only in stage 3 (P = 0.139) and stage 4 (P = 0.211). The results showed no evidence to indicate a tendency for earlier skeletal maturation of girls or boys. Boys in the university group started their skeletal maturation later than boys in the Denver group and completed their maturation earlier. Gender was a significant factor affecting skeletal maturation stages in both Denver and university groups. © The Author 2016. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The model is designed to enable decision makers to compare the economics of geothermal projects with the economics of alternative energy systems at an early stage in the decision process. The geothermal engineering and economic feasibility computer model (GEEF) is written in FORTRAN IV and can be run on a mainframe or a mini-computer system. An abbreviated version of the model is being developed for use with a programmable desk calculator. The GEEF model has two main segments, namely (i) the engineering design/cost segment and (ii) the economic analysis segment. In the engineering segment, the model determines the numbers of production and injection wells, the heat exchanger design, the operating parameters for the system, the requirement for a supplementary system (to augment the working fluid temperature if the resource temperature is not sufficiently high), and the fluid flow rates. The model can handle single-stage systems as well as two-stage cascaded systems in which the second stage may involve a space heating application after a process heat application in the first stage.
Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm
NASA Technical Reports Server (NTRS)
2005-01-01
This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (from July 2003 to June 2004), Dr. Lian focused on building the essential capabilities required for the project. More specifically, he worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system combining a GA with a surrogate model. In the second stage (from July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization as a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. His numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.
NASA Technical Reports Server (NTRS)
Converse, David
2011-01-01
Fan designs are often constrained by envelope, rotational speed, weight, and power. Aerodynamic performance and motor electrical performance are heavily influenced by rotational speed. The fan used in this work is at a practical limit for rotational speed due to motor performance characteristics, and there is no more space available in the packaging for a larger fan, yet the pressure rise requirements keep growing. The usual ways to accommodate a higher pressure rise (ΔP) are to spin faster or to enlarge the fan rotor diameter. The invention is to put two radially oriented stages on a single disk. Flow enters the first stage from the center; energy is imparted to the flow in the first-stage blades, the flow is redirected some amount opposite to the direction of rotation in the fixed stators, and more energy is imparted to the flow in the second-stage blades. Without increasing either rotational speed or disk diameter, it is believed that as much as 50 percent more ΔP can be achieved with this design than with an ordinary, single-stage centrifugal design. This invention is useful primarily for fans having relatively low flow rates with relatively high pressure rise requirements.
Potential of extended airbreathing operation of a two-stage launch vehicle by scramjet propulsion
NASA Astrophysics Data System (ADS)
Schoettle, U. M.; Hillesheimer, M.; Rahn, M.
This paper examines the application of scramjet propulsion to extend the ramjet operation of an airbreathing two-stage launch designed for horizontal takeoff and landing. Performance comparisons are made for two alternative propulsion concepts. The mission performance predictions presented are obtained from a multistep optimization procedure employing both trajectory optimization and vehicle design steps to achieve maximum payload capabilities. The simulation results are shown to offer an attractive payload advantage of the scramjet variant over the ramjet powered vehicle.
Two-Stage Fan I: Aerodynamic and Mechanical Design
NASA Technical Reports Server (NTRS)
Messenger, H. E.; Kennedy, E. E.
1972-01-01
A two-stage, highly-loaded fan was designed to deliver an overall pressure ratio of 2.8 with an adiabatic efficiency of 83.9 percent. At the first rotor inlet, design flow per unit annulus area is 42 lbm/sec/sq ft (205 kg/sec/sq m), hub/tip ratio is 0.4 with a tip diameter of 31 inches (0.787 m), and design tip speed is 1450 ft/sec (441.96 m/sec). Other features include use of multiple-circular-arc airfoils, resettable stators, and split casings over the rotor tip sections for casing treatment tests.
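The dual-unit figures quoted in the abstract can be cross-checked with exact conversion factors (small rounding in the original is expected):

```python
# Cross-check of the dual-unit design figures quoted for the fan.
LBM_TO_KG = 0.45359237  # exact definition
FT_TO_M = 0.3048        # exact definition
IN_TO_M = 0.0254        # exact definition

tip_speed_mps = 1450 * FT_TO_M              # 441.96 m/s, as quoted
flow_kg_s_m2 = 42 * LBM_TO_KG / FT_TO_M**2  # ~205 kg/(s*m^2), as quoted
tip_diameter_m = 31 * IN_TO_M               # 0.7874 m (quoted as 0.787 m)
```

All three quoted SI values agree with the US-customary originals to the precision given.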
Mechanisms to deploy the two-stage IUS from the shuttle cargo bay
NASA Technical Reports Server (NTRS)
Haynie, H. T.
1980-01-01
The Inertial Upper Stage (IUS) is a two-stage or three-stage booster used to transport spacecraft from the space shuttle orbit to synchronous orbit or on an interplanetary trajectory. The mechanisms which were designed specifically to perform the two-stage IUS required functions while contained within the cargo bay of the space shuttle during the boost phase and while in a low Earth orbit are discussed. The requirements, configuration, and operation of the mechanisms are described, with particular emphasis on the tilt actuator and the mechanism for decoupling the actuators during boost to eliminate redundant load paths.
Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.
Dai, Tianjiao; Shete, Sanjay
2016-08-30
In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.
Pearce, Jacqueline W; Galle, Laurence E; Kleiboeker, Steve B; Turk, James R; Schommer, Susan K; Dubielizig, Richard R; Mitchell, William J; Moore, Cecil P; Giuliano, Elizabeth A
2007-11-01
Equine recurrent uveitis (ERU) is the most frequent cause of blindness in horses worldwide. Leptospira has been implicated as an etiologic agent in some cases of ERU and has been detected in fresh ocular tissues of affected horses. The objective of this study was to determine the presence of Leptospira antigen and DNA in fixed equine ocular tissues affected with end-stage ERU. Sections of eyes from 30 horses were obtained. Controls included 1) 10 normal equine eyes and 2) 10 equine eyes with a nonrecurrent form of uveitis. The experimental group consisted of 10 eyes diagnosed with ERU based on clinical signs and histologic lesions. Sections were subjected to immunohistochemical staining with an array of rabbit anti-Leptospira polyclonal antibodies. DNA extractions were performed by using a commercial kit designed for fixed tissue. Real-time PCR analysis was completed on extracted DNA. The target sequence for PCR was designed from alignments of available Leptospira 16S rDNA partial sequences obtained from GenBank. Two of 10 test samples were positive for Leptospira antigen by immunohistochemical assay. Zero of 20 controls were positive for Leptospira antigen. All test samples and controls were negative for Leptospira DNA by real-time PCR analysis. Leptospira was detected at a lower frequency than that previously reported for fresh ERU-affected aqueous humor and vitreous samples. Leptospira is not frequently detectable in fixed ocular tissues of horses affected with ERU when using traditional immunohistochemical and real-time PCR techniques.
Sampling model of government travel vouchers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, P.S.; Wright, T.
1987-02-01
A pilot survey was designed and executed to better understand the structure of the universe of all government travel vouchers. Thirteen civilian and military sites were selected for the pilot survey, and a total of 3916 travel vouchers with attached tickets were sampled. During the course of the pilot survey, it became clear that the compounding problems of the relative rarity of expired, unused tickets and the enormous size of the universe were too much of an obstacle to overcome in sampling the entire universe (including the US Air Force, US Army, US Navy, US Marines, other Department of Defense offices, and civil agencies) in the first year. The universe was then narrowed to the US Air Force and US Army, which have the two largest government travel expenditures. Based on the results of the pilot survey, ORNL recommends a stratified two-stage cluster sampling model. With probability 0.90, a sample of 78 sites will be needed to estimate the amounts per airline to within $50,000 of the true values. This sampling model allows one to estimate the total dollar amounts of expired, unused tickets for individual airlines.
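A two-stage cluster estimate of a total can be sketched generically as follows, with sites as primary units and vouchers as secondary units. This illustrates the design class only; it is not the ORNL model's actual (stratified) estimator, and the data are hypothetical:

```python
import random

# Generic two-stage cluster estimate of a population total:
# stage 1 samples sites (primary units), stage 2 samples vouchers
# within each sampled site (secondary units); both stages are
# expanded by their inverse sampling fractions. Illustrative only.
def two_stage_total(sites, n_sites, n_vouchers, rng=random):
    sampled_sites = rng.sample(sites, n_sites)
    total = 0.0
    for site in sampled_sites:
        sub = rng.sample(site, min(n_vouchers, len(site)))
        total += sum(sub) * len(site) / len(sub)  # within-site expansion
    return total * len(sites) / n_sites           # across-site expansion
```

With variable voucher amounts the estimator is unbiased but has variance components from both stages, which is why a pilot survey was needed to size the 78-site sample.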
A Survey of Extended H_{2} Emission Towards a Sample of Massive YSOs
NASA Astrophysics Data System (ADS)
Navarete, F.; Damineli, A.; Barbosa, C. L.; Blum, R. D.
2014-10-01
Very few massive stars in early formation stages have been clearly identified in the Milky Way, and the formation processes of such objects lack observational evidence. Two theories predict the formation of massive stars: i) merging of low-mass stars or ii) accretion through a disk. One of the most prominent pieces of evidence for the accretion scenario is the presence of bipolar outflows associated with the central sources. Such structures have been found in both intermediate- and low-mass YSOs, but there is no evidence of their association with MYSOs. Based on that, a survey was designed to investigate the earliest stages of massive star formation through the molecular hydrogen transition at 2.12 μm. A sample of ˜300 MYSO candidates was selected from the Red MSX Source program, and the sources were observed with the IR cameras Spartan (SOAR, Chile) and WIRCam (CFHT, Hawaii). Extended H_{2} emission was found toward 55% of the sample, and 30% of the positive detections (50 sources) have bipolar morphology, suggesting collimated outflows. These results support the accretion scenario, since the merging of low-mass stars would not produce jet-like structures.
Mach 6.5 air induction system design for the Beta 2 two-stage-to-orbit booster vehicle
NASA Technical Reports Server (NTRS)
Midea, Anthony C.
1991-01-01
A preliminary, two-dimensional, mixed-compression air induction system is designed for the Beta II Two-Stage-to-Orbit booster vehicle to minimize installation losses and efficiently deliver the required airflow. Design concepts, such as an external isentropic compression ramp and a bypass system, were developed and evaluated for performance benefits. The design was optimized by maximizing installed propulsion/vehicle system performance. The resulting system design operating characteristics and performance are presented. The air induction system design has significantly lower transonic drag than similar designs and requires only about 1/3 of the bleed extraction. In addition, the design efficiently provides the integrated system's required airflow while maintaining adequate levels of total pressure recovery. The excellent performance of this highly integrated air induction system is essential for the successful completion of the Beta II booster vehicle mission.
Two-axis gimbal for air-to-air and air-to-ground laser communications
NASA Astrophysics Data System (ADS)
Talmor, Amnon G.; Harding, Harvard; Chen, Chien-Chung
2016-03-01
For bi-directional links between high-altitude platforms (HAPs) and ground, and air-to-air communication between such platforms, a hemispherical +30° field-of-regard, low-drag, low-mass two-axis gimbal was designed and prototyped. The gimbal comprises two servo-controlled, non-orthogonal elevation-over-azimuth axes, plus inner fast-steering mirrors for fine field-of-regard adjustment. The design encompasses a 7.5 cm diameter aperture refractive telescope in its elevation stage, folded between two flat mirrors, with an exit lens leading to a two-mirror miniature Coudé path fixed to the azimuth stage. Multiple gimbal configurations were traded before finalizing a selection that met the requirements. The selected design was realized on a carbon fiber and magnesium composite structure, motorized by custom-built servo motors, and commutated by optical encoders. The azimuth stage is electrically connected to the stationary base via a slip ring, while the elevation stage contains only passive optics. Both axes are aligned by custom-built ceramic-on-steel angular-contact duplex bearings and controlled by embedded electronics featuring a rigid-flex PCB architecture. FEA showed that the design is mechanically robust over a temperature range of +60°C to -80°C, with a first natural-frequency mode above 400 Hz. The total mass of the prototyped gimbal is 3.5 kg, including the inner optical bench, which contains fast-steering mirrors (FSMs) and tracking sensors. A future version of this gimbal, now in the prototyping stage, is expected to weigh less than 3.0 kg.
NASA Astrophysics Data System (ADS)
Mohammed, Kamiran Abdulrahman; Arabacı, Muhammed; Önalan, Şükrü
2017-04-01
The aim of this study was to determine the zoonotic bacteria in carp farms in the Duhok region of northern Iraq. Carp is the main fish species cultured in the Duhok region, and the zoonotic bacteria most commonly seen in carp farms are Aeromonas hydrophila, Pseudomonas fluorescens and Streptococcus iniae. Samples were collected from 20 carp farms in the Duhok region; six carp were sampled from each farm, and head kidney and intestine tissue samples were collected from each fish. The head kidney and intestine samples were then pooled by farm, and total bacterial DNA was extracted from each of the 20 pooled head kidney samples and each of the 20 pooled intestine samples. Primers for the pathogens were designed from the 16S ribosomal gene region. The zoonotic bacteria were screened in all tissue samples by absence/presence analysis with RT-PCR. After RT-PCR, capillary gel electrophoresis bands were used to confirm that the amplicon size matched that planned during primer design. As a result, one farm was positive for Aeromonas hydrophila in intestine, one farm was positive for Pseudomonas fluorescens in intestine, and two farms were positive for Streptococcus iniae. In total, 17 of the 20 carp farms were negative for the zoonotic bacteria. In conclusion, the prevalence of zoonotic bacteria in carp farms in the Duhok region of northern Iraq was very low (15%). Only one carp farm was positive for both Aeromonas hydrophila and Pseudomonas fluorescens, and Streptococcus iniae was positive in two carp farms.
Design of High Voltage Electrical Breakdown Strength measuring system at 1.8K with a G-M cryocooler
NASA Astrophysics Data System (ADS)
Li, Jian; Huang, Rongjin; Li, Xu; Xu, Dong; Liu, Huiming; Li, Laifeng
2017-09-01
Impregnating resins used as electrical insulation materials in the ITER magnets and feeder system are required to be radiation stable and to have good mechanical performance and high voltage electrical breakdown strength. In the present ITER project the breakdown strength must exceed 30 kV/mm, and for the future DEMO reactor it will need to be greater still. In order to develop insulation materials that satisfy the requirements of future fusion reactors, a system for measuring high voltage breakdown strength at low temperature is necessary. In this paper we introduce our work on the design of such a system. The measuring system has two parts: an electrical supply system, which applies the high voltage from a high-voltage power supply between two electrodes, and a cooling system, which consists of a G-M cryocooler, a superfluid chamber and a heat switch. The two-stage G-M cryocooler pre-cools the system to 4 K; the superfluid helium pot serves as a container in which the helium is pumped down to the superfluid state, cooling the sample to 1.8 K; and a mechanical heat switch connects or disconnects the cryocooler and the pot. In order to provide sufficient time for the test, the cooling system is designed to keep the sample at 1.8 K for 300 seconds.
Design of a Two-stage High-capacity Stirling Cryocooler Operating below 30K
NASA Astrophysics Data System (ADS)
Wang, Xiaotao; Dai, Wei; Zhu, Jian; Chen, Shuai; Li, Haibing; Luo, Ercang
High-capacity cryocoolers working below 30 K find many applications, such as superconducting motors, superconducting cables and cryopumps. Compared to the GM cryocooler, the Stirling cryocooler can achieve higher efficiency and a more compact structure. Because of these obvious advantages, we have designed a two-stage free-piston Stirling cryocooler system driven by a moving-magnet linear compressor with an operating frequency of 40 Hz and a maximum input electric power of 5 kW. The first stage of the cryocooler is designed to operate at liquid nitrogen temperature and deliver a cooling power of 100 W, while the second stage is expected to simultaneously provide a cooling power of 50 W below 30 K. In order to achieve the best system efficiency, a numerical model based on thermoacoustic theory was developed to optimize the system operating and structural parameters.
Energy efficient engine high pressure turbine test hardware detailed design report
NASA Technical Reports Server (NTRS)
Halila, E. E.; Lenahan, D. T.; Thomas, T. T.
1982-01-01
The high pressure turbine configuration for the Energy Efficient Engine is built around a two-stage design system. Moderate aerodynamic loading for both stages is used to achieve the high level of turbine efficiency. Flowpath components are designed for 18,000 hours of life, while the static and rotating structures are designed for 36,000 hours of engine operation. Both stages of turbine blades and vanes are air-cooled, incorporating advanced state-of-the-art cooling technology. Directionally solidified (DS) alloys are used for the blades and one stage of vanes, and an oxide dispersion strengthened (ODS) alloy is used for the Stage 1 nozzle airfoils. Ceramic is used as the material for the Stage 1 shroud. An active clearance control (ACC) system is used to control the blade-tip-to-shroud clearances for both stages: fan air impinges on the shroud casing support rings, thereby controlling the growth rate of the shroud. This procedure allows close clearance control while minimizing blade-tip-to-shroud rubs.
NASA Astrophysics Data System (ADS)
Kalabukhov, D. S.; Radko, V. M.; Grigoriev, V. A.
2018-01-01
Ultra-low-power turbine drives are used as energy sources in auxiliary power systems, energy units, and terrestrial, marine, air and space transport, within the shaft-power range N_td = 0.01…10 kW. In this paper we propose a new approach to the development of surrogate models for evaluating the integrated efficiency of a multistage ultra-low-power impulse turbine with pressure stages. The method is based on existing mathematical models of ultra-low-power turbine stage efficiency and mass, and has been used in a method for selecting the rational parameters of a two-stage axial ultra-low-power turbine. The article describes the basic features of an algorithm for optimizing the two-stage turbine parameters and evaluating the efficiency criteria. The underlying mathematical models are intended for use in the preliminary design of the turbine drive. The optimization method was tested in the preliminary design of an air-starter turbine, and validation was carried out by comparing the optimization results with numerical gas-dynamic simulation in the Ansys CFX package. The results indicate that the surrogate models used are sufficiently accurate for axial two-stage turbine parameter selection.
2014-01-01
Background Cancer detection using sniffer dogs is a potential technology for clinical use and research. Our study sought to determine whether dogs could be trained to discriminate the odour of urine from men with prostate cancer from controls, using rigorous testing procedures and well-defined samples from a major research hospital. Methods We attempted to train ten dogs by initially rewarding them for finding and indicating individual prostate cancer urine samples (Stage 1). If dogs were successful in Stage 1, we then attempted to train them to discriminate prostate cancer samples from controls (Stage 2). The number of samples used to train each dog varied depending on their individual progress. Overall, 50 unique prostate cancer and 67 controls were collected and used during training. Dogs that passed Stage 2 were tested for their ability to discriminate 15 (Test 1) or 16 (Tests 2 and 3) unfamiliar prostate cancer samples from 45 (Test 1) or 48 (Tests 2 and 3) unfamiliar controls under double-blind conditions. Results Three dogs reached training Stage 2 and two of these learnt to discriminate potentially familiar prostate cancer samples from controls. However, during double-blind tests using new samples the two dogs did not indicate prostate cancer samples more frequently than expected by chance (Dog A sensitivity 0.13, specificity 0.71, Dog B sensitivity 0.25, specificity 0.75). The other dogs did not progress past Stage 1 as they did not have optimal temperaments for the sensitive odour discrimination training. Conclusions Although two dogs appeared to have learnt to select prostate cancer samples during training, they did not generalise on a prostate cancer odour during robust double-blind tests involving new samples. Our study illustrates that these rigorous tests are vital to avoid drawing misleading conclusions about the abilities of dogs to indicate certain odours. 
Dogs may memorise the individual odours of large numbers of training samples rather than generalise on a common odour. The results do not exclude the possibility that dogs could be trained to detect prostate cancer. We recommend that canine olfactory memory is carefully considered in all future studies and rigorous double-blind methods used to avoid confounding effects. PMID:24575737
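The reported sensitivity and specificity values follow from simple 2×2 contingency counts. As a sketch (the counts below are hypothetical, chosen only to reproduce Dog A's reported rates under the assumption of 15 cancer samples and 45 controls in a test):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): fraction of cancer samples indicated.
    Specificity = TN / (TN + FP): fraction of controls correctly rejected."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts consistent with Dog A's reported 0.13 / 0.71:
# 2 of 15 cancer samples indicated, 32 of 45 controls correctly rejected.
sens, spec = sensitivity_specificity(tp=2, fn=13, tn=32, fp=13)
```

Chance performance in a task with one target among four samples would be a sensitivity near 0.25, which is why the reported values do not exceed chance.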
Lau, Ying; Wang, Wenru
2013-12-01
There is no standardized or formal communication skills training in the current nursing curriculum in Macao, China. To develop and evaluate a learner-centered communication skills training course. Both qualitative and quantitative designs were used in two separate stages. A randomized sample and a convenience sample were taken from students on a four-year bachelor's degree program at a public institute in Macao. Stage I consisted of developing a learner-centered communication skills training course using four focus groups (n=32). Stage II evaluated the training's efficacy by comparing communication skills, clinical interaction, interpersonal dysfunction, and social problem-solving abilities using a quasi-experimental longitudinal pre-post design among 62 nursing students. A course evaluation form was also used. Content analysis was used to evaluate the essential themes in order to develop the specific content and teaching strategies of the course. Paired t-tests and Wilcoxon signed-rank tests showed significant improvement in all post-training scores for communication ability, content of communication, and handling of communication barriers. According to the mean scores of the course evaluation form, students were generally very satisfied with the course: 6.11 to 6.74 on a scale of 1 to 7. This study showed that the course was effective in improving communication skills, especially in terms of the content and the handling of communication barriers. The course filled an important gap in the training needs of nursing students in Macao. The importance of these findings and their implications for nursing education are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
Soulakova, Julia N; Bright, Brianna C
2013-01-01
A large-sample problem of illustrating noninferiority of an experimental treatment over a referent treatment for binary outcomes is considered. The methods of illustrating noninferiority involve constructing the lower two-sided confidence bound for the difference between binomial proportions corresponding to the experimental and referent treatments and comparing it with the negative value of the noninferiority margin. The three considered methods, Anbar, Falk-Koch, and Reduced Falk-Koch, handle the comparison in an asymmetric way, that is, only the referent proportion out of the two, experimental and referent, is directly involved in the expression for the variance of the difference between two sample proportions. Five continuity corrections (including zero) are considered with respect to each approach. The key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two-sided confidence intervals can, potentially, have smaller coverage probability than the nominal level even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their Type I error rate and power. In the settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates' continuity correction is recommended for balanced designs and the Falk-Koch method with Yates' correction is recommended for unbalanced designs. For relatively moderate (about 0.6) and large (about 0.8 or greater) referent proportion, the uncorrected Reduced Falk-Koch method is recommended, although in this case, all methods tend to be over-conservative. These results are expected to be used in the design stage of a noninferiority study when asymmetric comparisons are envisioned. Copyright © 2013 John Wiley & Sons, Ltd.
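The comparison described above can be sketched in a generic Wald-type form. This is only an illustration: the Anbar, Falk-Koch, and Reduced Falk-Koch variants differ in that the variance involves only the referent proportion, and the five continuity corrections studied take specific forms not reproduced exactly here.

```python
import math

def lower_bound_diff(x_e, n_e, x_r, n_r, z=1.96, cc=0.0):
    """Wald-type lower two-sided confidence bound for p_e - p_r.

    x_e/n_e: successes/size in the experimental group; x_r/n_r: referent.
    cc: optional continuity correction (cc=1.0 gives a Yates-style term).
    Note: the paper's asymmetric methods base the variance on the referent
    proportion alone; the symmetric form below is used for illustration.
    """
    p_e, p_r = x_e / n_e, x_r / n_r
    se = math.sqrt(p_e * (1 - p_e) / n_e + p_r * (1 - p_r) / n_r)
    correction = cc * (1 / n_e + 1 / n_r) / 2
    return (p_e - p_r) - z * se - correction

def noninferior(x_e, n_e, x_r, n_r, margin, z=1.96, cc=0.0):
    """Declare noninferiority if the lower bound exceeds -margin."""
    return lower_bound_diff(x_e, n_e, x_r, n_r, z, cc) > -margin
```

For example, with 120/150 responders in both arms, the lower bound is about -0.09, so noninferiority holds for a margin of 0.10 but not for a margin of 0.05.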
Conceptual Design of a Two Spool Compressor for the NASA Large Civil Tilt Rotor Engine
NASA Technical Reports Server (NTRS)
Veres, Joseph P.; Thurman, Douglas R.
2010-01-01
This paper focuses on the conceptual design of a two spool compressor for the NASA Large Civil Tilt Rotor engine, which has a design-point pressure ratio goal of 30:1 and an inlet weight flow of 30.0 lbm/sec. The notional compressor design requirements of pressure ratio and low-pressure compressor (LPC) and high-pressure compressor (HPC) work split were based on a previous engine system study to meet the mission requirements of the NASA Subsonic Rotary Wing Project's Large Civil Tilt Rotor vehicle concept. Three mean-line compressor design and flow analysis codes were utilized for the conceptual design of a two-spool compressor configuration. This study assesses the technical challenges of design for various compressor configuration options to meet the given engine cycle results. In the process of sizing, the technical challenges of the compressor became apparent as the aerodynamics were taken into consideration. Mechanical constraints were considered in the study, such as maximum rotor tip speeds and conceptual sizing of rotor disks and shafts. The rotor clearance-to-span ratio in the last stage of the LPC is 1.5% and in the last stage of the HPC is 2.8%. Four different configurations to meet the HPC requirements were studied, ranging from a single-stage centrifugal, to two axi-centrifugals, to all-axial stages. Challenges of the HPC design include the high exit temperature (1560 °R), which could limit the maximum allowable peripheral tip speed for centrifugals and is dependent on material selection. The mean-line design also resulted in the definition of the flow path geometry of the axial and centrifugal compressor stages, rotor and stator vane angles, velocity components, and flow conditions at the leading and trailing edges of each blade row at the hub, mean and tip.
A mean line compressor analysis code was used to estimate the compressor performance maps at off-design speeds and to determine the required variable geometry reset schedules of the inlet guide vane and variable stators that would result in the transonic stages being aerodynamically matched with high efficiency and acceptable stall margins based on user specified maximum levels of rotor diffusion factor and relative velocity ratio.
NASA Astrophysics Data System (ADS)
Flores, Abiud; Ahuett, Horacio; Song, Gangbing
2006-03-01
Compliant mechanisms have a wide range of applications in microassembly, micromanipulation and microsurgery. This article presents a low-cost flexure stage actuated by two SMA wires that produces displacement in one direction over a range of 0 to 10 μm. The flexure stage acts as a mechanical transformer, reducing and changing the direction of the SMA actuator's output displacement. The flexure-stage system is intended for microassembly operations and was built at a cost of US$35. The design methodology of the flexure stage, from concept design through FEA modeling to construction and characterization, is presented in this paper.
An automated two-dimensional optical force clamp for single molecule studies.
Lang, Matthew J; Asbury, Charles L; Shaevitz, Joshua W; Block, Steven M
2002-01-01
We constructed a next-generation optical trapping instrument to study the motility of single motor proteins, such as kinesin moving along a microtubule. The instrument can be operated as a two-dimensional force clamp, applying loads of fixed magnitude and direction to motor-coated microscopic beads moving in vitro. Flexibility and automation in experimental design are achieved by computer control of both the trap position, via acousto-optic deflectors, and the sample position, using a three-dimensional piezo stage. Each measurement is preceded by an initialization sequence, which includes adjustment of bead height relative to the coverslip using a variant of optical force microscopy (to +/-4 nm), a two-dimensional raster scan to calibrate position detector response, and adjustment of bead lateral position relative to the microtubule substrate (to +/-3 nm). During motor-driven movement, both the trap and stage are moved dynamically to apply constant force while keeping the trapped bead within the calibrated range of the detector. We present details of force clamp operation and preliminary data showing kinesin motor movement subject to diagonal and forward loads. PMID:12080136
A Science and Risk-Based Pragmatic Methodology for Blend and Content Uniformity Assessment.
Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Doshi, Chetan
2018-04-01
This paper describes a pragmatic approach that can be applied in assessing powder blend and unit dosage uniformity of solid dose products at Process Design, Process Performance Qualification, and Continued/Ongoing Process Verification stages of the Process Validation lifecycle. The statistically based sampling, testing, and assessment plan was developed due to the withdrawal of the FDA draft guidance for industry "Powder Blends and Finished Dosage Units-Stratified In-Process Dosage Unit Sampling and Assessment." This paper compares the proposed Grouped Area Variance Estimate (GAVE) method with an alternate approach outlining the practicality and statistical rationalization using traditional sampling and analytical methods. The approach is designed to fit solid dose processes assuring high statistical confidence in both powder blend uniformity and dosage unit uniformity during all three stages of the lifecycle complying with ASTM standards as recommended by the US FDA.
Investigation of TESCOM Driveshaft Assembly Failure
1998-10-01
ratio, two-stage axial-flow compressor with a corrected tip speed of 1250 ft/sec at design. The flowpath casing diameter downstream of the inlet... Design of a 1250 ft/sec Low-Aspect-Ratio, Single-Stage Axial-Flow Compressor, AFAPL-TR-79-2096, Air Force Aero Propulsion Laboratory, Wright... The TESCOM compressor described in this report is a 2.5-stage, low-aspect-ratio, axial-flow compressor. The performance objectives of this compressor
Hyde, Embriette R.; Haarmann, Daniel P.; Lynne, Aaron M.; Bucheli, Sibyl R.; Petrosino, Joseph F.
2013-01-01
Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition. PMID:24204941
Impact of lunar and planetary missions on the space station: Preliminary STS logistics report
NASA Technical Reports Server (NTRS)
1984-01-01
Space station requirements for lunar and planetary missions are discussed. Specific reference is made to projected Ceres and Kopff missions; Titan probes; Saturn and Mercury orbiters; and a Mars sample return mission. Such requirements as base design; station function; program definition; mission scenarios; uncertainties impact; launch manifest and mission schedule; and shuttle loads are considered. It is concluded that: (1) the impact of the planetary missions on the space station is not large when compared to the lunar base; (2) a quarantine module may be desirable for sample returns; (3) the Ceres and Kopff missions require the ability to stack and checkout two-stage OTVs; and (4) two to seven manweeks of on-orbit work are required of the station crew to launch a mission and, with the exception of the quarantine module, dedicated crew will not be required.
An end-to-end workflow for engineering of biological networks from high-level specifications.
Beal, Jacob; Weiss, Ron; Densmore, Douglas; Adler, Aaron; Appleton, Evan; Babb, Jonathan; Bhatia, Swapnil; Davidsohn, Noah; Haddock, Traci; Loyall, Joseph; Schantz, Richard; Vasilev, Viktor; Yaman, Fusun
2012-08-17
We present a workflow for the design and production of biological networks from high-level program specifications. The workflow is based on a sequence of intermediate models that incrementally translate high-level specifications into DNA samples that implement them. We identify algorithms for translating between adjacent models and implement them as a set of software tools, organized into a four-stage toolchain: Specification, Compilation, Part Assignment, and Assembly. The specification stage begins with a Boolean logic computation specified in the Proto programming language. The compilation stage uses a library of network motifs and cellular platforms, also specified in Proto, to transform the program into an optimized Abstract Genetic Regulatory Network (AGRN) that implements the programmed behavior. The part assignment stage assigns DNA parts to the AGRN, drawing the parts from a database for the target cellular platform, to create a DNA sequence implementing the AGRN. Finally, the assembly stage computes an optimized assembly plan to create the DNA sequence from available part samples, yielding a protocol for producing a sample of engineered plasmids with robotics assistance. Our workflow is the first to automate the production of biological networks from a high-level program specification. Furthermore, the workflow's modular design allows the same program to be realized on different cellular platforms simply by swapping workflow configurations. We validated our workflow by specifying a small-molecule sensor-reporter program and verifying the resulting plasmids in both HEK 293 mammalian cells and in E. coli bacterial cells.
Stehman, S.V.; Wickham, J.D.; Smith, J.H.; Yang, L.
2003-01-01
The accuracy of the 1992 National Land-Cover Data (NLCD) map is assessed via a probability sampling design incorporating three levels of stratification and two stages of selection. Agreement between the map and reference land-cover labels is defined as a match between the primary or alternate reference label determined for a sample pixel and a mode class of the mapped 3×3 block of pixels centered on the sample pixel. Results are reported for each of the four regions comprising the eastern United States for both Anderson Level I and II classifications. Overall accuracies for Levels I and II are 80% and 46% for New England, 82% and 62% for New York/New Jersey (NY/NJ), 70% and 43% for the Mid-Atlantic, and 83% and 66% for the Southeast.
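The agreement rule described above (a match between the primary or alternate reference label and a mode class of the mapped 3×3 block centered on the sample pixel) can be sketched as follows; the function name and label encoding are illustrative.

```python
from collections import Counter

def agrees(primary, alternate, block3x3):
    """NLCD-style agreement check.

    primary, alternate: reference land-cover labels for the sample pixel
    (alternate may be None). block3x3: the 9 mapped labels of the 3x3
    block centered on the sample pixel, as a flat list. Agreement holds
    if either reference label matches a mode (most frequent) class of
    the block; ties yield multiple mode classes.
    """
    counts = Counter(block3x3)
    top = max(counts.values())
    modes = {c for c, n in counts.items() if n == top}
    return primary in modes or (alternate is not None and alternate in modes)
```

Using a mode of the 3×3 block rather than the single center pixel makes the assessment tolerant of one-pixel registration errors between the map and the reference data.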
Mohammed, Hlack; Roberts, Daryl L; Copley, Mark; Hammond, Mark; Nichols, Steven C; Mitchell, Jolyon P
2012-09-01
Current pharmacopeial methods for testing dry powder inhalers (DPIs) require that 4.0 L be drawn through the inhaler to quantify the aerodynamic particle size distribution of "inhaled" particles. This volume comfortably exceeds the internal dead volume of the Andersen eight-stage cascade impactor (ACI) and Next Generation pharmaceutical Impactor (NGI) as designated multistage cascade impactors. Two DPIs, the second (DPI-B) having resistance similar to that of the first (DPI-A), were used to evaluate ACI and NGI performance at 60 L/min following the methodology described in the European and United States Pharmacopeias. At sampling times ≥2 s (equivalent to volumes ≥2.0 L), both impactors provided consistent measures of therapeutically important fine particle mass (FPM) from both DPIs, independent of sample duration. At shorter sample times, FPM decreased substantially with the NGI, indicative of incomplete aerosol bolus transfer through the system, whose dead space was 2.025 L. However, the ACI provided consistent measures of both variables across the range of sampled volumes evaluated, even when this volume was less than 50% of its internal dead space of 1.155 L. Such behavior may be indicative of maldistribution of the flow profile from the relatively narrow exit of the induction port to the uppermost stage of the impactor at start-up. An explanation of the anomalous ACI behavior from first principles requires resolution of the rapidly changing unsteady flow and pressure conditions at start-up, and is the subject of ongoing research by the European Pharmaceutical Aerosol Group. Meanwhile, these experimental findings are provided to advocate a prudent approach by retaining the current pharmacopeial methodology.
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics, from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process, and the sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling: simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion in the research study, and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences between sampling techniques may aid nurses in effective appraisal of the research literature and provide a reference point for nurses who engage in cardiovascular research.
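The five probability sampling designs named above can be sketched in a few lines of Python. This is a minimal illustration: the population, strata, and cluster groupings are hypothetical stand-ins for, say, patient records grouped by sex or by clinic.

```python
import random

rng = random.Random(42)
population = list(range(100))  # e.g., 100 patient record IDs

# Simple random sampling: every element has an equal, independent chance.
simple = rng.sample(population, 10)

# Systematic sampling: every k-th element after a random start.
k = len(population) // 10
start = rng.randrange(k)
systematic = population[start::k]

# Stratified sampling: random draws within each predefined stratum.
strata = {"group1": population[:60], "group2": population[60:]}
stratified = [x for group in strata.values() for x in rng.sample(group, 5)]

# Cluster sampling: randomly select whole clusters (e.g., clinics),
# then include every element of the chosen clusters.
clusters = [population[i:i + 10] for i in range(0, 100, 10)]
cluster_sample = [x for c in rng.sample(clusters, 2) for x in c]

# Multi-stage sampling: sample clusters, then sample elements within them.
multi_stage = [x for c in rng.sample(clusters, 4) for x in rng.sample(c, 3)]
```

Non-probability methods (convenience, purposive, snowball) have no such random-selection step, which is precisely why they cannot guarantee representativeness.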
NASA Astrophysics Data System (ADS)
Chen, Yu-Quan; Ma, Li-Zhen; Wu, Wei; Guan, Ming-Zhi; Wu, Bei-Min; Mei, En-Ming; Xin, Can-Jie
2015-12-01
A conduction-cooled superconducting magnet producing a transverse field of 4 T has been designed for a new generation multi-field coupling measurement system, which will be used to study the mechanical behavior of superconducting samples at cryogenic temperatures and intense magnetic fields. A compact cryostat with a two-stage GM cryocooler is designed and manufactured for the superconducting magnet. The magnet is composed of a pair of flat racetrack coils wound by NbTi/Cu superconducting composite wires, a copper and stainless steel combinational former and two Bi2Sr2CaCu2Oy superconducting current leads. The two coils are connected in series and can be powered with a single power supply. In order to support the high stress and attain uniform thermal distribution in the superconducting magnet, a detailed finite element (FE) analysis has been performed. The results indicate that in the operating status the designed magnet system can sufficiently bear the electromagnetic forces and has a uniform temperature distribution. Supported by National Natural Science Foundation of China (11327802, 11302225), China Postdoctoral Science Foundation (2014M560820) and National Scholarship Foundation of China (201404910172)
Gallium Electromagnetic (GEM) Thruster Concept and Design
NASA Technical Reports Server (NTRS)
Polzin, Kurt A.; Markusic, Thomas E.
2006-01-01
We describe the design of a new type of two-stage pulsed electromagnetic accelerator, the gallium electromagnetic (GEM) thruster. A schematic illustration of the GEM thruster concept is given in Fig. 1. In this concept, liquid gallium propellant is pumped into the first stage through a porous metal electrode using an electromagnetic pump [1]. At a designated time, a pulsed discharge (approx. 10-50 J) is initiated in the first stage, ablating the liquid gallium from the porous electrode surface and ejecting a dense thermal gallium plasma into the second stage. The presence of the gallium plasma in the second stage serves to trigger the high-energy (approx. 500 J) second-stage pulse, which provides the primary electromagnetic (j x B) acceleration.
Aerodynamic Design Study of Advanced Multistage Axial Compressor
NASA Technical Reports Server (NTRS)
Larosiliere, Louis M.; Wood, Jerry R.; Hathaway, Michael D.; Medd, Adam J.; Dang, Thong Q.
2002-01-01
As a direct response to the need for further performance gains from current multistage axial compressors, an investigation of advanced aerodynamic design concepts that will lead to compact, high-efficiency, and wide-operability configurations is being pursued. Part I of this report describes the projected level of technical advancement relative to the state of the art and quantifies it in terms of basic aerodynamic technology elements of current design systems. A rational enhancement of these elements is shown to lead to a substantial expansion of the design and operability space. Aerodynamic design considerations for a four-stage core compressor intended to serve as a vehicle to develop, integrate, and demonstrate aerotechnology advancements are discussed. This design is biased toward high efficiency at high loading. Three-dimensional blading and spanwise tailoring of vector diagrams guided by computational fluid dynamics (CFD) are used to manage the aerodynamics of the high-loaded endwall regions. Certain deleterious flow features, such as leakage-vortex-dominated endwall flow and strong shock-boundary-layer interactions, were identified and targeted for improvement. However, the preliminary results were encouraging and the front two stages were extracted for further aerodynamic trimming using a three-dimensional inverse design method described in part II of this report. The benefits of the inverse design method are illustrated by developing an appropriate pressure-loading strategy for transonic blading and applying it to reblade the rotors in the front two stages of the four-stage configuration. Multistage CFD simulations based on the average passage formulation indicated an overall efficiency potential far exceeding current practice for the front two stages. Results of the CFD simulation at the aerodynamic design point are interrogated to identify areas requiring additional development. 
In spite of the significantly higher aerodynamic loadings, advanced CFD-based tools were able to effectively guide the design of a very efficient axial compressor under state-of-the-art aeromechanical constraints.
SLS Block 1-B and Exploration Upper Stage Navigation System Design
NASA Technical Reports Server (NTRS)
Oliver, T. Emerson; Park, Thomas B.; Smith, Austin; Anzalone, Evan; Bernard, Bill; Strickland, Dennis; Geohagan, Kevin; Green, Melissa; Leggett, Jarred
2018-01-01
The SLS Block 1B vehicle is planned to extend NASA's heavy lift capability beyond the initial SLS Block 1 vehicle. The most noticeable change for this vehicle from SLS Block 1 is the swapping of the upper stage from the Interim Cryogenic Propulsion Stage (ICPS), a modified Delta IV upper stage, to the more capable Exploration Upper Stage (EUS). As the vehicle evolves to provide greater lift capability and execute more demanding missions, so must the SLS Integrated Navigation System evolve to support those missions. The SLS Block 1 vehicle carries two independent navigation systems. The responsibility of the two systems is delineated between ascent and upper stage flight. The Block 1 navigation system is responsible for the phase of flight between the launch pad and insertion into Low-Earth Orbit (LEO). The upper stage system assumes the mission from LEO to payload separation. For the Block 1B vehicle, the two functions are combined into a single system intended to navigate from ground to payload insertion. Both are responsible for self-disposal once payload delivery is achieved. The evolution of the navigation hardware and algorithms from an inertial-only navigation system for Block 1 ascent flight to a tightly coupled GPS-aided inertial navigation system for Block 1B is described. The Block 1 GN&C system has been designed to meet a LEO insertion target with a specified accuracy. The Block 1B vehicle navigation system is designed to support the Block 1 LEO target accuracy as well as trans-lunar or trans-planetary injection accuracy. This is measured in terms of payload impact and stage disposal requirements. Additionally, the Block 1B vehicle is designed to support human exploration and thus is designed to minimize the probability of Loss of Crew (LOC) through high-quality inertial instruments and Fault Detection, Isolation, and Recovery (FDIR) logic. 
The preliminary Block 1B integrated navigation system design is presented along with the challenges associated with meeting the design objectives. This paper also addresses the design considerations associated with the use of Block 1 and Commercial Off-the-Shelf (COTS) avionics for Block 1-B/EUS as part of an integrated vehicle suite for orbital operations.
Discussion on back-to-back two-stage centrifugal compressor compact design techniques
NASA Astrophysics Data System (ADS)
Huo, Lei; Liu, Huoxing
2013-12-01
A small-flow, back-to-back two-stage centrifugal compressor was designed for an aviation turbocharger; the compressor has a compact structure, short axial length, and low weight. The stationary parts have a great influence on overall performance, so the design of the diffuser, bend, return vane, and volute of a back-to-back two-stage centrifugal compressor deserves full attention. The volute also affects the return vane, making the flow non-uniform in the circumferential direction; downstream of the volute, the angle of attack of several blades changes drastically and the airflow cannot be turned to the required angle. This circumferential flow non-uniformity changes the loading of the high-pressure rotor blades, altering the load distribution on individual blades; where the blade-passage load decreases, the capacity for work is reduced and the low-speed region at the tip grows.
Michaels-Igbokwe, Christine; Lagarde, Mylene; Cairns, John; Terris-Prestholt, Fern
2014-03-01
The process of designing and developing discrete choice experiments (DCEs) is often under-reported. The need to adequately report the results of qualitative work used to identify attributes and levels used in a DCE is recognised. However, one area that has received relatively little attention is the exploration of the choice question of interest. This paper provides a case study of the process used to design a stated preference survey to assess youth preferences for integrated sexual and reproductive health (SRH) and HIV outreach services in Malawi. Development and design consisted of six distinct but overlapping and iterative stages. Stage one was a review of the literature. Stage two involved developing a decision map to conceptualise the choice processes involved. Stage three included twelve focus group discussions with young people aged 15-24 (n = 113) and three key informant interviews (n = 3) conducted in Ntcheu District, Malawi. Stage four involved analysis of qualitative data and identification of potential attributes and levels. The choice format and experimental design were selected in stages five and six. The results of the literature review were used to develop a decision map outlining the choices that young people accessing SRH services may face. For youth who would like to use services, two key choices were identified: the choice between providers and the choice of service delivery attributes within a provider type. Youth preferences for provider type are best explored using a DCE with a labelled design, while preferences for service delivery attributes associated with a particular provider are better understood using an unlabelled design. Consequently, two DCEs were adopted to jointly assess preferences in this context. Used in combination, the results of the literature review, the decision mapping process and the qualitative work provided a robust approach to designing the DCEs individually and as complementary pieces of work. 
Copyright © 2014 Elsevier Ltd. All rights reserved.
Use of Electronic Health Records in Residential Care Communities
Tempia, S; Salman, M D; Keefe, T; Morley, P; Freier, J E; DeMartini, J C; Wamwayi, H M; Njeumi, F; Soumaré, B; Abdi, A M
2010-12-01
A cross-sectional sero-survey, using a two-stage cluster sampling design, was conducted between 2002 and 2003 in ten administrative regions of central and southern Somalia, to estimate the seroprevalence and geographic distribution of rinderpest (RP) in the study area, as well as to identify potential risk factors for the observed seroprevalence distribution. The study was also used to test the feasibility of the spatially integrated investigation technique in nomadic and semi-nomadic pastoral systems. In the absence of a systematic list of livestock holdings, the primary sampling units were selected by generating random map coordinates. A total of 9,216 serum samples were collected from cattle aged 12 to 36 months at 562 sampling sites. Two apparent clusters of RP seroprevalence were detected. Four potential risk factors associated with the observed seroprevalence were identified: the mobility of cattle herds, the cattle population density, the proximity of cattle herds to cattle trade routes and cattle herd size. Risk maps were then generated to assist in designing more targeted surveillance strategies. The observed seroprevalence in these areas declined over time. In subsequent years, similar seroprevalence studies in neighbouring areas of Kenya and Ethiopia also showed a very low seroprevalence of RP or the absence of antibodies against RP. The progressive decline in RP antibody prevalence is consistent with virus extinction. Verification of freedom from RP infection in the Somali ecosystem is currently in progress.
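The two-stage cluster selection described above can be sketched as follows. The numbers are illustrative only, and the sketch draws primary units from a site list, whereas the study generated random map coordinates in the absence of a holdings list.

```python
import random

random.seed(1)
n_sites, animals_per_site = 500, 40
# Toy population: 500 candidate sampling sites, each with 40 eligible cattle.
herds = {site: [f"site{site}-animal{i}" for i in range(animals_per_site)]
         for site in range(n_sites)}

def two_stage_sample(herds, n_primary, n_secondary):
    sites = random.sample(list(herds), n_primary)       # stage 1: select sites
    return {s: random.sample(herds[s], n_secondary)     # stage 2: select cattle
            for s in sites}

sample = two_stage_sample(herds, n_primary=50, n_secondary=16)
total_sera = sum(len(v) for v in sample.values())
print(total_sera)   # 50 sites x 16 animals = 800 serum samples
```

Design-based seroprevalence estimates from such a sample would also need to account for the clustering when computing standard errors.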
Lunar lander and return propulsion system trade study
NASA Technical Reports Server (NTRS)
Hurlbert, Eric A.; Moreland, Robert; Sanders, Gerald B.; Robertson, Edward A.; Amidei, David; Mulholland, John
1993-01-01
This trade study was initiated at NASA/JSC in May 1992 to develop and evaluate main propulsion system alternatives to the reference First Lunar Outpost (FLO) lander and return-stage transportation system concept. Thirteen alternative configurations were developed to explore the impacts of various combinations of return stage propellants, using either pressure or pump-fed propulsion systems and various staging options. Besides two-stage vehicle concepts, the merits of single-stage and stage-and-a-half options were also assessed in combination with high-performance liquid oxygen and liquid hydrogen propellants. Configurations using an integrated modular cryogenic engine were developed to assess potential improvements in packaging efficiency, mass performance, and system reliability compared to non-modular cryogenic designs. The selection process to evaluate the various designs was the analytic hierarchy process. The trade study showed that a pressure-fed MMH/N2O4 return stage and RL10-based lander stage is the best option for a 1999 launch. While results of this study are tailored to FLO needs, the design date, criteria, and selection methodology are applicable to the design of other crewed lunar landing and return vehicles.
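The analytic hierarchy process (AHP) used to evaluate the designs can be sketched with the geometric-mean approximation to the priority vector. The criteria and pairwise judgments below are invented for illustration; they are not the study's actual comparison matrix.

```python
import math

criteria = ["mass performance", "reliability", "packaging efficiency"]
# pairwise[i][j] expresses how strongly criterion i is preferred over j
# on the usual AHP ratio scale (a perfectly consistent matrix here).
pairwise = [
    [1.0,  2.0, 4.0],
    [0.5,  1.0, 2.0],
    [0.25, 0.5, 1.0],
]

def ahp_weights(matrix):
    """Approximate the AHP priority vector by normalized row geometric means."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

weights = ahp_weights(pairwise)
print({c: round(w, 3) for c, w in zip(criteria, weights)})
```

In a full AHP, each candidate configuration would also be scored against every criterion, and a consistency ratio would be checked before trusting the judgments.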
Mass spectrometry and inhomogeneous ion optics
NASA Technical Reports Server (NTRS)
White, F. A.
1973-01-01
Work done in several areas to advance the state of the art of magnetic mass spectrometers is described. The calculations and data necessary for the design of inhomogeneous field mass spectrometers, and the calculation of ion trajectories through such fields are presented. The development and testing of solid state ion detection devices providing the capability of counting single ions is discussed. New techniques in the preparation and operation of thermal-ionization ion sources are described. Data obtained on the concentrations of copper in rainfall and uranium in air samples using the improved thermal ionization techniques are presented. The design of a closed system static mass spectrometer for isotopic analyses is discussed. A summary of instrumental aspects of a four-stage mass spectrometer comprising two electrostatic and two 90 deg. magnetic lenses with a 122-cm radius used to study the interaction of ions with solids is presented.
Impact of Life-Cycle Stage and Gender on the Ability to Balance Work and Family Responsibilities.
ERIC Educational Resources Information Center
Higgins, Christopher; And Others
1994-01-01
Examined impact of gender and life-cycle stage on three components of work-family conflict using sample of 3,616 respondents. For men, levels of work-family conflict were moderately lower in each successive life-cycle stage. For women, levels were similar in two early life-cycle stages but were significantly lower in later life-cycle stage.…
Korekar, Girish; Sharma, Ram Kumar; Kumar, Rahul; Meenu; Bisht, Naveen C; Srivastava, Ravi B; Ahuja, Paramvir Singh; Stobdan, Tsering
2012-05-01
The actinorhizal plant seabuckthorn (Hippophae rhamnoides L., Elaeagnaceae) is a wind pollinated dioecious crop. To distinguish male genotypes from female genotypes early in the vegetative growth phase, we have developed robust PCR-based marker(s). DNA bulk samples from 20 male and 20 female plants each were screened with 60 RAPD primers. Two primers, OPA-04 and OPT-06 consistently amplified female-specific (FS) polymorphic fragments of 1,164 and 868 bp, respectively, that were absent in the male samples. DNA sequence of the two markers did not exhibit significant similarity to previously characterized sequences. A sequence-characterized amplified region marker HrX1 (JQ284019) and HrX2 (JQ284020) designed for the two fragments, continued to amplify the FS allele in 120 female plants but not in 100 male plants tested in the current study. Thus, HrX1 and HrX2 are FS markers that can determine the sex of seabuckthorn plants in an early stage and expedite cultivations for industrial applications.
Groskreutz, Stephen R.; Weber, Stephen G.
2016-01-01
In this work we characterize the development of a method to enhance temperature-assisted on-column solute focusing (TASF) called two-stage TASF. A new instrument was built to implement two-stage TASF consisting of a linear array of three independent, electronically controlled Peltier devices (thermoelectric coolers, TECs). Samples are loaded onto the chromatographic column with the first two TECs, TEC A and TEC B, cold. In the two-stage TASF approach TECs A and B are cooled during injection. TEC A is heated following sample loading. At some time following TEC A’s temperature rise, TEC B’s temperature is increased from the focusing temperature to a temperature matching that of TEC A. Injection bands are focused twice on-column, first on the initial TEC, e.g. single-stage TASF, then refocused on the second, cold TEC. Our goal is to understand the two-stage TASF approach in detail. We have developed a simple yet powerful digital simulation procedure to model the effect of changing temperature in the two focusing zones on retention, band shape and band spreading. The simulation can predict experimental chromatograms resulting from spatial and temporal temperature programs in combination with isocratic and solvent gradient elution. To assess the two-stage TASF method and the accuracy of the simulation, well characterized solutes are needed. Thus, retention factors were measured at six temperatures (25–75 °C) at each of twelve mobile phase compositions (0.05–0.60 acetonitrile/water) for homologs of n-alkyl hydroxylbenzoate esters and n-alkyl p-hydroxyphenones. Simulations accurately reflect experimental results in showing that the two-stage approach improves separation quality. For example, two-stage TASF increased sensitivity for a low retention solute by a factor of 2.2 relative to single-stage TASF and 8.8 relative to isothermal conditions using isocratic elution. Gradient elution results for two-stage TASF were more encouraging. 
Application of two-stage TASF increased peak height for the least retained solute in the test mixture by a factor of 3.2 relative to single-stage TASF and 22.3 compared to isothermal conditions for an injection four-times the column volume. TASF improved resolution and increased peak capacity; for a 12-minute separation peak capacity increased from 75 under isothermal conditions to 146 using single-stage TASF, and 185 for two-stage TASF. PMID:27836226
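The physical basis TASF exploits, retention rising sharply as temperature falls, can be sketched with a van't Hoff-style retention model. The coefficients below are assumed purely for illustration; the paper fits measured retention factors over 25–75 °C and twelve mobile phase compositions.

```python
import math

def retention_factor(temp_c, a=-6.0, b=2500.0):
    """Assumed van't Hoff-style model: ln k = a + b / T, with T in kelvin."""
    return math.exp(a + b / (temp_c + 273.15))

k_cold = retention_factor(5.0)    # focusing (cold) zone temperature
k_hot = retention_factor(75.0)    # elution (hot) zone temperature
focusing_ratio = k_cold / k_hot   # how much more strongly the cold zone retains
print(k_cold, k_hot, focusing_ratio)
```

A cold zone with a several-fold larger k compresses the injected band; heating the zone then releases the compressed band, which two-stage TASF refocuses once more on the second cold TEC.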
Richert, Laura; Doussau, Adélaïde; Lelièvre, Jean-Daniel; Arnold, Vincent; Rieux, Véronique; Bouakane, Amel; Lévy, Yves; Chêne, Geneviève; Thiébaut, Rodolphe
2014-02-26
Many candidate vaccine strategies against human immunodeficiency virus (HIV) infection are under study, but their clinical development is lengthy and iterative. To accelerate HIV vaccine development optimised trial designs are needed. We propose a randomised multi-arm phase I/II design for early stage development of several vaccine strategies, aiming at rapidly discarding those that are unsafe or non-immunogenic. We explored early stage designs to evaluate both the safety and the immunogenicity of four heterologous prime-boost HIV vaccine strategies in parallel. One of the vaccines used as a prime and boost in the different strategies (vaccine 1) has yet to be tested in humans, thus requiring a phase I safety evaluation. However, its toxicity risk is considered minimal based on data from similar vaccines. We newly adapted a randomised phase II trial by integrating an early safety decision rule, emulating that of a phase I study. We evaluated the operating characteristics of the proposed design in simulation studies with either a fixed-sample frequentist or a continuous Bayesian safety decision rule and projected timelines for the trial. We propose a randomised four-arm phase I/II design with two independent binary endpoints for safety and immunogenicity. Immunogenicity evaluation at trial end is based on a single-stage Fleming design per arm, comparing the observed proportion of responders in an immunogenicity screening assay to an unacceptably low proportion, without direct comparisons between arms. Randomisation limits heterogeneity in volunteer characteristics between arms. To avoid exposure of additional participants to an unsafe vaccine during the vaccine boost phase, an early safety decision rule is imposed on the arm starting with vaccine 1 injections. In simulations of the design with either decision rule, the risks of erroneous conclusions were controlled <15%. Flexibility in trial conduct is greater with the continuous Bayesian rule. 
A 12-month gain in timelines is expected by this optimised design. Other existing designs such as bivariate or seamless phase I/II designs did not offer a clear-cut alternative. By combining phase I and phase II evaluations in a multi-arm trial, the proposed optimised design allows for accelerating early stage clinical development of HIV vaccine strategies.
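The single-stage Fleming evaluation per arm reduces to a binomial tail calculation: an arm passes when at least r of n volunteers respond in the screening assay. This is a minimal sketch; the n, r, p0, and p1 values are assumptions, not the trial's operating parameters.

```python
from math import comb

def prob_success(n, r, p):
    """P(X >= r) for X ~ Binomial(n, p): chance the arm passes the screen."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))

n, r = 25, 6
alpha = prob_success(n, r, 0.10)   # risk of passing an unacceptably low rate p0
power = prob_success(n, r, 0.40)   # chance of passing a promising rate p1
print(round(alpha, 3), round(power, 3))
```

In the proposed design these per-arm calculations run in parallel across the four arms, with the early safety rule layered on top for the arm starting with vaccine 1.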
NASA Astrophysics Data System (ADS)
Johansen, T. H.; Feder, J.; Jøssang, T.
1986-06-01
A fully automated apparatus has been designed for measurements of dilatation in solid samples under well-defined thermal conditions. The oven can be thermally stabilized to better than 0.1 mK over a temperature range of -60 to 150 °C using a two-stage control strategy. Coarse control is obtained by heat exchange with a circulating thermal fluid, whereas the fine regulation is based on a solid-state heat pump, a Peltier element, acting as heating and cooling source. The bidirectional action of the Peltier element permits the sample block to be controlled at the average temperature of the surroundings, thus making an essentially adiabatic system with a minimum of thermal gradients in the sample block. The dilatometer cell integrated in the oven assembly is of the parallel plate air capacitor type, and the apparatus has been successfully used with a sensitivity of 0.07 Å. Our system is well suited for measurements near structural phase transitions, with a relative resolution of Δt = (T-Tc)/Tc = 2×10⁻⁷ in temperature and ΔL/L = 1×10⁻⁹ in strain.
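The parallel-plate capacitance principle behind the dilatometer cell can be sketched as follows: the plate gap is d = ε0·A/C, so a small dilatation of the sample changes the gap and hence the measured capacitance. The plate area and capacitance values are assumed for illustration.

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def gap_from_capacitance(area_m2, cap_f):
    """Plate separation of an ideal parallel-plate air capacitor, in metres."""
    return EPS0 * area_m2 / cap_f

area = 2.0e-4             # assumed 2 cm^2 plate area
d1 = gap_from_capacitance(area, 10.0e-12)   # nominal 10 pF reading
d2 = gap_from_capacitance(area, 10.1e-12)   # reading after a small dilatation
print(d1, d1 - d2)        # the gap shrinks as the capacitance grows
```

Because d varies as 1/C, very small gap changes produce easily resolved capacitance changes, which is how the cell reaches sub-ångström sensitivity.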
Griffin-Blake, C Shannon; DeJoy, David M
2006-01-01
To compare the effectiveness of stage-matched vs. social-cognitive physical activity interventions in a work setting. Both interventions were designed as minimal-contact, self-help programs suitable for large-scale application. Randomized trial. Participants were randomized into one of the two intervention groups at baseline; the follow-up assessment was conducted 1 month later. A large, public university in the southeastern region of the United States. Employees from two academic colleges within the participating institution were eligible to participate: 366 employees completed the baseline assessment; 208 of these completed both assessments (baseline and follow-up) and met the compliance criteria. Printed, self-help exercise booklets (12 to 16 pages in length) either (1) matched to the individual's stage of motivational readiness for exercise adoption at baseline or (2) derived from social-cognitive theory but not matched by stage. Standard questionnaires were administered to assess stage of motivational readiness for physical activity; physical activity participation; and exercise-related processes of change, decisional balance, self-efficacy, outcome expectancy, and goal satisfaction. The two interventions were equally effective in moving participants to higher levels of motivational readiness for regular physical activity. Among participants not already in maintenance at baseline, 34.9% in the stage-matched condition progressed, while 33.9% in the social-cognitive group did so (chi2 = not significant). Analyses of variance showed that the two treatment groups did not differ in terms of physical activity participation, cognitive and behavioral process use, decisional balance, or the other psychological constructs. For both treatment groups, cognitive process use remained high across all stages, while behavioral process use increased at the higher stages. 
The pros component of decisional balance did not vary across stage, whereas cons decreased significantly between preparation and action. Minimal-contact, one-shot physical activity interventions delivered at work can help people increase their participation in regular physical activity. Stage matching may not necessarily add value to interventions that otherwise make good use of behavior change theory. The findings also reinforce the importance of barrier reduction in long-term adherence. A limiting factor in this study is that employees in the earliest stage of change (precontemplation) were not well-represented in the sample.
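The non-significant chi-square comparison of progression rates (34.9% vs. 33.9%) can be reproduced approximately. Group sizes of 100 per arm are assumed purely to make the arithmetic concrete; the study's actual ns differ.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], no continuity correction."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# stage-matched: 35/100 progressed; social-cognitive: 34/100 progressed
stat = chi2_2x2(35, 65, 34, 66)
print(stat)   # far below the 3.84 critical value at alpha = 0.05, df = 1
```

With a difference of one percentage point, even much larger samples would leave this comparison non-significant, matching the reported result.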
Integrated Circuit Design of 3 Electrode Sensing System Using Two-Stage Operational Amplifier
NASA Astrophysics Data System (ADS)
Rani, S.; Abdullah, W. F. H.; Zain, Z. M.; N, Aqmar N. Z.
2018-03-01
This paper presents the design of a two-stage operational amplifier (op amp) for 3-electrode sensing system readout circuits. The design has been simulated using 0.13μm CMOS technology from Silterra (Malaysia) with Mentor Graphics tools. The purpose of this project is mainly to design a miniature interfacing circuit to detect the redox reaction in the form of current using standard analog modules. The potentiostat consists of several op amps combined in order to analyse the signal coming from the 3-electrode sensing system. This op amp design will be used in a potentiostat circuit and to analyse the functionality of each module of the system.
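The small-signal gain of a classic two-stage CMOS op amp (differential pair followed by a common-source stage) can be estimated from the textbook expression Av ≈ gm1·(ro2‖ro4) · gm6·(ro6‖ro7). The device values below are invented for illustration; the paper's Silterra 0.13μm design will differ.

```python
import math

def parallel(r1, r2):
    """Parallel combination of two resistances."""
    return r1 * r2 / (r1 + r2)

gm1, gm6 = 200e-6, 1.2e-3   # stage transconductances, S (assumed values)
ro = 250e3                  # transistor output resistances, ohms (assumed equal)

av = gm1 * parallel(ro, ro) * gm6 * parallel(ro, ro)
gain_db = 20 * math.log10(av)
print(round(gain_db, 1))    # open-loop gain in dB
```

A gain in this range is typical of two-stage topologies; the actual design would also set the Miller compensation capacitor to place the non-dominant pole safely beyond the unity-gain frequency.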
The Study on Mental Health at Work: Design and sampling.
Rose, Uwe; Schiel, Stefan; Schröder, Helmut; Kleudgen, Martin; Tophoven, Silke; Rauch, Angela; Freude, Gabriele; Müller, Grit
2017-08-01
The Study on Mental Health at Work (S-MGA) generates the first nationwide representative survey enabling the exploration of the relationship between working conditions, mental health and functioning. This paper describes the study design, sampling procedures and data collection, and presents a summary of the sample characteristics. S-MGA is a representative study of German employees aged 31-60 years subject to social security contributions. The sample was drawn from the employment register based on a two-stage cluster sampling procedure. Firstly, 206 municipalities were randomly selected from a pool of 12,227 municipalities in Germany. Secondly, 13,590 addresses were drawn from the selected municipalities for the purpose of conducting 4500 face-to-face interviews. The questionnaire covers psychosocial working and employment conditions, measures of mental health, work ability and functioning. Data from personal interviews were combined with employment histories from register data. Descriptive statistics of socio-demographic characteristics and logistic regression analyses were used for comparing population, gross sample and respondents. In total, 4511 face-to-face interviews were conducted. A test for sampling bias revealed that individuals in older cohorts participated more often, while individuals with an unknown educational level, residing in major cities or with a non-German ethnic background were slightly underrepresented. There is no indication of major deviations in characteristics between the basic population and the sample of respondents. Hence, S-MGA provides representative data for research on work and health, designed as a cohort study with plans to rerun the survey 5 years after the first assessment.
Design and analysis of axial aspirated compressor stages
NASA Astrophysics Data System (ADS)
Merchant, Ali A.
The pressure ratio of axial compressor stages can be significantly increased by controlling the development of blade and endwall boundary layers in regions of adverse pressure gradient by means of boundary layer suction. This concept is validated and demonstrated through the design and analysis of two unique aspirated compressor stages: a low-speed stage with a design pressure ratio of 1.6 at a tip speed of 750 ft/s, and a high-speed stage with a design pressure ratio of 3.5 at a tip speed of 1500 ft/s. The aspirated compressor stages were designed using a new procedure which is a synthesis of low speed and high speed blade design techniques combined with a flexible inverse design method which enabled precise independent control over the shape of the blade suction and pressure surfaces. Integration of the boundary layer suction calculation into the overall design process is an essential ingredient of the new procedure. The blade design system consists of two axisymmetric through-flow codes coupled with a quasi three-dimensional viscous cascade plane code with inverse design capability. Validation of the completed designs were carried out with three-dimensional Euler and Navier-Stokes calculations. A single spanwise slot on the blade suction surface is used to bleed the boundary layer. The suction mass flow requirement for the low-speed and high-speed stages are 1% and 4% of the inlet mass flow, respectively. Additional suction between 1-2% is also required on the compressor endwalls near shock impingement locations. The rotor is modeled with a tip shroud to eliminate tip clearance effects and to discharge the suction flow radially from the flowpath. Three-dimensional viscous evaluation of the designs showed good agreement with the quasi three-dimensional design intent, except in the endwall regions. The suction requirements predicted by the quasi three-dimensional calculation were confirmed by the three-dimensional viscous calculations. 
The three-dimensional viscous analysis predicted a peak pressure ratio of 1.59 at an isentropic efficiency of 89% for the low-speed stage, and a peak pressure ratio of 3.68 at an isentropic efficiency of 94% for the high-speed rotor. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
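The quoted pressure ratios and isentropic efficiencies can be related to the stage total-temperature rise through the standard ideal-gas isentropic relation. A minimal sketch, not from the thesis itself; γ = 1.4 is an assumption:

```python
def total_temperature_ratio(pressure_ratio, eta_isentropic, gamma=1.4):
    """Actual total-temperature ratio across a compressor stage from its
    total-pressure ratio and isentropic efficiency (calorically perfect gas)."""
    ideal_rise = pressure_ratio ** ((gamma - 1.0) / gamma) - 1.0
    return 1.0 + ideal_rise / eta_isentropic

# Values reported for the three-dimensional viscous analyses:
low_speed = total_temperature_ratio(1.59, 0.89)   # ~1.16 for the low-speed stage
high_speed = total_temperature_ratio(3.68, 0.94)  # ~1.48 for the high-speed rotor
```

The higher the efficiency, the closer the actual temperature rise sits to the ideal isentropic value for the same pressure ratio.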
Overview of Experimental Investigations for Ares I Launch Vehicle Development
NASA Technical Reports Server (NTRS)
Tomek, William G.; Erickson, Gary E.; Pinier, Jeremy T.; Hanke, Jeremy L.
2011-01-01
Another concern for the vehicle during its design trajectory was the separation of the first stage solid rocket booster from the upper stage component after it had depleted its solid propellant, specifically whether the interstage of the first stage would clear the nozzle of the J2-X engine. A detailed separation aerodynamic wind tunnel investigation was conducted in the AEDC VKF Tunnel A to investigate the interaction aerodynamic effects [5]. A comparison of the separation plane details between the Ares I architecture and the Ares I-X demonstration flight architecture is shown in figure 12. The Ares I design requires a more complex separation sequence and better control in order to avoid contact with the nozzle of the upper stage engine. The interstage, which houses the J2-X engine for the Ares I vehicle, must separate cleanly to avoid contact with the J2-X engine. There is only approximately 18 inches of buffer inside the interstage on each side of the nozzle, so this is a challenging controlled separation event. This complex experimental investigation required two separate Ares I models (the upper stage, and the first stage with interstage attached) with independent strain gauge balances installed in each model. It also required the Captive Trajectory System (CTS) to precisely locate the components in space relative to each other to fill out the planned test matrix. The model setup in the AEDC VKF Tunnel A is shown in figure 13. The CTS remotely positioned the first stage at the required x, y, and z positions and was able to bring it to within 0.2" of the upper stage. A sample of the axial force on the first stage booster is shown in figure 14. These results, as a function of separation distance between the two stages, are compared to pre-test CFD results.
Since this is a very challenging, highly unsteady flow field for CFD to model correctly, the experimental results have been used by the GN&C discipline to more accurately represent the interaction aerodynamics. In addition to the integrated forces and moments obtained from the test, flow visualization data were obtained in the form of Schlieren photographs, shown in figure 15, which reveal the shock structure and interaction effects after the two stages separate during flight. This separation test was crucial to the successful flight test of the Ares I-X vehicle and provided the GN&C discipline with the unpowered proximity aerodynamic effects for the stage separation of the Ares I vehicle.
NASA Astrophysics Data System (ADS)
Zdanowicz, E.; Guarino, V.; Konrad, C.; Williams, B.; Capatina, D.; D'Amico, K.; Arganbright, N.; Zimmerman, K.; Turneaure, S.; Gupta, Y. M.
2017-06-01
The Dynamic Compression Sector (DCS) at the Advanced Photon Source (APS), located at Argonne National Laboratory (ANL), has a diverse set of dynamic compression drivers to obtain time-resolved x-ray data in single-event, dynamic compression experiments. Because the APS x-ray beam direction is fixed, each driver at DCS must have the capability to move through a large range of linear and angular motions with high precision to accommodate a wide variety of scientific needs. Particularly challenging was the design and implementation of the motion control system for the two-stage light gas gun, which rests on a 26' long structure and weighs over 2 tons. The target must be precisely positioned in the x-ray beam while remaining perpendicular to the gun barrel axis to ensure one-dimensional loading of samples. To accommodate these requirements, the entire structure can pivot through 60° of angular motion and move tens of inches along four independent linear directions, with 0.01° and 10 μm resolution, respectively. This presentation will provide details of how this system was constructed and how it is controlled, and will give examples of the wide range of x-ray/sample geometries that can be accommodated. Work supported by DOE/NNSA.
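The stated ranges and resolutions set the number of distinguishable positions each motion axis must resolve. A back-of-envelope sketch; the 30 in axis travel is a hypothetical illustration, not a DCS specification:

```python
# Distinguishable positions implied by the stated ranges and resolutions,
# assuming a uniform-step model of each axis:
angular_positions = 60.0 / 0.01        # 6,000 steps over the 60-degree pivot range

# A hypothetical 30 in linear axis resolved at 10 micrometre steps:
linear_positions = (30 * 25.4e3) / 10  # 76,200 steps (30 in = 762,000 um)
```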
Williams, Lauren Therese; Grealish, Laurie; Jamieson, Maggie
2015-01-01
Background Clinicians need to be supported by universities to use credible and defensible assessment practices during student placements. Web-based delivery of clinical education in student assessment offers professional development regardless of the geographical location of placement sites. Objective This paper explores the potential for a video-based constructivist Web-based program to support site supervisors in their assessments of student dietitians during clinical placements. Methods This project was undertaken as design-based research in two stages. Stage 1 describes the research consultation, development of the prototype, and formative feedback. In Stage 2, the program was pilot-tested and evaluated by a purposeful sample of nine clinical supervisors. Data generated as a result of user participation during the pilot test is reported. Users’ experiences with the program were also explored via interviews (six in a focus group and three individually). The interviews were transcribed verbatim and thematic analysis conducted from a pedagogical perspective using van Manen’s highlighting approach. Results This research succeeded in developing a Web-based program, “Feed our Future”, that increased supervisors’ confidence with their competency-based assessments of students on clinical placements. Three pedagogical themes emerged: constructivist design supports transformative Web-based learning; videos make abstract concepts tangible; and accessibility, usability, and pedagogy are interdependent. Conclusions Web-based programs, such as Feed our Future, offer a viable means for universities to support clinical supervisors in their assessment practices during clinical placements. A design-based research approach offers a practical process for such Web-based tool development, highlighting pedagogical barriers for planning purposes. PMID:25803172
Paing, J; Serdobbel, V; Welschbillig, M; Calvez, M; Gagnon, V; Chazarenc, F
2015-01-01
This study aimed at determining the treatment performance of a full-scale vertical flow constructed wetland designed to treat wastewater from a food-processing industry (a cookie factory), and at studying the influence of the organic loading rate. The full-scale treatment plant was designed with a first vertical stage of 630 m², a second vertical stage of 473 m² equipped with a recirculation system, followed by a final horizontal stage of 440 m². The plant was commissioned in 2011 and was operated at different loading rates during 16 months for the purpose of this study. Treatment performances were determined from 24 hour composite samples. The mean concentration of the raw effluent was 8,548 mg.L(-1) chemical oxygen demand (COD), 4,334 mg.L(-1) biochemical oxygen demand (BOD5), and 2,069 mg.L(-1) suspended solids (SS). Despite a low nutrient content, with a BOD5/N/P ratio of 100/1.8/0.5, lower than the optimum for biological degradation (known as 100/5/1), mean removal performances were very high, with 98% for COD and 99% for BOD5 and SS over the two vertical stages. Increasing the organic load from 50 g.m(-2).d(-1) COD to 237 g.m(-2).d(-1) COD (on the first stage) did not affect removal performances. The mean quality of the effluent reached French standards (COD < 125 mg.L(-1), BOD5 < 25 mg.L(-1), SS < 35 mg.L(-1)).
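The removal performances quoted here follow the usual definition, 100 × (1 − effluent/influent concentration). A minimal sketch:

```python
def removal_percent(influent_mg_l, effluent_mg_l):
    """Percent removal of a pollutant across a treatment stage."""
    return 100.0 * (1.0 - effluent_mg_l / influent_mg_l)

# With the reported mean raw COD of 8,548 mg/L, a 98% removal over the two
# vertical stages implies an intermediate effluent of roughly:
cod_effluent = 8548 * (1 - 0.98)  # ~171 mg/L, before the final horizontal stage
```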
Langergraber, Guenter; Pressl, Alexander; Haberl, Raimund
2014-01-01
This paper describes the results of the first full-scale implementation of a two-stage vertical flow constructed wetland (CW) system developed to increase nitrogen removal. The full-scale system was constructed for the Bärenkogelhaus, which is located in Styria at the top of a mountain, 1,168 m above sea level. The Bärenkogelhaus has a restaurant with 70 seats, 16 rooms for overnight guests and is a popular site for day visits, especially during weekends and public holidays. The CW treatment system was designed for a hydraulic load of 2,500 L.d(-1) with a specific surface area requirement of 2.7 m(2) per person equivalent (PE). It was built in fall 2009 and started operation in April 2010 when the restaurant was re-opened. Samples were taken between July 2010 and June 2013 and were analysed in the laboratory of the Institute of Sanitary Engineering at BOKU University using standard methods. During 2010 the restaurant at Bärenkogelhaus was open 5 days a week whereas from 2011 the Bärenkogelhaus was open only on demand for events. This resulted in decreased organic loads of the system in the later period. In general, the measured effluent concentrations were low and the removal efficiencies high. During the whole period the ammonia nitrogen effluent concentration was below 1 mg/L even at effluent water temperatures below 3 °C. Investigations during high-load periods, i.e. events like weddings and festivals at weekends, with more than 100 visitors, showed a very robust treatment performance of the two-stage CW system. Effluent concentrations of chemical oxygen demand and NH4-N were not affected by these events with high hydraulic loads.
NASA Technical Reports Server (NTRS)
He, Zhuohui J.
2017-01-01
Two P&W (Pratt & Whitney) axially staged sector combustors have been developed under NASA's Environmentally Responsible Aviation (ERA) project: one under ERA Phase I and the other under ERA Phase II. Nitrogen oxides (NOx) emissions characteristics and correlation equations for these two sector combustors are reported in this article. The Phase I design was optimized for NOx emissions reduction potential, while the Phase II design was more practical and robust. Multiple injection points and fuel staging strategies are used in the combustor design. Pilot-stage injectors are located on the front dome plate of the combustor, and main-stage injectors are positioned on the top and bottom (Phase I) or on the top only (Phase II) of the combustor liners downstream. The low-power configuration uses only pilot-stage injectors. Main-stage injectors are added in the high-power configuration to help distribute fuel more evenly and achieve lean burn throughout the combustor, yielding very low NOx emissions. The ICAO (International Civil Aviation Organization) landing-takeoff NOx emissions are verified to be 88 percent (Phase I) and 76 percent (Phase II) below the ICAO CAEP/6 (Committee on Aviation Environmental Protection, 6th Meeting) standard, exceeding the ERA project goal of a 75 percent reduction, and the combustors proved to have stable combustion with room to maneuver on fuel flow splits for operability.
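The quoted percentages express measured landing-takeoff NOx (Dp/Foo) as a reduction relative to the regulatory limit. A hedged sketch with hypothetical values; the actual CAEP/6 limit depends on engine overall pressure ratio and rated thrust:

```python
def reduction_vs_standard(measured_dp_foo, limit_dp_foo):
    """Percent reduction of landing-takeoff NOx relative to a regulatory
    limit: 100 * (1 - measured/limit)."""
    return 100.0 * (1.0 - measured_dp_foo / limit_dp_foo)

# Hypothetical illustration: a combustor emitting 12 g/kN against a 100 g/kN
# CAEP/6 limit would sit 88 percent below the standard, as in Phase I.
phase_i_like = reduction_vs_standard(12.0, 100.0)
```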
Aerodynamic and mechanical design of an 8:1 pressure ratio centrifugal compressor
NASA Technical Reports Server (NTRS)
Osborne, C.; Runstadler, P. W., Jr.; Stacy, W. D.
1974-01-01
A high-pressure-ratio, low-mass-flow centrifugal compressor stage was designed, fabricated, and tested. The design followed specifications that the stage be representative of state-of-the-art performance and that it serve as a workhorse compressor for planned experiments using laser Doppler velocimeter equipment. The final design is a 75,000-RPM, 19-blade impeller with an axial inducer and 30 degrees of backward leaning at the impeller tip. The compressor design was evaluated for two-dimensional and/or quasi-three-dimensional aerodynamic and stress characteristics. Critical speed analyses were performed for the high-speed rotating impeller assembly. An optimally matched, 17-channel vane island diffuser was also designed and built.
Core compressor exit stage study. 1: Aerodynamic and mechanical design
NASA Technical Reports Server (NTRS)
Burdsall, E. A.; Canal, E., Jr.; Lyons, K. A.
1979-01-01
The effect of aspect ratio on the performance of core compressor exit stages was demonstrated using two three-stage, highly loaded core compressors. Aspect ratio was identified as having a strong influence on compressor endwall loss. Both compressors simulated the last three stages of an advanced eight-stage core compressor and were designed with the same 0.915 hub/tip ratio, 4.30 kg/sec (9.47 lbm/sec) inlet corrected flow, and 167 m/sec (547 ft/sec) corrected mean wheel speed. The first compressor had an aspect ratio of 0.81 and an overall pressure ratio of 1.357 at a design adiabatic efficiency of 88.3% with an average diffusion factor of 0.529. The aspect ratio of the second compressor was 1.22, with an overall pressure ratio of 1.324 at a design adiabatic efficiency of 88.7% and an average diffusion factor of 0.491.
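The diffusion factors quoted are presumably Lieblein's loading parameter, D = 1 − V₂/V₁ + |ΔVθ|/(2σV₁). A sketch of that standard formula with illustrative, hypothetical velocity-triangle values chosen to land near the first compressor's reported 0.529 average:

```python
def diffusion_factor(v_in, v_out, delta_v_theta, solidity):
    """Lieblein diffusion factor for a compressor blade row:
    D = 1 - V2/V1 + |dV_theta| / (2 * sigma * V1)."""
    return 1.0 - v_out / v_in + abs(delta_v_theta) / (2.0 * solidity * v_in)

# Hypothetical velocities (m/s) and solidity, not taken from the report:
d = diffusion_factor(v_in=200.0, v_out=150.0, delta_v_theta=150.0, solidity=1.35)
# d comes out near 0.53, comparable to the first compressor's loading
```

Values above roughly 0.6 are conventionally taken to indicate excessive blade loading, which is why the averages here sit near 0.5.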
1960-01-01
This photograph shows the Saturn-I first stage (S-1 stage) being transported to the test stand for a static test firing at the Marshall Space Flight Center. Soon after NASA began operations in October 1958, it was evident that sending people and substantial equipment beyond the Earth's gravitational field would require launch vehicles with weight-lifting capabilities far beyond any developed to that time. In early 1959, NASA accepted the proposal of Dr. Wernher von Braun for a multistage rocket, with a number of engines clustered in one or more of the stages to provide a large total thrust. The initiation of the Saturn launch vehicle program ultimately led to the study and preliminary planning of many different configurations and resulted in production of three Saturn launch vehicles, the Saturn-I, Saturn I-B, and Saturn V. The Saturn family of launch vehicles began with the Saturn-I, a two-stage vehicle originally designated C-1. The research and development program was planned in two phases, or blocks: one for first stage development (Block I) and the second for both first and second stage development (Block II). Saturn I had a low-earth-orbit payload capability of approximately 25,000 pounds. The design of the first stage (S-1 stage) used a cluster of propellant tanks containing liquid oxygen (LOX) and kerosene (RP-1), and eight H-1 engines, yielding a total thrust of 1,500,000 pounds. Of the ten Saturn-Is planned, the first eight were designed and built at the Marshall Space Flight Center, and the remaining two were built by the Chrysler Corporation.
A Bayesian Framework for Reliability Analysis of Spacecraft Deployments
NASA Technical Reports Server (NTRS)
Evans, John W.; Gallo, Luis; Kaminsky, Mark
2012-01-01
Deployable subsystems are essential to the mission success of most spacecraft. These subsystems enable critical functions including power, communications and thermal control. The loss of any of these functions will generally result in loss of the mission. These subsystems and their components often consist of unique designs and applications for which various standardized data sources are not applicable for estimating reliability and assessing risks. In this study, a two-stage sequential Bayesian framework for reliability estimation of spacecraft deployments was developed for this purpose. This process was then applied to the James Webb Space Telescope (JWST) Sunshield subsystem, a unique design intended for thermal control of the Optical Telescope Element. Initially, detailed studies of NASA deployment history, "heritage information", were conducted, extending over 45 years of spacecraft launches. This information was then coupled to a non-informative prior and a binomial likelihood function to create a posterior distribution for deployments of various subsystems using Markov chain Monte Carlo sampling. Select distributions were then coupled to a subsequent analysis, using test data and anomaly occurrences on successive ground test deployments of scale-model test articles of JWST hardware, to update the NASA heritage data. This allowed for a realistic prediction of the reliability of the complex Sunshield deployment, with credibility limits, within this two-stage Bayesian framework.
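The two-stage update described here can be illustrated in the conjugate beta-binomial special case, where the posterior is available in closed form (the study itself used MCMC sampling; the prior choice and the deployment counts below are hypothetical):

```python
def beta_binomial_update(alpha, beta, successes, trials):
    """Posterior Beta parameters after observing `successes` out of `trials`
    under a Beta(alpha, beta) prior with a binomial likelihood (conjugate update)."""
    return alpha + successes, beta + trials - successes

# Stage 1: a Jeffreys-type non-informative prior updated with hypothetical
# heritage data, e.g. 97 successful deployments in 100 attempts.
a1, b1 = beta_binomial_update(0.5, 0.5, 97, 100)

# Stage 2: the stage-1 posterior becomes the prior for ground-test data,
# e.g. 10 successful scale-model deployments in 10 trials.
a2, b2 = beta_binomial_update(a1, b1, 10, 10)
posterior_mean = a2 / (a2 + b2)  # point estimate of deployment reliability
```

Credibility limits follow from the same Beta(a2, b2) posterior, e.g. via its quantile function.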
Moustakas, Aristides; Evans, Matthew R
2015-02-28
Plant survival is a key factor in forest dynamics and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points we use capture-mark-recapture methods both to allow us to account for missing individuals, and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years and size-dependent at early life stages and size independent at later life stages for most species of UK hardwood. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time replicated datasets with small sample sizes and missing individuals without any loss of sample size, and including explanatory covariates.
Compressor Study to Meet Large Civil Tilt Rotor Engine Requirements
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
2009-01-01
A vehicle concept study has been made to meet the requirements of the Large Civil Tilt Rotorcraft vehicle mission. A vehicle concept was determined, and a notional turboshaft engine system study was conducted. The engine study defined requirements for the major engine components, including the compressor. The compressor design-point goal was to deliver a pressure ratio of 31:1 at an inlet weight flow of 28.4 lbm/sec. To perform a conceptual design of two potential compressor configurations to meet the design requirement, a mean-line compressor flow analysis and design code was used. The first configuration is an eight-stage axial compressor. One challenge of the all-axial compressor is the small blade span of the rear-block stages, 0.28 in., resulting in a last-stage blade tip clearance-to-span ratio of 2.4%. The second configuration is a seven-stage axial compressor with a centrifugal stage having a 0.28-in. impeller-exit blade span. The compressors' conceptual designs helped estimate the flow path dimensions, rotor leading and trailing edge blade angles, flow conditions, and velocity triangles for each stage.
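Two quick consequences of the stated design numbers, assuming an equal work split across stages (a simplification; real stage loadings differ, especially between front and rear blocks):

```python
# Average stage pressure ratio for an 8-stage compressor delivering 31:1
# overall, under the equal-split assumption:
avg_stage_pr = 31.0 ** (1.0 / 8.0)  # ~1.54 per stage

# Tip clearance implied by the 2.4% clearance-to-span ratio on the
# 0.28 in rear-block blade span:
tip_clearance_in = 0.024 * 0.28     # ~0.0067 in
```

The sub-0.01 in clearance is what makes the all-axial rear block so challenging to build and maintain.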
Design, simulation and construction of the Taban tokamak
NASA Astrophysics Data System (ADS)
Mirzaei, H. R.; Amrollahi, R.
2018-04-01
This paper describes the design and construction of the Taban tokamak, located at Amirkabir University of Technology, Tehran, Iran. The Taban tokamak was designed for plasma investigation. The design, simulation and construction of essential parts of the Taban tokamak, such as the toroidal field (TF) system, ohmic heating (OH) system and equilibrium field system and their power supplies, are presented. For the Taban tokamak, the toroidal magnetic coil was designed to produce a maximum field of 0.7 T at R = 0.45 m. The power supply of the TF was a 130 kJ, 0–10 kV capacitor bank. Ripples of the toroidal magnetic field at the plasma edge and plasma center are 0.2% and 0.014%, respectively. For the OH system with 3 kA current, the stray field in the plasma region is less than 40 G over 80% of the plasma volume. The power supply of the OH system consists of two stages: the fast bank stage is a 120 μF, 0–5 kV capacitor that produces 2.5 kA in 400 μs, and the slow bank stage is 93 mF, 600 V, which can produce a maximum of 3 kA. The equilibrium system can produce a uniform magnetic field in the plasma volume. This system's power supply, like that of the OH system, consists of two stages: the fast bank stage is 500 μF, 800 V and the slow bank stage is 110 mF, 200 V.
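The capacitor-bank figures can be cross-checked against the stored-energy relation E = CV²/2. A quick sketch using values from the text:

```python
def bank_energy_kj(capacitance_f, voltage_v):
    """Stored energy of a capacitor bank, E = C * V^2 / 2, in kJ."""
    return 0.5 * capacitance_f * voltage_v ** 2 / 1e3

# TF bank: the reported 130 kJ at 10 kV implies C = 2E/V^2 = 2.6 mF.
tf_capacitance_mf = 2 * 130e3 / (10e3 ** 2) * 1e3  # ~2.6 mF

# OH fast bank: a 120 uF capacitor charged to its full 5 kV stores
oh_fast_kj = bank_energy_kj(120e-6, 5e3)  # 1.5 kJ
```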
Performance of two-stage fan with larger dampers on first-stage rotor
NASA Technical Reports Server (NTRS)
Urasek, D. C.; Cunnan, W. S.; Stevans, W.
1979-01-01
The performance of a two-stage, high-pressure-ratio fan having large, part-span vibration dampers on the first-stage rotor is presented and compared with an aerodynamically identical fan having smaller dampers. Comparisons of the data for the two damper configurations show that with increased damper size: (1) very high losses in the damper region reduced the overall efficiency of the first-stage rotor by approximately 3 points, (2) the overall performance of each blade row downstream of the damper was not significantly altered, although appreciable differences in the radial distributions of various performance parameters were noted, and (3) the lower performance of the first-stage rotor decreased the overall fan efficiency by more than 1 percentage point.
USDA-ARS?s Scientific Manuscript database
Two sampling techniques, agar extraction (AE) and centrifuge sugar flotation extraction (CSFE) were compared to determine their relative efficacy to recover immature stages of Culicoides spp from salt marsh substrates. Three types of samples (seeded with known numbers of larvae, homogenized field s...
Aguirre, C; Olivares, N; Luppichini, P; Hinrichsen, P
2015-02-01
A PCR-based method was developed to identify Naupactus cervinus (Boheman) and Naupactus xanthographus (Germar), two curculionids affecting the citrus industry in Chile. The quarantine status of these two species depends on the country to which fruits are exported. This identification method was developed because it is not possible to discriminate between the two species at the egg stage. The method is based on the species-specific amplification of internal transcribed spacer sequences, for which we cloned and sequenced these genome fragments from each species. We designed an identification system based on two duplex-PCR reactions. Each contains the species-specific primer set and a second generic primer set that amplifies a short 18S region common to coleopterans, to avoid false negatives. The marker system is able to differentiate each Naupactus species at any life stage, with a diagnostic sensitivity of 0.045 ng of genomic DNA. The PCR kit was validated with samples collected from different citrus production areas throughout Chile and showed 100% accuracy in differentiating the two Naupactus species. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America.
Zhang, L; Miyamachi, T; Tomanić, T; Dehm, R; Wulfhekel, W
2011-10-01
We designed a scanning tunneling microscope working at sub-Kelvin temperatures in ultrahigh vacuum (UHV) in order to study magnetic properties on the nanoscale. An entirely homebuilt three-stage cryostat is used to cool down the microscope head. The first stage is cooled with liquid nitrogen, the second stage with liquid (4)He. The third stage uses a closed-cycle Joule-Thomson refrigerator with a cooling power of 1 mW. A base temperature of 930 mK at the microscope head was achieved using expansion of (4)He, which can be reduced to ≈400 mK when using (3)He. The cryostat has a low liquid helium consumption of only 38 ml/h and standing times of up to 280 h. The fast cooling down of the samples (3 h) guarantees high sample throughput. Test experiments with a superconducting tip show a high energy resolution of 0.3 meV when performing scanning tunneling spectroscopy. The vertical stability of the tunnel junction is well below 1 pm (peak to peak) and the electric noise floor of the tunneling current is about 6 fA/√Hz. Atomic resolution at a tunneling current of 1 pA and a bias of 1 mV was achieved on Au(111). The lateral drift of the microscope at stable temperature is below 20 pm/h. A superconducting split-coil magnet allows an out-of-plane magnetic field of up to 3 T to be applied at the sample surface. The flux vortices of a Nb(110) sample were clearly resolved in a map of differential conductance at 1.1 K and a magnetic field of 0.21 T. The setup is designed for in situ preparation of tip and samples under UHV conditions.
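The consumption and standing-time figures together imply the size of the liquid helium reservoir. A back-of-envelope consistency check (simple arithmetic, not a number stated in the paper):

```python
# Helium volume implied by the reported consumption rate and standing time:
consumption_ml_per_h = 38.0
standing_time_h = 280.0
reservoir_litres = consumption_ml_per_h * standing_time_h / 1000.0  # ~10.6 L
```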
Experimental analysis of the flow in a two stage axial compressor at off-design conditions
NASA Astrophysics Data System (ADS)
Massardo, Aristide; Satta, Antonio
1987-05-01
The experimental analysis of the flow that develops in a two-stage axial-flow compressor at off-design conditions is presented. The measurements are performed upstream, between, and downstream of the four blade rows of the compressor. The analysis shows the off-design effects on the local conditions of the flow field. Low-energy flow zones are identified, and the development of annulus-boundary-layer, secondary, and tip-clearance flows is shown. Tip-clearance flows are also present in the stator rows under different hub conditions (stationary or rotating hub).
Murthy, B N; Subbiah, M; Boopathi, K; Ramakrishnan, R; Gupte, M D
2001-01-01
This paper examines whether the health administration can use lot quality assurance sampling (LQAS) to identify high-prevalence areas for leprosy and initiate necessary corrective measures. The null hypothesis was that leprosy prevalence in the district was at or above ten per 10,000, and the alternative hypothesis was that it was at or below five per 10,000. A total of 25,500 individuals were to be examined, with 17 as the acceptable maximum number of cases (critical value). A two-stage cluster sample design was adopted. The sample size did not need to be escalated, as the estimated design effect was 1. During the first phase, the survey covered a population of 4,837 individuals, of whom 4,329 (89.5%) were examined. Thirty-five cases were detected, and this number far exceeded the critical value. It was concluded that the district should be regarded as having a prevalence of more than ten per 10,000, and further examination of the population in the sample was discontinued. LQAS may be used as a tool to identify high-prevalence districts and target them for necessary strengthening of the programme. It may also be considered for certifying elimination achievement for a given area.
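The decision rule described here (accept the low-prevalence hypothesis only if at most 17 cases are found among 25,500 examined) can be sanity-checked with a Poisson approximation to the binomial. A sketch, not the authors' exact calculation, and it ignores any design effect:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam); a good approximation to the binomial
    tail here because n is large and the prevalence is small."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

n, critical = 25_500, 17

# Approximate probability of observing <= 17 cases when the true prevalence
# equals the null value of 10 per 10,000 (wrongly "accepting"):
alpha = poisson_cdf(critical, n * 10 / 10_000)  # small, as intended

# Approximate probability of <= 17 cases when the prevalence is 5 per 10,000
# (correctly accepting the alternative):
power = poisson_cdf(critical, n * 5 / 10_000)   # large, as intended
```

With an expected count of 25.5 under the null and 12.75 under the alternative, the critical value of 17 separates the two hypotheses with low error rates on both sides.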
Raub, Florian; Höfer, Hubert; Scheuermann, Ludger
2017-07-01
The data presented here have been collected in the southern part of the Atlantic Forest (Mata Atlântica) in the state of Paraná, Brazil, within a bilateral scientific project (SOLOBIOMA). The project aimed to assess the quality of secondary forests of different regeneration stages in comparison with old-growth forests with regard to the diversity of soil animals and related functions. The Atlantic Forest is a hotspot of biological diversity with an exceptionally high degree of endemic species, extending over a range of 3,500 km along the coast of Brazil. The anthropogenic pressure in the region is very high, with three of the biggest cities of Brazil (São Paulo, Rio de Janeiro, and Curitiba) lying in its extension. An evaluation of the value of secondary forests for biodiversity conservation is becoming more and more important due to the complete disappearance of primary forests. In 2005, we sampled spiders in 12 sites of three successional stages (5-8, 10-15, and 35-50 yr old, with three replicates of each forest stage) and old-growth forests (> 100 yr untouched, also three replicates). All sites were inside a private nature reserve (Rio Cachoeira Nature Reserve). We repeated the sampling design and procedure in 2007 in a second private reserve (Itaqui Nature Reserve). The two nature reserves are about 25 km apart in a well-preserved region of the Mata Atlântica, where the matrix of the landscape mosaic is still forest. A widely accepted standard protocol was used in a replicated sampling design to allow statistical analyses of the resulting data set and comparison with other studies in Brazil. Spiders were sorted to family level and counted; the adult spiders were further identified to species where possible or classified as morphospecies with the help of several spider specialists. © 2017 by the Ecological Society of America.
Empowering Education: A New Model for In-service Training of Nursing Staff
CHAGHARI, MAHMUD; SAFFARI, MOHSEN; EBADI, ABBAS; AMERYOUN, AHMAD
2017-01-01
Introduction: In-service training of nurses plays an indispensable role in improving the quality of inpatient care. The need to enhance the effectiveness of in-service training of nurses is an inevitable requirement. This study attempted to design a new optimal model for in-service training of nurses. Methods: This qualitative study was conducted in two stages during 2015-2016. In the first stage, the Grounded Theory was adopted to explore the process of training 35 participating nurses. The sampling was initially purposeful and then theoretical, based on emerging concepts. Data were collected through interview, observation and field notes. Moreover, the data were analyzed through the Corbin-Strauss method and coded through MAXQDA-10. In the second stage, the findings were employed through Walker and Avant's strategy for theory construction so as to design an optimal model for in-service training of nursing staff. Results: In the first stage, there were five major themes including unsuccessful mandatory education, empowering education, organizational challenges of education, poor educational management, and educational-occupational resiliency. Empowering education was the core variable derived from the research, based on which a grounded theory was proposed. The new empowering education model was composed of self-directed learning and practical learning. There are several strategies to achieve empowering education, including the fostering of searching skills, clinical performance monitoring, motivational factors, participation in the design and implementation, and a problem-solving approach. Conclusion: Empowering education is a new model for in-service training of nurses, which matches the training programs with the andragogical needs and desirability of learning among the staff. Owing to its practical nature, empowering education can facilitate occupational tasks and the achievement of greater mastery of professional skills among nurses. PMID:28180130
Empowering Education: A New Model for In-service Training of Nursing Staff.
Chaghari, Mahmud; Saffari, Mohsen; Ebadi, Abbas; Ameryoun, Ahmad
2017-01-01
In-service training of nurses plays an indispensable role in improving the quality of inpatient care. Need to enhance the effectiveness of in-service training of nurses is an inevitable requirement. This study attempted to design a new optimal model for in-service training of nurses. This qualitative study was conducted in two stages during 2015-2016. In the first stage, the Grounded Theory was adopted to explore the process of training 35 participating nurses. The sampling was initially purposeful and then theoretically based on emerging concept. Data were collected through interview, observation and field notes. Moreover, the data were analyzed through Corbin-Strauss method and the data were coded through MAXQDA-10. In the second stage, the findings were employed through 'Walker and Avants strategy for theory construction so as to design an optimal model for in-service training of nursing staff. In the first stage, there were five major themes including unsuccessful mandatory education, empowering education, organizational challenges of education, poor educational management, and educational-occupational resiliency. Empowering education was the core variable derived from the research, based on which a grounded theory was proposed. The new empowering education model was composed of self-directed learning and practical learning. There are several strategies to achieve empowering education, including the fostering of searching skills, clinical performance monitoring, motivational factors, participation in the design and implementation, and problem-solving approach. Empowering education is a new model for in-service training of nurses, which matches the training programs with andragogical needs and desirability of learning among the staff. Owing to its practical nature, the empowering education can facilitate occupational tasks and achieving greater mastery of professional skills among the nurses.
NASA Astrophysics Data System (ADS)
Wikus, P.; Doriese, W. B.; Eckart, M. E.; Adams, J. S.; Bandler, S. R.; Brekosky, R. P.; Chervenak, J. A.; Ewin, A. J.; Figueroa-Feliciano, E.; Finkbeiner, F. M.; Galeazzi, M.; Hilton, G.; Irwin, K. D.; Kelley, R. L.; Kilbourne, C. A.; Leman, S. W.; McCammon, D.; Porter, F. S.; Reintsema, C. D.; Rutherford, J. M.; Trowbridge, S. N.
2009-12-01
The Micro-X sounding rocket experiment will deploy an imaging transition-edge-sensor (TES) microcalorimeter spectrometer to observe astrophysical sources in the 0.2-3.0 keV band. The instrument has been designed at a systems level, and the first items of flight hardware are presently being built. In the first flight, planned for January 2011, the spectrometer will observe a recently discovered Silicon knot in the Puppis-A supernova remnant. Here we describe the design of the Micro-X science instrument, focusing on the instrument's detector and detector assembly. The current design of the 2-dimensional spectrometer array contains 128 close-packed pixels with a pitch of 600 μm. The conically approximated Wolter-1 mirror will map each of these pixels to a 0.95 arcmin region on the sky; the field of view will be 11.4 arcmin. Targeted energy resolution of the TESs is about 2 eV over the full observing band. A SQUID time-division multiplexer (TDM) will read out the array. The detector time constants will be engineered to approximately 2 ms to match the TDM, which samples each pixel at 32.6 kHz, limited only by the telemetry system of the rocket. The detector array and two SQUID stages of the TDM readout system are accommodated in a lightweight Mg enclosure, which is mounted to the 50 mK stage of an adiabatic demagnetization refrigerator. A third SQUID amplification stage is located on the 1.6 K liquid He stage of the cryostat. An on-board 55-Fe source will fluoresce a Ca target, providing 3.69 and 4.01 keV calibration lines that will not interfere with the scientifically interesting energy band.
[National Health and Nutrition Survey 2012: design and coverage].
Romero-Martínez, Martín; Shamah-Levy, Teresa; Franco-Núñez, Aurora; Villalpando, Salvador; Cuevas-Nasu, Lucía; Gutiérrez, Juan Pablo; Rivera-Dommarco, Juan Ángel
2013-01-01
To describe the design and population coverage of the National Health and Nutrition Survey 2012 (NHNS 2012). The NHNS 2012 is a probabilistic, population-based survey with multi-stage stratified sampling; we report its design, the inferential properties of the sample, the logistical procedures, and the coverage obtained. The household response rate for the NHNS 2012 was 87%, yielding complete data from 50,528 households, 96,031 individual interviews of respondents selected by age, and 14,104 interviews of ambulatory health service users. The probabilistic design of the NHNS 2012 and its coverage allow inferences to be drawn about health and nutrition conditions, health program coverage, and access to health services. Because of the complex design, all estimations from the NHNS 2012 must use the survey design variables: weights, primary sampling units, and strata.
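The closing caveat, that estimates must use the survey's weights, primary sampling units, and strata, can be illustrated with a minimal design-weighted estimator. The figures below are invented for illustration and are not NHNS data.

```python
# Minimal sketch of design-weighted estimation for a complex survey.
# All numbers are invented illustrations, not NHNS 2012 data.

def weighted_mean(values, weights):
    """Design-weighted mean: sum(w*y) / sum(w), where each weight w is the
    inverse of the respondent's selection probability."""
    return sum(w * y for y, w in zip(values, weights)) / sum(weights)

# Four hypothetical respondents with their design weights.
heights = [160.0, 172.0, 158.0, 166.0]
weights = [2.0, 1.0, 3.0, 2.0]

print(weighted_mean(heights, weights))  # -> 162.25
```

A full analysis would additionally use stratum and PSU identifiers for variance estimation; dedicated survey software handles that part.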
A new method to obtain Fourier transform infrared spectra free from water vapor disturbance.
Chen, Yujing; Wang, Hai-Shui; Umemura, Junzo
2010-10-01
Infrared absorption bands due to water vapor in the mid-infrared region often obscure important spectral features of the sample. Here, we provide a novel method to collect a high-quality infrared spectrum without any water vapor interference. The scanning procedure for a single-beam spectrum of the sample is divided into two stages under an atmosphere with fluctuating humidity. In the first stage, the sample spectrum is measured with approximately the same number of scans as the background. If the absorbance of water vapor in the spectrum is positive (or negative) at the end of the first stage, the relative humidity in the sample compartment of the spectrometer is changed by a dry (or wet) air blow at the start of the second stage while the measurement of the sample spectrum continues. Once the relative humidity falls below (or rises above) the level of the previously collected background spectrum, the water vapor peaks become smaller and smaller as scans accumulate during the second stage. When the interfering water lines disappear from the spectrum, acquisition of the sample spectrum is terminated. In this way, water vapor interference can finally be removed completely.
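The two-stage scan logic can be caricatured numerically: stage-one scans accumulate a water-vapor residual of one sign, and stage-two scans (after the humidity is pushed past the background level) contribute residual of the opposite sign until the co-added spectrum nulls out. This is a toy model with invented residual magnitudes, not an interface to any spectrometer.

```python
# Toy model of the two-stage co-adding scheme: each scan contributes a
# water-vapor residual (sample humidity minus background humidity); stage-two
# scanning stops when the co-added residual reaches zero. Magnitudes invented.

def co_added_residual(scans):
    """Average water-vapor residual of the accumulated scans."""
    return sum(scans) / len(scans)

scans = [1.0] * 8            # stage 1: humidity above background, +1.0 per scan
while co_added_residual(scans) > 0:
    scans.append(-2.0)       # stage 2: after the dry-air blow, -2.0 per scan

print(len(scans) - 8)        # -> 4 stage-two scans null the residual
```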
NASA Astrophysics Data System (ADS)
Galerkin, Y. B.; Voinov, I. B.; Drozdov, A. A.
2017-08-01
Computational Fluid Dynamics (CFD) methods are widely used for centrifugal compressor design and flow analysis. The calculated results depend on the chosen software, turbulence models and solver settings. Two of the most widely used programs are NUMECA Fine Turbo and ANSYS CFX. The objects of the study were two different stages. CFD calculations were made for a single blade channel and for the full 360-degree flow path. Stage 1, with a 3D impeller and vaneless diffuser, was tested experimentally; its flow coefficient is 0.08 and its loading factor is 0.74. For stage 1, calculations were performed with different grid quality, different numbers of cells and different turbulence models. The best results were obtained with the Spalart-Allmaras model and a mesh of 1.854 million cells. Stage 2, with a return channel, vaneless diffuser and 3D impeller (flow coefficient 0.15, loading factor 0.5), was designed by the known Universal Modeling Method, and its performance was calculated with the well-identified mathematical model. The CFD-calculated performance of stage 2 is shifted to higher flow rates in comparison with the design performance; the same result was obtained for stage 1 in comparison with measured performance. The calculated loading factor is higher in both cases for a single blade channel. The loading factor performance calculated for the full flow path (“360 degrees”) by ANSYS CFX is in satisfactory agreement with the stage 2 design performance, and maximum efficiency is predicted accurately by the ANSYS CFX “360 degrees” calculation; the “sector” calculation is less accurate. Further research is needed to resolve the performance mismatch.
Oxidation kinetics for conversion of U3O8 to ε-UO3 with NO2
Johnson, J. A.; Rawn, C. J.; Spencer, B. B.; ...
2017-04-04
The oxidation kinetics of U3O8 powder to ε-UO3 in an NO2 environment was measured by in situ x-ray diffraction (XRD). Experiments were performed at temperatures of 195, 210, 235, and 250°C using a custom-designed and fabricated sample isolation stage. Data were refined to quantify phase fractions using a newly proposed structure for the ε-UO3 polymorph. The kinetics data were modeled using a shrinking-core approach. A proposed two-step reaction process is presented based on the developed models.
Hypersonic airbreathing vehicle visions and enhancing technologies
NASA Astrophysics Data System (ADS)
Hunt, James L.; Lockwood, Mary Kae; Petley, Dennis H.; Pegg, Robert J.
1997-01-01
This paper addresses the visions for hypersonic airbreathing vehicles and the advanced technologies that forge and enhance the designs. The matrix includes space access vehicles (single-stage-to-orbit (SSTO), two-stage-to-orbit (2STO) and three-stage-to-orbit (3STO)) and endoatmospheric vehicles (airplanes—missiles are omitted). The characteristics, the performance potential, the technologies and the synergies will be discussed. A common design constraint is that all vehicles (space access and endoatmospheric) have enclosed payload bays.
Prednisolone and acupuncture in Bell's palsy: study protocol for a randomized, controlled trial
2011-01-01
Background There are a variety of treatment options for Bell's palsy. Evidence from randomized controlled trials indicates corticosteroids can be used as a proven therapy for Bell's palsy. Acupuncture is one of the most commonly used methods to treat Bell's palsy in China. Recent studies suggest that staging treatment is more suitable for Bell's palsy, according to different path-stages of this disease. The aim of this study is to compare the effects of prednisolone and staging acupuncture in the recovery of the affected facial nerve, and to verify whether prednisolone in combination with staging acupuncture is more effective than prednisolone alone for Bell's palsy in a large number of patients. Methods/Design In this article, we report the design and protocol of a large sample multi-center randomized controlled trial to treat Bell's palsy with prednisolone and/or acupuncture. In total, 1200 patients aged 18 to 75 years within 72 h of onset of acute, unilateral, peripheral facial palsy will be assessed. There are six treatment groups, with four treated according to different path-stages and two not. These patients are randomly assigned to be in one of the following six treatment groups, i.e. 1) placebo prednisolone group, 2) prednisolone group, 3) placebo prednisolone plus acute stage acupuncture group, 4) prednisolone plus acute stage acupuncture group, 5) placebo prednisolone plus resting stage acupuncture group, 6) prednisolone plus resting stage acupuncture group. The primary outcome is the time to complete recovery of facial function, assessed by Sunnybrook system and House-Brackmann scale. The secondary outcomes include the incidence of ipsilateral pain in the early stage of palsy (and the duration of this pain), the proportion of patients with severe pain, the occurrence of synkinesis, facial spasm or contracture, and the severity of residual facial symptoms during the study period. 
Discussion The results of this trial will assess the efficacy of prednisolone and staging acupuncture in treating Bell's palsy, and will help determine the best combination therapy of prednisolone and acupuncture for this condition. Trial Registration ClinicalTrials.gov: NCT01201642 PMID:21693007
Sampling and estimating recreational use.
Timothy G. Gregoire; Gregory J. Buhyoff
1999-01-01
Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.
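The estimators such examples typically build on include the two-stage expansion estimator, expanding sampled hours to the day and sampled days to the season. The sketch below uses invented visitation counts, not data from the cited work.

```python
# Sketch of a two-stage expansion estimator for total recreational use:
# stage one samples days from the season, stage two samples hours within
# each sampled day. All counts are invented.

def two_stage_total(n_days_in_season, sampled_days):
    """sampled_days: list of (open_hours, counts) pairs, where counts are
    visitor tallies from the hours sampled on that day."""
    day_estimates = [open_hours * sum(counts) / len(counts)
                     for open_hours, counts in sampled_days]
    return n_days_in_season * sum(day_estimates) / len(day_estimates)

# 90-day season; 3 sampled days, each open 12 hours with 2 sampled hours.
days = [(12, [5, 7]), (12, [2, 4]), (12, [9, 9])]
print(two_stage_total(90, days))  # -> 6480.0
```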
Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S
2014-09-01
Many papers have introduced adaptive clinical trial methods that allow modifications to the sample size based on interim estimates of treatment effect. There has been extensive commentary on type I error control and efficiency considerations, but little research on estimation after an adaptive hypothesis test. We evaluate the reliability and precision of different inferential procedures in the presence of an adaptive design with pre-specified rules for modifying the sampling plan. We extend group sequential orderings of the outcome space based on the stage at stopping, likelihood ratio statistic, and sample mean to the adaptive setting in order to compute median-unbiased point estimates, exact confidence intervals, and P-values uniformly distributed under the null hypothesis. The likelihood ratio ordering is found to average shorter confidence intervals and produce higher probabilities of P-values below important thresholds than alternative approaches. The bias adjusted mean demonstrates the lowest mean squared error among candidate point estimates. A conditional error-based approach in the literature has the benefit of being the only method that accommodates unplanned adaptations. We compare the performance of this and other methods in order to quantify the cost of failing to plan ahead in settings where adaptations could realistically be pre-specified at the design stage. We find the cost to be meaningful for all designs and treatment effects considered, and to be substantial for designs frequently proposed in the literature. © 2014, The International Biometric Society.
Design of a 3-Stage ADR for the Soft X-Ray Spectrometer Instrument on the Astro-H Mission
NASA Technical Reports Server (NTRS)
Shirron, Peter J.; Kimball, Mark O.; Wegel, Donald C.; Canavan, Edgar R.; DiPirro, Michael J.
2011-01-01
The Japanese Astro-H mission will include the Soft X-ray Spectrometer (SXS) instrument, whose 36-pixel detector array of ultra-sensitive x-ray microcalorimeters requires cooling to 50 mK. This will be accomplished using a 3-stage adiabatic demagnetization refrigerator (ADR). The design is dictated by the need to operate with full redundancy with both a superfluid helium dewar at 1.3 K or below, and with a 4.5 K Joule-Thomson (JT) cooler. The ADR is configured as a 2-stage unit that is located in a well in the helium tank, and a third stage that is mounted to the top of the helium tank. The third stage is directly connected through two heat switches to the JT cooler and the helium tank, and manages heat flow between the two. When liquid helium is present, the 2-stage ADR operates in a single-shot manner using the superfluid helium as a heat sink. The third stage may be used independently to reduce the time-average heat load on the liquid to extend its lifetime. When the liquid is depleted, the 2nd and 3rd stages operate as a continuous ADR to maintain the helium tank at as low a temperature as possible - expected to be 1.2 K - and the 1st stage cools from that temperature as a single-stage, single-shot ADR. The ADR's design and operating modes are discussed, along with test results of the prototype 3-stage ADR.
Camera Layout Design for the Upper Stage Thrust Cone
NASA Technical Reports Server (NTRS)
Wooten, Tevin; Fowler, Bart
2010-01-01
Engineers in the Integrated Design and Analysis Division (EV30) use a variety of different tools to aid in the design and analysis of the Ares I vehicle. One primary tool in use is Pro-Engineer, computer-aided design (CAD) software that allows designers to create computer-generated structural models of vehicle structures. For the Upper Stage thrust cone, Pro-Engineer was used to assist in the design of a layout for two camera housings. These cameras observe the separation between the first and second stage of the Ares I vehicle. For the Ares I-X, one standard-speed camera was used. The Ares I design calls for two separate housings, three cameras, and a lighting system. With previous design concepts and verification strategies in mind, a new layout for the two-camera design concept was developed with members of the EV32 team. With the new design, Pro-Engineer was used to draw the layout and observe how the two camera housings fit with the thrust cone assembly. Future analysis of the camera housing design will verify the stability and clearance of the cameras with other hardware present on the thrust cone.
Development of Explosive Ripper with Two-Stage Combustion
1974-10-01
[Fragmentary OCR of report abstract] …inch pipe duct work; the width of this duct proved to be detrimental in marginally rippable material; the duct, instead of the penetrator tip, was…marginally rippable rock. The two-stage combustion device is designed to operate using the same diesel…
Endwall flows and blading design for axial flow compressors
NASA Astrophysics Data System (ADS)
Robinson, Christopher J.
Literature relevant to blading design in the endwall region is reviewed, and important three dimensional flow phenomena occurring in embedded stages of axial compressors are described. A low speed axial flow four stage compressor rig is described and bladings studied are detailed: two conventional and two with end bends. The application of a three dimensional Navier-Stokes solver to the bladings' stators, to assess the effectiveness of the code, is reported. Calculation results of exit whirl angles, losses, and surface static pressures are compared with experiment.
A 1.8K refrigeration cryostat with 100 hours continuous cooling
NASA Astrophysics Data System (ADS)
Xu, Dong; Li, Jian; Huang, Rongjin; Li, Laifeng
2017-02-01
A refrigeration cryostat has been developed that provides continuous cooling of a sample below 1.8 K for over 100 hours using a cryocooler. A two-stage 4 K G-M cryocooler liquefies helium gas supplied both from the evacuated vapor and from a helium cylinder that can be replaced during the cooling process. The liquid helium is converted into superfluid helium through a Joule-Thomson valve connected to a 1000 m3/h pumping unit, and the pressure of the evacuated helium vapor is controlled by an air bag and valves. A copper decompression chamber, designed as a cooling station to control the superfluid helium, cools the sample attached to it uniformly; the sample is connected to the copper chamber by a screw thread. The cryostat reaches 1.7 K without load, and its continuous working time exceeds 100 hours.
Sample of CFD optimization of a centrifugal compressor stage
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
An industrial centrifugal compressor stage is a complicated object for gas-dynamic design when the goal is maximum efficiency. The authors analyzed the results of CFD performance modeling (NUMECA Fine Turbo calculations). Overall performance prediction was modest or poor in all known cases; maximum efficiency prediction, by contrast, was quite satisfactory. The flow structure in stator elements was in good agreement with known data. An intermediate-type stage, “3D impeller + vaneless diffuser + return channel”, was designed using principles well proven for stages with 2D impellers. CFD calculations of vaneless diffuser candidates demonstrated flow separation in the VLD with constant width; the candidate with a symmetrically tapered inlet part, b3/b2 = 0.73, proved to be the best. Flow separation takes place in the crossover with the standard configuration, so an alternative variant was developed and numerically tested. The experience gained was formulated as corrected design recommendations. Several impeller candidates were compared by maximum stage efficiency; the variant following standard gas-dynamic principles of blade cascade design proved the best. Quasi-3D inviscid calculations were applied to optimize the blade velocity diagrams: non-incidence inlet, control of the diffusion factor and of average blade load. The “geometric” principle of blade formation, with a linear change of blade angles along the blade length, proved less effective. Candidates with different geometry parameters were designed with the 6th version of the mathematical model and compared. The candidate with optimal parameters (number of blades, inlet diameter and leading-edge meridian position) is 1% more efficient than the initial design.
Perinetti, G; Baccetti, T; Contardo, L; Di Lenarda, R
2011-02-01
To evaluate the gingival crevicular fluid (GCF) alkaline phosphatase (ALP) activity in growing subjects in relation to the stages of individual skeletal maturation. The Department of Biomedicine, University of Trieste. Seventy-two healthy growing subjects (45 women and 27 men; range, 7.8-17.7 years). Double-blind, prospective, cross-sectional design. Samples of GCF were collected from each subject at the mesial and distal sites of both of the central incisors, in the maxilla and mandible. Skeletal maturation phase was assessed through the cervical vertebral maturation (CVM) method. Enzymatic activity was determined spectrophotometrically. The relationship between GCF ALP activity and CVM stages was significant. In particular, a twofold peak in enzyme activity was seen at the CS3 and CS4 pubertal stages, compared to the pre-pubertal stages (CS1 and CS2) and post-pubertal stages (CS5 and CS6), at both the maxillary and mandibular sites. No differences were seen between the maxillary and mandibular sites, or between the sexes. As an adjunct to standard methods based upon radiographic parameters, the GCF ALP may be a candidate as a non-invasive clinical biomarker for the identification of the pubertal growth spurt in periodontally healthy subjects scheduled for orthodontic treatment. © 2010 John Wiley & Sons A/S.
The reliability and validity of the Chinese version of nurses' self-concept questionnaire.
Cao, Xiao Yi; Liu, Xiao Hong; Tian, Lang; Guo, Yan Qin
2013-05-01
To examine the reliability and validity of the Chinese version of the nurses' self-concept questionnaire. Nurses' self-concept is important for alleviating the current shortage of nurses, and the nurses' self-concept questionnaire is an effective instrument for measuring nurses' self-perception of professional competencies; however, the psychometric properties of the Chinese version have not been tested. A two-stage research design was used in this study. At Stage 1, 347 registered nurses were recruited to establish the psychometric properties of the Chinese version. At Stage 2, a confirmatory factor analysis was used to examine the factor structure extracted at Stage 1 in a sample of 1017 respondents. The internal consistency of the Chinese version was 0.95 and the test-retest reliability was 0.83. The exploratory factor analysis extracted six dimensions. The findings at Stage 2 showed an acceptable model fit and discriminant validity. The Chinese version was a significant predictor of the Maslach Burnout Inventory (β = -0.58; P = 0.00). This study verified the psychometric properties of the Chinese version of the nurses' self-concept questionnaire, which will facilitate the evaluation of professional self-concept among nurses and help to develop individualized self-concept strategies. © 2012 Blackwell Publishing Ltd.
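An internal-consistency figure such as 0.95 is the kind of value produced by Cronbach's alpha. A from-scratch computation on invented item scores (not study data) looks like this:

```python
# Minimal Cronbach's alpha computation, the usual index behind "internal
# consistency". Item scores below are invented, not from the study.

def variance(xs):
    """Sample variance (n-1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one inner list of respondent scores per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]      # per-respondent totals
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# 3 items, 4 respondents (columns are respondents).
items = [[3, 4, 3, 5], [2, 4, 3, 5], [3, 5, 4, 5]]
print(round(cronbach_alpha(items), 2))  # -> 0.96
```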
Reliability and validity of the Modified Erikson Psychosocial Stage Inventory in diverse samples.
Leidy, N K; Darling-Fisher, C S
1995-04-01
The Modified Erikson Psychosocial Stage Inventory (MEPSI) is a relatively simple survey measure designed to assess the strength of psychosocial attributes that arise from progression through Erikson's eight stages of development. The purpose of this study was to employ secondary analysis to evaluate the internal-consistency reliability and construct validity of the MEPSI across four diverse samples: healthy young adults, hemophilic men, healthy older adults, and older adults with chronic obstructive pulmonary disease. Special attention was given to the performance of the measure across gender, with exploratory analyses examining possible age cohort and health status effects. Internal-consistency estimates for the aggregate measure were high, whereas subscale reliability levels varied across age groups. Construct validity was supported across samples. Gender, cohort, and health effects offered interesting psychometric and theoretical insights and direction for further research. Findings indicated that the MEPSI might be a useful instrument for operationalizing and testing Eriksonian developmental theory in adults.
The design of two stage to orbit vehicles
NASA Astrophysics Data System (ADS)
Gregorek, G. M.; Ramsay, T. N.
1991-09-01
Two designs are presented for a two-stage-to-orbit vehicle to complement an existing heavy-lift vehicle. The payload, 10,000 lbs and 27 ft long by 10 ft in diameter for design purposes, must be carried to low earth orbit by an air-breathing carrier configuration that can take off horizontally within 15,000 ft. The two designs are a delta wing/body carrier in which the fuselage contains the orbiter, and a cranked-delta wing/body carrier in which the orbiter is carried piggyback. The engines for both carriers are turbofan-ramjets powered with liquid hydrogen, and the orbiters employ either a Space Shuttle Main Engine or a half-scale version with additional scramjet engines. The orbiter based on the full-scale Space Shuttle Main Engine has a significantly higher takeoff weight, which results in a higher total takeoff weight.
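The back-of-the-envelope staging arithmetic behind such trade studies is the Tsiolkovsky rocket equation applied per stage. All masses and Isp values below are invented round numbers for illustration, not the paper's designs, and applying the ideal rocket equation to an airbreathing leg is a deliberate simplification.

```python
# Ideal two-stage delta-v budget via the Tsiolkovsky rocket equation.
# All figures are invented placeholders, not the paper's vehicle designs.
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_dv(isp_s, m0_kg, mf_kg):
    """Ideal delta-v of one stage: Isp * g0 * ln(m0/mf)."""
    return isp_s * G0 * math.log(m0_kg / mf_kg)

# Airbreathing carrier leg (high effective Isp, small mass ratio),
# then a rocket orbiter leg.
dv = stage_dv(3000.0, 500e3, 430e3) + stage_dv(450.0, 120e3, 40e3)
print(round(dv / 1000, 1), "km/s")  # -> 9.3 km/s
```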
Survey methods for assessing land cover map accuracy
Nusser, S.M.; Klaas, E.E.
2003-01-01
The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of landowners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private landowners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single pixel.
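The cluster-based field protocol lends itself to a simple ratio estimate of overall map accuracy; the cluster tallies below are fabricated for illustration, and a real assessment would also weight clusters by their design weights.

```python
# Sketch: overall thematic-map accuracy from pixel clusters as a ratio
# estimate (correctly classified pixels / assessed pixels). Tallies invented.

def overall_accuracy(clusters):
    """clusters: list of (n_correct, n_assessed) tuples, one per 9-pixel cluster."""
    correct = sum(c for c, n in clusters)
    assessed = sum(n for c, n in clusters)
    return correct / assessed

clusters = [(8, 9), (9, 9), (6, 9), (7, 9)]
print(overall_accuracy(clusters))  # 30 of 36 assessed pixels correct
```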
NASA Technical Reports Server (NTRS)
Johnson, J. R. (Principal Investigator)
1974-01-01
The author has identified the following significant results. A broad-scale vegetation classification was developed for a 3,200 sq mile area in southeastern Arizona. The 31 vegetation types were derived from association tables containing information taken at about 500 ground sites. The classification provided an information base suitable for use with small-scale photography. A procedure was developed and tested for objectively comparing photo images, consisting of two parts: image groupability testing and image complexity testing. The Apollo and ERTS photos were compared for relative suitability as first-stage stratification bases in two-stage proportional probability sampling. High-altitude photography was used in common at the second stage.
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-02-01
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
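The collocation idea, replacing random sampling with deterministic quadrature nodes in the stochastic dimension, can be shown in one dimension with Gauss-Hermite nodes. This miniature (a standard normal input and `exp` as a stand-in model) is far simpler than the paper's sparse-grid, multiphase machinery.

```python
# One-dimensional caricature of stochastic collocation: E[g(xi)] for
# xi ~ N(0,1) computed with Gauss-Hermite quadrature instead of Monte Carlo.
import math
from numpy.polynomial.hermite import hermgauss

def collocation_mean(g, n_nodes=10):
    # hermgauss supplies nodes/weights for the weight exp(-x^2); the change
    # of variables t = sqrt(2)*x maps them to the standard normal density.
    x, w = hermgauss(n_nodes)
    return sum(wi * g(math.sqrt(2) * xi) for xi, wi in zip(x, w)) / math.sqrt(math.pi)

# For the lognormal "model output" g = exp, the exact mean is exp(0.5).
print(round(collocation_mean(math.exp), 4))  # -> 1.6487
```

Ten nodes already match the exact value to many digits here; the paper's adaptive sparse grids extend this idea to many stochastic dimensions at once.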
Kitchen, Robert R; Sabine, Vicky S; Sims, Andrew H; Macaskill, E Jane; Renshaw, Lorna; Thomas, Jeremy S; van Hemert, Jano I; Dixon, J Michael; Bartlett, John M S
2010-02-24
Microarray technology is a popular means of producing whole-genome transcriptional profiles; however, high cost and scarcity of mRNA have led many studies to be conducted based on the analysis of single samples. We exploit the design of the Illumina platform, specifically multiple arrays on each chip, to evaluate intra-experiment technical variation using repeated hybridisations of universal human reference RNA (UHRR) and duplicate hybridisations of primary breast tumour samples from a clinical study. A clear batch-specific bias was detected in the measured expressions of both the UHRR and clinical samples. This bias was found to persist following standard microarray normalisation techniques. However, when mean-centering or empirical Bayes batch-correction methods (ComBat) were applied to the data, inter-batch variation in the UHRR and clinical samples was greatly reduced. Correlation between replicate UHRR samples improved by two orders of magnitude following batch-correction using ComBat (from 0.9833-0.9991 to 0.9997-0.9999), and the consistency of the gene-lists from the duplicate clinical samples increased from 11.6% in quantile-normalised data to 66.4% in batch-corrected data. The use of UHRR as an inter-batch calibrator provided a small additional benefit when used in conjunction with ComBat, further increasing the agreement between the two gene-lists, up to 74.1%. In the interests of practicality and cost, these results suggest that single samples can generate reliable data, but only after careful compensation for technical bias in the experiment. We recommend that investigators appreciate the propensity for such variation in the design stages of a microarray experiment and that the use of suitable correction methods become routine during the statistical analysis of the data. PMID:20181233
Spiegelman, Donna; Rivera-Rodriguez, Claudia L; Haneuse, Sebastien
2016-07-01
In public health evaluations, confounding bias in the estimate of the intervention effect will typically threaten the validity of the findings. It is a common misperception that the only way to avoid this bias is to measure detailed, high-quality data on potential confounders for every intervention participant, but this strategy for adjusting for confounding bias is often infeasible. Rather than ignoring confounding altogether, the two-phase design and analysis, in which detailed, high-quality confounder data are obtained for a small subsample, can be considered. We describe the two-phase design and analysis approach, and illustrate its use in the evaluation of an intervention conducted in Dar es Salaam, Tanzania, of an enhanced community health worker program to improve antenatal care uptake.
Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach
NASA Astrophysics Data System (ADS)
Tsai, Bi-Huei; Chang, Chih-Huei
2009-08-01
Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distress cut-off indicator must shift with economic prosperity rather than remain fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based on a two-stage method. First, it employs firm-specific financial ratios and market factors to measure the probability of financial distress using discrete-time hazard models. Second, it incorporates macroeconomic factors and applies a rating transition matrix approach to determine the distress cut-off indicator. The prediction models are developed using a training sample from 1987 to 2004, and their accuracy is compared on a test sample from 2005 to 2007. The one-stage prediction model incorporating macroeconomic factors does not perform better than the one without them, suggesting that accuracy is not improved for one-stage models that pool firm-specific and macroeconomic factors together. As for the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distress cut-off point is adjusted upward based on this negative index. When the two-stage models use the adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error is lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.
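The discrete-time hazard model named above can be sketched as a logistic regression on firm-period data: each row is one firm in one period, and the outcome is whether distress occurred in that period. The covariate, data, and fitting routine below are invented for illustration; the paper's actual model uses multiple firm-specific ratios and market factors.

```python
import numpy as np

def fit_discrete_hazard(X, y, iters=500, lr=0.1):
    """Fit a discrete-time hazard model by logistic regression: each row of X
    is one firm-period, y = 1 if distress occurred in that period. Fitted here
    with plain gradient ascent on the log-likelihood."""
    X = np.column_stack([np.ones(len(X)), X])     # add intercept column
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))       # predicted per-period hazard
        beta += lr * X.T @ (y - p) / len(y)       # average score (gradient) step
    return beta

def hazard(beta, x):
    """Predicted distress probability for one firm-period with covariates x."""
    z = beta[0] + beta[1:] @ np.asarray(x)
    return 1.0 / (1.0 + np.exp(-z))

# Toy firm-period data: one covariate (think of a leverage ratio, standardized);
# the true hazard rises with the covariate (intercept -2, slope 1.5).
rng = np.random.default_rng(1)
x = rng.normal(0, 1, 400)
y = (rng.random(400) < 1 / (1 + np.exp(-(-2 + 1.5 * x)))).astype(float)
beta = fit_discrete_hazard(x.reshape(-1, 1), y)
```

The fitted slope should be positive, so riskier firm-periods get a higher predicted hazard; the two-stage step in the paper then chooses where to threshold that hazard using the credit cycle index.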
Thermal design and performance of the REgolith x-ray imaging spectrometer (REXIS) instrument
NASA Astrophysics Data System (ADS)
Stout, Kevin D.; Masterson, Rebecca A.
2014-08-01
The REgolith X-ray Imaging Spectrometer (REXIS) instrument is a student collaboration instrument on the OSIRIS-REx asteroid sample return mission scheduled for launch in September 2016. The REXIS science mission is to characterize the elemental abundances of the asteroid Bennu on a global scale and to search for regions of enhanced elemental abundance. The thermal design of the REXIS instrument is challenging due to both the science requirements and the thermal environment in which it will operate. The REXIS instrument consists of two assemblies: the spectrometer and the solar X-ray monitor (SXM). The spectrometer houses a 2x2 array of back illuminated CCDs that are protected from the radiation environment by a one-time deployable cover and a collimator assembly with coded aperture mask. Cooling the CCDs during operation is the driving thermal design challenge on the spectrometer. The CCDs operate in the vicinity of the electronics box, but a 130 °C thermal gradient is required between the two components to cool the CCDs to -60 °C in order to reduce noise and obtain science data. This large thermal gradient is achieved passively through the use of a copper thermal strap, a large radiator facing deep space, and a two-stage thermal isolation layer between the electronics box and the DAM. The SXM is mechanically mounted to the sun-facing side of the spacecraft separately from the spectrometer and characterizes the highly variable solar X-ray spectrum to properly interpret the data from the asteroid. The driving thermal design challenge on the SXM is cooling the silicon drift detector (SDD) to below -30 °C when operating. A two-stage thermoelectric cooler (TEC) is located directly beneath the detector to provide active cooling, and spacecraft MLI blankets cover all of the SXM except the detector aperture to radiatively decouple the SXM from the flight thermal environment. 
This paper describes the REXIS thermal system requirements, thermal design, and analyses, with a focus on the driving thermal design challenges for the instrument. It is shown through both analysis and early testing that the REXIS instrument can perform successfully through all phases of its mission.
Miri, Mehdi; Khavasi, Amin; Mehrany, Khashayar; Rashidian, Bizhan
2010-01-15
The transmission-line analogy of the planar electromagnetic reflection problem is exploited to obtain a transmission-line model that can be used to design effective, robust, and wideband interference-based matching stages. The proposed model based on a new definition for a scalar impedance is obtained by using the reflection coefficient of the zeroth-order diffracted plane wave outside the photonic crystal. It is shown to be accurate for in-band applications, where the normalized frequency is low enough to ensure that the zeroth-order diffracted plane wave is the most important factor in determining the overall reflection. The frequency limitation of employing the proposed approach is explored, highly dispersive photonic crystals are considered, and wideband matching stages based on binomial impedance transformers are designed to work at the first two photonic bands.
Corrosion in Magnesium and a Magnesium Alloy
NASA Astrophysics Data System (ADS)
Akavipat, Sanay
Magnesium and a magnesium alloy (AZ91C) have been ion implanted over a range of ion energies (50 to 150 keV) and doses (1 × 10^16 to 2 × 10^17 ions/cm^2) to modify the corrosion properties of the metals. The corrosion tests were done by anodic polarization in chloride-free and chloride-containing aqueous solutions of a borate-boric acid buffer with a pH of 9.3. Anodic polarization measurements showed that some implantations could greatly reduce the corrosion current densities at all impressed voltages and also slightly increased the pitting potential, which marks the onset of chloride attack. These improvements in corrosion resistance were produced by boron implantation into both types of samples, whereas iron implantation was found to improve only the magnesium alloy. To study the corrosion in more detail, Scanning Auger Microprobe Spectrometer (SAM), Scanning Electron Microscope (SEM) with an X-ray Energy Spectrometry (XES) attachment, and Transmission Electron Microscope (TEM) measurements were used to analyze samples before, after, and at various corrosion stages. In both the unimplanted pure magnesium and AZ91C samples, anodic polarization results revealed three active corrosion stages (Stages A, C, and E) and two passivating stages (Stages B and D). Examination of Stages A and B in both types of samples showed that only mild, generalized corrosion had occurred. In Stage C of the pure magnesium samples, a pitting breakdown of the initial oxide film was observed. In Stage C of the AZ91C samples, galvanic and intergranular attack around the Mg17Al12 intermetallic islands and along the matrix grain boundaries was observed. Stage D of both samples showed the formation of a thick, passivating, oxygen-containing film, probably Mg(OH)2. In Stage E, this film was broken down by pits, which formed due to the presence of chloride ions in both types of samples.
Stages A through D of the unimplanted samples were not seen in the boron- or iron-implanted samples. Instead, one low-current-density passivating stage was formed, which was ultimately broken down by chloride attack. It is believed that the implantation of boron modified the initial surface film to inhibit corrosion, whereas the iron implantation modified the intermetallic (Mg17Al12) islands to act as sacrificial anodes.
Correcting for bias in the selection and validation of informative diagnostic tests.
Robertson, David S; Prevost, A Toby; Bowden, Jack
2015-04-15
When developing a new diagnostic test for a disease, there are often multiple candidate classifiers to choose from, and it is unclear if any will offer an improvement in performance compared with current technology. A two-stage design can be used to select a promising classifier (if one exists) in stage one for definitive validation in stage two. However, estimating the true properties of the chosen classifier is complicated by the first stage selection rules. In particular, the usual maximum likelihood estimator (MLE) that combines data from both stages will be biased high. Consequently, confidence intervals and p-values flowing from the MLE will also be incorrect. Building on the results of Pepe et al. (SIM 28:762-779), we derive the most efficient conditionally unbiased estimator and exact confidence intervals for a classifier's sensitivity in a two-stage design with arbitrary selection rules; the condition being that the trial proceeds to the validation stage. We apply our estimation strategy to data from a recent family history screening tool validation study by Walter et al. (BJGP 63:393-400) and are able to identify and successfully adjust for bias in the tool's estimated sensitivity to detect those at higher risk of breast cancer. © 2015 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
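A toy simulation (synthetic numbers, not data from the study) shows why the naive pooled estimate is biased high in such a two-stage design: several classifiers with identical true sensitivity are tested in stage one, the best performer is carried forward, and its stage-one result is pooled with the stage-two validation data.

```python
import random

def naive_two_stage_estimate(true_sens, n1, n2, n_classifiers, rng):
    """Stage 1: test several classifiers that all have the same true
    sensitivity, carry the best performer forward; stage 2: validate it.
    The pooled (MLE-style) estimate inherits the stage-1 selection bias."""
    stage1 = [sum(rng.random() < true_sens for _ in range(n1))
              for _ in range(n_classifiers)]
    best = max(range(n_classifiers), key=lambda k: stage1[k])
    stage2 = sum(rng.random() < true_sens for _ in range(n2))
    return (stage1[best] + stage2) / (n1 + n2)

rng = random.Random(7)
est = sum(naive_two_stage_estimate(0.8, 30, 60, 4, rng)
          for _ in range(2000)) / 2000
# `est` exceeds the true sensitivity 0.8, because the winning classifier's
# stage-1 count is an extreme order statistic, not a plain binomial draw.
```

The conditionally unbiased estimator derived in the paper corrects exactly this effect, conditional on the trial proceeding to validation; the simulation only demonstrates the direction of the bias.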
Photovoltaic Enhancement with Ferroelectric HfO2 Embedded in the Structure of Solar Cells
NASA Astrophysics Data System (ADS)
Eskandari, Rahmatollah; Malkinski, Leszek
Enhancing the total efficiency of solar cells focuses on improving one or all of the three main stages of the photovoltaic effect: absorption of light, generation of carriers, and finally separation of carriers. Ferroelectric photovoltaic designs target the last stage with large electric fields from polarized ferroelectric films, which can exceed the built-in electric fields of semiconductor bipolar junctions. In this project we fabricated very thin (~10 nm) ferroelectric HfO2 films doped with silicon by RF sputtering. The doped HfO2 films were capped between two TiN layers (~20 nm) and annealed at 800 °C and 1000 °C, and the Si content was varied between 6 and 10 mol. % using different sizes of Si chips mounted on the hafnium target. Piezoforce microscopy (PFM) confirmed clear ferroelectric properties in samples with 6 mol. % Si annealed at 800 °C. Ferroelectric samples were poled in opposite directions and embedded in the structure of a cell, and an enhancement in photovoltaic properties was observed in the poled samples versus unpoled ones by KPFM and I-V measurements. The current work is funded by the NSF EPSCoR LA-SiGMA project under award #EPS-1003897.
Iachan, Ronaldo; H. Johnson, Christopher; L. Harding, Richard; Kyle, Tonja; Saavedra, Pedro; L. Frazier, Emma; Beer, Linda; L. Mattson, Christine; Skarbinski, Jacek
2016-01-01
Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. Methods were efficient (i.e., induced small, unequal weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851
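The weighting process described in the abstract (inverse selection probabilities across three stages, then nonresponse and multiplicity adjustments) can be sketched as arithmetic. The function name and all the probabilities below are illustrative assumptions, not values from the MMP.

```python
def multistage_weight(p_area, p_facility, p_patient,
                      facility_response_rate, patient_response_rate,
                      n_facilities_attended=1):
    """Survey weight for one respondent in a three-stage PPS design:
    base weight = inverse joint selection probability; nonresponse
    adjustments inflate the weight at the facility and patient levels;
    a multiplicity adjustment deflates it for patients who could have
    been sampled through more than one facility."""
    base = 1.0 / (p_area * p_facility * p_patient)
    nonresponse = 1.0 / (facility_response_rate * patient_response_rate)
    multiplicity = 1.0 / n_facilities_attended
    return base * nonresponse * multiplicity

# Hypothetical respondent: base 1/(0.5*0.1*0.02) = 1000, nonresponse
# inflation 1/(0.8*0.5) = 2.5, multiplicity deflation 1/2 = 0.5
w = multistage_weight(p_area=0.5, p_facility=0.1, p_patient=0.02,
                      facility_response_rate=0.8, patient_response_rate=0.5,
                      n_facilities_attended=2)
# w = 1000 * 2.5 * 0.5 = 1250.0
```

Summing such weights over respondents is what yields population totals like the 421,186 in-care adults estimated by MMP; an approximately self-weighting design keeps these weights nearly equal within each project area, which limits the unequal-weighting effects the abstract mentions.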
Gholami-Motlagh, Farzaneh; Jouzi, Mina; Soleymani, Bahram
2016-01-01
Background: Anxiety is an inseparable part of our lives and a serious threat to health. Therefore, it is necessary to use certain strategies to prevent disorders caused by anxiety and adjust the vital signs of people. Swedish massage is one of the most recognized techniques for reducing anxiety. This study aims to compare the effects of two massage techniques on the vital signs and anxiety of healthy women. Materials and Methods: This quasi-experimental study with a two-group, crossover design was conducted on 20 healthy women who were selected by simple sampling method and were randomly assigned to BNC (Back, Neck, and Chest) or LAF (Leg, Arm, and Face) groups. Massage therapy was carried out over a 14-week period (two 4-week massage therapy sessions and a 6-week washout stage). Gathered data were analyzed using paired t-test with a significance level of P < 0.05. Results: Both BNC and LAF methods caused a significant decrease in systolic BP in the first stage (P = 0.02, 0.00); however, diastolic BP showed a significant decrease only in the BNC group (P = 0.01). The mean body temperature of the LAF group showed a significant decrease in the first stage (P = 0.03), and pulse and respiratory rate showed significant decreases in both groups during the second stage (P = 0.00). In addition, anxiety scores showed no significant difference before and after massage therapy (P > 0.05). Conclusions: Massage therapy caused a decrease in systolic BP, pulse, and respiratory rate. It can be concluded that massage therapy was useful for decreasing the vital signs associated with anxiety in healthy women. PMID:27563325
Hypersonic airbreathing vehicle visions and enhancing technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, J.L.; Lockwood, M.K.; Petley, D.H.
1997-01-01
This paper addresses the visions for hypersonic airbreathing vehicles and the advanced technologies that forge and enhance the designs. The matrix includes space access vehicles (single-stage-to-orbit (SSTO), two-stage-to-orbit (2STO), and three-stage-to-orbit (3STO)) and endoatmospheric vehicles (airplanes; missiles are omitted). The characteristics, the performance potential, the technologies, and the synergies will be discussed. A common design constraint is that all vehicles (space access and endoatmospheric) have enclosed payload bays. © 1997 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Stolik, S.; Fabila, D. A.; de la Rosa, J. M.; Escobedo, G.; Suárez-Álvarez, K.; Tomás, S. A.
2015-09-01
Design of non-invasive and accurate novel methods for liver fibrosis diagnosis has gained growing interest. Different stages of liver fibrosis were induced in Wistar rats by intraperitoneally administering different doses of carbon tetrachloride. The liver fibrosis degree was conventionally determined by means of histological examination. An open-photoacoustic-cell (OPC) technique for the assessment of liver fibrosis was developed and is reported here. The OPC technique is based on the fact that the thermal diffusivity can be accurately measured by photoacoustics from the dependence of the photoacoustic signal amplitude on the modulation frequency. This technique directly measures the heat generated in a sample by non-radiative de-excitation processes following the absorption of light. The thermal diffusivity was measured with a home-made open-photoacoustic-cell system that was specially designed to perform the measurement on ex vivo liver samples. The liver tissue showed a significant increase in the thermal diffusivity depending on the fibrosis stage. Specifically, liver samples from rats exhibiting hepatic fibrosis showed a significantly higher value of the thermal diffusivity than those from control animals.
NASA Astrophysics Data System (ADS)
Malinauskas, M.; Purlys, V.; Žukauskas, A.; Rutkauskas, M.; Danilevičius, P.; Paipulas, D.; Bičkauskaitė, G.; Bukelskis, L.; Baltriukienė, D.; Širmenis, R.; Gaidukevičiutė, A.; Bukelskienė, V.; Gadonas, R.; Sirvydis, V.; Piskarskas, A.
2010-11-01
We present a femtosecond Laser Two-Photon Polymerization (LTPP) system for large-scale three-dimensional structuring for applications in tissue engineering. The direct laser writing system enables fabrication of artificial polymeric scaffolds over a large area (up to cm in lateral size) with sub-micrometer resolution, which could find practical applications in biomedicine and surgery. An Yb:KGW femtosecond laser oscillator (Pharos, Light Conversion Co. Ltd.) is used as the irradiation source (75 fs, 515 nm (frequency doubled), 80 MHz). The sample is mounted on wide-range linear-motor-driven stages with 10 nm sample positioning resolution (XY—ALS130-100, Z—ALS130-50, Aerotech, Inc.). These stages guarantee an overall travel range of 100 mm in the X and Y directions and 50 mm in the Z direction and support linear scanning speeds up to 300 mm/s. By moving the sample three-dimensionally, the position of the laser focus in the photopolymer is changed and one is able to write complex 3D (three-dimensional) structures. An illumination system and CMOS camera enable online process monitoring. Control of all equipment is automated via custom-made computer software "3D-Poli", specially designed for LTPP applications. Structures can be imported from computer-aided design STereoLithography (stl) files or programmed directly. The system can be used for rapid LTPP structuring in various photopolymers (SZ2080, AKRE19, PEG-DA-258) which are known to be suitable for bio-applications. Microstructured scaffolds can be produced on different substrates such as glass, plastic, and metal. In this paper, we present polymeric scaffolds microfabricated over a large area and the growth of adult rabbit myogenic stem cells on them. The results show that the polymeric scaffolds are suitable for cell growth and have potential for use as an artificial pericardium in an experimental model in the future.
Reduced Power Laser Designation Systems
2008-06-20
200 kΩ, R1 = R3 = 60 kΩ, and R2 = R4 = 2 kΩ yields an overall transimpedance gain of 200K × 30 × 30 = 180 MV/A. Figure 3. Three-stage photodiode amplifier ...transistor circuit for bootstrap buffering of the input stage, comparing the noise performance of the candidate amplifier designs, selecting the two...transistor bootstrap design as the circuit of choice, and comparing the performance of this circuit against that of a basic transconductance amplifier
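The gain arithmetic in the snippet above is a straightforward cascade: the first-stage transimpedance (a 200 kΩ feedback resistor, on the usual reading of the garbled text) multiplied by two voltage-gain stages of 60 kΩ / 2 kΩ = 30 each. A minimal sketch, with the resistor roles assumed rather than taken from the report:

```python
def cascaded_transimpedance_gain(rf_ohms, stage_gains):
    """Overall transimpedance gain of a photodiode front end: the first-stage
    transimpedance (feedback resistor, V/A) multiplied by the voltage gain
    of each subsequent amplifier stage."""
    gain = rf_ohms
    for g in stage_gains:
        gain *= g
    return gain

# Values from the report: 200 kOhm first stage, two stages of 60k/2k = 30 each
g = cascaded_transimpedance_gain(200e3, [60e3 / 2e3, 60e3 / 2e3])
# 200e3 * 30 * 30 = 1.8e8 V/A, i.e. the quoted 180 MV/A
```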
Sekiguchi, Yuki; Yamamoto, Masaki; Oroguchi, Tomotaka; Takayama, Yuki; Suzuki, Shigeyuki; Nakasako, Masayoshi
2014-11-01
Using our custom-made diffraction apparatus KOTOBUKI-1 and two multiport CCD detectors, cryogenic coherent X-ray diffraction imaging experiments have been undertaken at the SPring-8 Angstrom Compact free electron LAser (SACLA) facility. To efficiently perform experiments and data processing, two software suites with user-friendly graphical user interfaces have been developed. The first is a program suite named IDATEN, which was developed to easily conduct four procedures during experiments: aligning KOTOBUKI-1, loading a flash-cooled sample into the cryogenic goniometer stage inside the vacuum chamber of KOTOBUKI-1, adjusting the sample position with respect to the X-ray beam using a pair of telescopes, and collecting diffraction data by raster scanning the sample with X-ray pulses. Named G-SITENNO, the other suite is an automated version of the original SITENNO suite, which was designed for processing diffraction data. These user-friendly software suites are now indispensable for collecting a large number of diffraction patterns and for processing the diffraction patterns immediately after collecting data within a limited beam time.
NASA Astrophysics Data System (ADS)
Pape, Thomas; Hohnberg, Hans-Jürgen; Wunsch, David; Anders, Erik; Freudenthal, Tim; Huhn, Katrin; Bohrmann, Gerhard
2017-11-01
Pressure barrels for sampling and preservation of submarine sediments under in situ pressure with the robotic sea-floor drill rig MeBo (Meeresboden-Bohrgerät) housed at the MARUM (Bremen, Germany) were developed. Deployments of the so-called MDP (MeBo pressure vessel) during two offshore expeditions off New Zealand and off Spitsbergen, Norway, resulted in the recovery of sediment cores with pressure stages equaling in situ hydrostatic pressure. While initially designed for the quantification of gas and gas-hydrate contents in submarine sediments, the MDP also allows for analysis of the sediments under in situ pressure with methods typically applied by researchers from other scientific fields (geotechnics, sedimentology, microbiology, etc.). Here we report on the design and operational procedure of the MDP and demonstrate full functionality by presenting the first results from pressure-core degassing and molecular gas analysis.
Baires-Campos, Felipe-Eduardo; Jimbo, Ryo; Fonseca-Oliveira, Maiolino-Thomaz; Moura, Camila; Zanetta-Barbosa, Darceny; Coelho, Paulo-Guilherme
2015-01-01
Background This study histologically evaluated two implant designs: a classic thread design versus one specifically designed for healing chamber formation, placed with two drilling protocols. Material and Methods Forty dental implants (4.1 mm diameter) with two different macrogeometries were inserted in the tibia of 10 Beagle dogs, and maximum insertion torque was recorded. Drilling techniques were: up to 3.75 mm diameter (regular group) and up to 4.0 mm diameter (overdrilling group) for both implant designs. At 2 and 4 weeks, samples were retrieved and processed for histomorphometric analysis. For insertion torque, BIC (bone-to-implant contact), and BAFO (bone area fraction occupied), a general linear model was employed, including instrumentation technique and time in vivo as independent variables. Results The insertion torque significantly decreased as a function of increasing drilling diameter for both implant designs (p<0.001). No significant differences were detected between implant designs for each drilling technique (p>0.18). A significant increase in BIC was observed from 2 to 4 weeks for implants placed with the overdrilling technique only (p<0.03), but not for those placed in the 3.75 mm drilling sites (p>0.32). Conclusions Despite the differences between implant designs and drilling techniques, an intramembranous-like healing mode with newly formed woven bone prevailed. Key words: Histomorphometry, biomechanical, in vivo, initial stability, insertion torque, osseointegration. PMID:25858087
Controlling aliased dynamics in motion systems? An identification for sampled-data control approach
NASA Astrophysics Data System (ADS)
Oomen, Tom
2014-07-01
Sampled-data control systems occasionally exhibit aliased resonance phenomena within the control bandwidth. The aim of this paper is to investigate these aliased dynamics, with application to a high-performance industrial nano-positioning machine. This necessitates a full sampled-data control design approach, since these aliased dynamics endanger both the at-sample performance and the intersample behaviour. The proposed framework comprises both system identification and sampled-data control. In particular, the sampled-data control objective necessitates models that encompass the intersample behaviour, i.e., ideally continuous-time models. Application of the proposed approach to an industrial wafer stage system provides thorough insight and new control design guidelines for controlling aliased dynamics.
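The mechanism behind such in-bandwidth aliased resonances is the standard frequency-folding rule: a continuous-time resonance above the Nyquist frequency reappears in the sampled system folded into the baseband. A small sketch with illustrative numbers (not from the paper):

```python
def aliased_frequency(f_resonance, f_sample):
    """Frequency (Hz) at which a continuous-time resonance appears after
    sampling: wrap into one sampling period, then fold about Nyquist."""
    f = f_resonance % f_sample          # wrap into [0, f_sample)
    return min(f, f_sample - f)         # fold about f_sample / 2

# A 900 Hz mechanical resonance sampled at 1 kHz shows up at 100 Hz,
# well inside a typical motion-control bandwidth even though the true
# resonance lies far above the 500 Hz Nyquist frequency.
f_alias = aliased_frequency(900.0, 1000.0)   # 100.0 Hz
```

This is why the paper argues for continuous-time models: a discrete-time model identified at the sample rate sees only the 100 Hz alias and cannot distinguish it from a genuine 100 Hz mode, while the intersample behaviour is governed by the true 900 Hz resonance.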
Study of Alternate Space Shuttle Concepts. Volume 2. Part 1: Concept Analysis and Definition
NASA Technical Reports Server (NTRS)
1971-01-01
Three different space shuttle systems have been defined and analyzed. The first is a stage-and-one-half system optimized to meet program requirements. The second is a two-stage, fully reusable system also designed to meet program requirements. The third is a convertible system which operates initially as a stage-and-one-half system and is subsequently converted to a two-stage, fully reusable system by reconfiguration of the orbiter vehicle and development of a booster vehicle. The design and performance of this third system must necessarily be compromised somewhat to facilitate the conversion. For each system, the applicable requirements, ground rules, and assumptions are defined. The characteristics of each system are listed and a detailed description and analysis of the system are presented. Finally, a cost analysis for the system is given.
A two-stage rotary blood pump design with potentially lower blood trauma: a computational study.
Thamsen, Bente; Mevert, Ricardo; Lommel, Michael; Preikschat, Philip; Gaebler, Julia; Krabatsch, Thomas; Kertzscher, Ulrich; Hennig, Ewald; Affeld, Klaus
2016-06-15
In current rotary blood pumps, complications related to blood trauma due to shear stresses are still frequently observed clinically. Reducing the rotor tip speed might decrease blood trauma. Therefore, the aim of this project was to design a two-stage rotary blood pump leading to lower shear stresses. Using the principles of centrifugal pumps, two diagonal rotor stages were designed with an outer diameter of 22 mm. The first stage begins with a flow straightener and terminates with a diffusor, while a volute casing behind the second stage is utilized to guide fluid to the outlet. Both stages are combined into one rotating part which is pivoted by cup-socket ruby bearings. Details of the flow field were analyzed employing computational fluid dynamics (CFD). A functional model of the pump was fabricated and the pressure-flow dependency was experimentally assessed. Measured pressure-flow performance of the developed pump indicated its ability to generate adequate pressure heads and flows with characteristic curves similar to centrifugal pumps. According to the CFD results, a pressure of 70 mmHg was produced at a flow rate of 5 L/min and a rotational speed of 3200 rpm. Circumferential velocities could be reduced to 3.7 m/s as compared to 6.2 m/s in a clinically used axial rotary blood pump. Flow fields were smooth with well-distributed pressure fields and comparatively few recirculation or vortices. Substantially smaller volumes were exposed to high shear stresses >150 Pa. Hence, blood trauma might be reduced with this design. Based on these encouraging results, future in vitro investigations to investigate actual blood damage are intended.
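The reported tip-speed reduction can be checked directly: circumferential velocity is v = π · D · n. With the 22 mm outer diameter and 3200 rpm from the abstract, this reproduces the quoted 3.7 m/s.

```python
import math

def tip_speed(diameter_m, rpm):
    """Circumferential (tip) velocity of a rotor in m/s: v = pi * D * n,
    with the rotational speed converted from rpm to revolutions per second."""
    return math.pi * diameter_m * rpm / 60.0

# The 22 mm two-stage rotor at 3200 rpm from the abstract:
v = tip_speed(0.022, 3200)   # ~3.69 m/s, matching the reported 3.7 m/s
```

Since shear stress in the blade-tip gaps scales with this velocity, halving the tip speed relative to the 6.2 m/s of the cited axial pump is the design's route to the smaller high-shear volumes found in the CFD.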
A Method of Visualizing Three-Dimensional Distribution of Yeast in Bread Dough
NASA Astrophysics Data System (ADS)
Maeda, Tatsurou; Do, Gab-Soo; Sugiyama, Junichi; Oguchi, Kosei; Shiraga, Seizaburou; Ueda, Mitsuyoshi; Takeya, Koji; Endo, Shigeru
A novel technique was developed to monitor the change in three-dimensional (3D) distribution of yeast in frozen bread dough samples in accordance with the progress of mixing process. Application of a surface engineering technology allowed the identification of yeast in bread dough by bonding EGFP (Enhanced Green Fluorescent Protein) to the surface of yeast cells. The fluorescent yeast (a biomarker) was recognized as bright spots at the wavelength of 520 nm. A Micro-Slicer Image Processing System (MSIPS) with a fluorescence microscope was utilized to acquire cross-sectional images of frozen dough samples sliced at intervals of 1 μm. A set of successive two-dimensional images was reconstructed to analyze 3D distribution of yeast. Samples were taken from each of four normal mixing stages (i.e., pick up, clean up, development, and final stages) and also from over mixing stage. In the pick up stage yeast distribution was uneven with local areas of dense yeast. As the mixing progressed from clean up to final stages, the yeast became more evenly distributed throughout the dough sample. However, the uniformity in yeast distribution was lost in the over mixing stage possibly due to the breakdown of gluten structure within the dough sample.
Visualization and quantification of three-dimensional distribution of yeast in bread dough.
Maeda, Tatsuro; DO, Gab-Soo; Sugiyama, Junichi; Araki, Tetsuya; Tsuta, Mizuki; Shiraga, Seizaburo; Ueda, Mitsuyoshi; Yamada, Masaharu; Takeya, Koji; Sagara, Yasuyuki
2009-07-01
A three-dimensional (3-D) bio-imaging technique was developed for visualizing and quantifying the 3-D distribution of yeast in frozen bread dough samples in accordance with the progress of the mixing process of the samples, applying cell-surface engineering to the surfaces of the yeast cells. The fluorescent yeast was recognized as bright spots at the wavelength of 520 nm. Frozen dough samples were sliced at intervals of 1 μm by a micro-slicer image processing system (MSIPS) equipped with a fluorescence microscope for acquiring cross-sectional images of the samples. A set of successive two-dimensional images was reconstructed to analyze the 3-D distribution of the yeast. The average shortest distance between centroids of enhanced green fluorescent protein (EGFP) yeasts was 10.7 μm at the pick-up stage, 9.7 μm at the clean-up stage, 9.0 μm at the final stage, and 10.2 μm at the over-mixing stage. The results indicated that the distribution of the yeast cells was the most uniform in the dough of white bread at the final stage, while the heterogeneous distribution at the over-mixing stage was possibly due to the destruction of the gluten network structure within the samples.
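The "average shortest distance between centroids" used as the uniformity statistic above is a mean nearest-neighbour distance in 3-D. A brute-force sketch with synthetic centroid coordinates (the abstract does not publish the raw centroids):

```python
import math

def mean_nearest_neighbour_distance(points):
    """Average distance from each centroid to its nearest neighbour, the
    spacing statistic quoted in the abstract. Brute force O(n^2); fine
    for small point sets, use a k-d tree for large ones."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    total = 0.0
    for i, p in enumerate(points):
        total += min(dist(p, q) for j, q in enumerate(points) if j != i)
    return total / len(points)

# Illustrative: synthetic yeast centroids on a regular 10-micron grid
grid = [(x, y, z) for x in (0, 10, 20) for y in (0, 10, 20) for z in (0, 10)]
d = mean_nearest_neighbour_distance(grid)   # 10.0 for this regular grid
```

Computing this statistic per mixing stage over the reconstructed centroid sets is what yields the 10.7 / 9.7 / 9.0 / 10.2 μm sequence reported above.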
Conceptual Sound System Design for Clifford Odets' "GOLDEN BOY"
NASA Astrophysics Data System (ADS)
Yang, Yen Chun
There are two different aspects to the process of sound design, "Art" and "Science". In my opinion, the sound design should engage both aspects strongly and in interaction with each other. I started the process of designing the sound for GOLDEN BOY by building the city soundscape of New York City in 1937. The scenic design for this piece is in the round, putting the audience all around the stage; this gave me a great opportunity to use surround and spatialization techniques to transform the space into a different sonic world. My spatialization design is composed of two subsystems -- one is a four (4) speaker center cluster diffusing towards the four (4) sections of the audience, and the other is the four (4) speakers at the four (4) corners of the theatre. The outside ring provides rich sound source localization and the inside ring provides more support for control of the spatialization details. In my design, four (4) lavalier microphones are hung under the center iron cage from the four (4) corners of the stage. Each microphone is ten (10) feet above the stage. The signal for each microphone is sent to the two (2) center speakers in the cluster diagonally opposite the microphone. With the appropriate level adjustment of the microphones, the audience will not notice the amplification of the voices; however, through my spatialization system, the presence and location of the voices of all actors are clearly preserved for the entire audience. With such vocal reinforcement provided by the microphones, I no longer need to worry about the underscoring overwhelming the dialogue on stage. A successful sound system design should not only provide a functional system, but also take responsibility for bringing the actors' voices to the audience and engaging the audience with the world that we create on stage.
By designing a system which reinforces the actors' voices while at the same time providing control over localization of movement of sound effects, I was able not only to make the text present and clear for the audiences, but also to support the storyline strongly through my composed music, environmental soundscapes, and underscoring.
Genome-wide Analysis of Genetic Loci Associated with Alzheimer’s Disease
Seshadri, Sudha; Fitzpatrick, Annette L.; Arfan Ikram, M; DeStefano, Anita L.; Gudnason, Vilmundur; Boada, Merce; Bis, Joshua C.; Smith, Albert V.; Carassquillo, Minerva M.; Charles Lambert, Jean; Harold, Denise; Schrijvers, Elisabeth M. C.; Ramirez-Lorca, Reposo; Debette, Stephanie; Longstreth, W.T.; Janssens, A. Cecile J.W.; Shane Pankratz, V.; Dartigues, Jean François; Hollingworth, Paul; Aspelund, Thor; Hernandez, Isabel; Beiser, Alexa; Kuller, Lewis H.; Koudstaal, Peter J.; Dickson, Dennis W.; Tzourio, Christophe; Abraham, Richard; Antunez, Carmen; Du, Yangchun; Rotter, Jerome I.; Aulchenko, Yurii S.; Harris, Tamara B.; Petersen, Ronald C.; Berr, Claudine; Owen, Michael J.; Lopez-Arrieta, Jesus; Varadarajan, Badri N.; Becker, James T.; Rivadeneira, Fernando; Nalls, Michael A.; Graff-Radford, Neill R.; Campion, Dominique; Auerbach, Sanford; Rice, Kenneth; Hofman, Albert; Jonsson, Palmi V.; Schmidt, Helena; Lathrop, Mark; Mosley, Thomas H.; Au, Rhoda; Psaty, Bruce M.; Uitterlinden, Andre G.; Farrer, Lindsay A.; Lumley, Thomas; Ruiz, Agustin; Williams, Julie; Amouyel, Philippe; Younkin, Steve G.; Wolf, Philip A.; Launer, Lenore J.; Lopez, Oscar L.; van Duijn, Cornelia M.; Breteler, Monique M. B.
2010-01-01
Context Genome-wide association studies (GWAS) have recently identified CLU, PICALM and CR1 as novel genes for late-onset Alzheimer’s disease (AD). Objective In a three-stage analysis of new and previously published GWAS on over 35,000 persons (8371 AD cases), we sought to identify and strengthen additional loci associated with AD and confirm these in an independent sample. We also examined the contribution of recently identified genes to AD risk prediction. Design, Setting, and Participants We identified strong genetic associations (p < 10⁻³) in a Stage 1 sample of 3006 AD cases and 14642 controls by combining new data from the population-based Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) consortium (1367 AD cases (973 incident)) with previously reported results from the Translational Genomics Research Institute (TGEN) and Mayo AD GWAS. We identified 2708 single nucleotide polymorphisms (SNPs) with p-values < 10⁻³, and in Stage 2 pooled results for these SNPs with the European AD Initiative (2032 cases, 5328 controls) to identify ten loci with p-values < 10⁻⁵. In Stage 3, we combined data for these ten loci with data from the Genetic and Environmental Risk in AD consortium (3333 cases, 6995 controls) to identify four SNPs with a p-value < 1.7×10⁻⁸. These four SNPs were replicated in an independent Spanish sample (1140 AD cases and 1209 controls). Main outcome measure Alzheimer’s Disease. Results We showed genome-wide significance for two new loci: rs744373 near BIN1 (OR: 1.13; 95% CI: 1.06–1.21 per copy of the minor allele; p = 1.6×10⁻¹¹) and rs597668 near EXOC3L2/BLOC1S3/MARK4 (OR: 1.18; 95% CI: 1.07–1.29; p = 6.5×10⁻⁹). Associations of CLU, PICALM, BIN1 and EXOC3L2 with AD were confirmed in the Spanish sample (p < 0.05). However, CLU and PICALM did not improve incident AD prediction beyond age, sex, and APOE (improvement in area under receiver-operating-characteristic curve < 0.003).
Conclusions Two novel genetic loci for AD are reported that for the first time reach genome-wide statistical significance; these findings were replicated in an independent population. Two recently reported associations were also confirmed, but these loci did not improve AD risk prediction, although they implicate biological pathways that may be useful targets for potential interventions. PMID:20460622
Flexible sequential designs for multi-arm clinical trials.
Magirr, D; Stallard, N; Jaki, T
2014-08-30
Adaptive designs that are based on group-sequential approaches have the benefit of being efficient, as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs', on the other hand, can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that a well-performing overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.
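The conditional error principle mentioned above can be made concrete for a simple two-stage z-test: a design modification at the interim preserves the overall type I error as long as the probability of final rejection, conditional on the interim data and computed under the null, is not increased. A minimal sketch under assumed normality (the sample sizes and boundary are illustrative, not from the paper):

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def conditional_error(z1, n1, n2, c):
    """P(reject at the final analysis | interim z1) under H0, for the
    pooled statistic Z = (sqrt(n1)*z1 + sqrt(n2)*z2) / sqrt(n1+n2) > c."""
    n = n1 + n2
    return 1 - phi((c * math.sqrt(n) - z1 * math.sqrt(n1)) / math.sqrt(n2))

# Hypothetical pre-planned design: 50 + 50 observations, final boundary c = 1.96
A = conditional_error(z1=1.2, n1=50, n2=50, c=1.96)
```

A flexible modification (e.g., treatment selection or sample size reassessment at the interim) then remains valid if the redesigned second stage is tested at level A.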
Design options for advanced manned launch systems
NASA Astrophysics Data System (ADS)
Freeman, Delma C.; Talay, Theodore A.; Stanley, Douglas O.; Lepsch, Roger A.; Wilhite, Alan W.
1995-03-01
Various concepts for advanced manned launch systems are examined for delivery missions to space station and polar orbit. Included are single- and two-stage winged systems with rocket and/or air-breathing propulsion systems. For near-term technologies, two-stage reusable rocket systems are favored over single-stage rocket or two-stage air-breathing/rocket systems. Advanced technologies enable viable single-stage-to-orbit (SSTO) concepts. Although two-stage rocket systems continue to be lighter in dry weight than SSTO vehicles, advantages in simpler operations may make SSTO vehicles more cost-effective over the life cycle. Generally, rocket systems maintain a dry-weight advantage over air-breathing systems at the advanced technology levels, but to a lesser degree than when near-term technologies are used. More detailed understanding of vehicle systems and associated ground and flight operations requirements and procedures is essential in determining quantitative discrimination between these latter concepts.
Design and dynamic analysis of a piezoelectric linear stage for pipetting liquid samples
NASA Astrophysics Data System (ADS)
Yu-Jen, Wang; Chien, Lee; Yi-Bin, Jiang; Kuo-Chieh, Fu
2017-06-01
Piezoelectric actuators have been widely used in positioning stages because of their compact size, stepping controllability, and holding force. This study proposes a piezoelectric-driven stage composed of a bi-electrode piezoelectric slab, a capacitive position sensor, and a capillary filling detector for filling liquid samples into nanopipettes using capillary flow. This automatic sample-filling device is suitable for transmission electron microscopy image-based quantitative analysis of aqueous products with added nanoparticles. The step length of the actuator is adjusted by a pulse width modulation signal that depends on the stage position; the actuator stops moving once capillary filling has been detected. A novel dynamic model of the piezoelectric-driven stage based on collision interactions between the piezoelectric actuator and the sliding clipper is presented. Unknown model parameters are derived from the steady-state solution of the equivalent steady phase angle. The output force of the piezoelectric actuator is formulated using the impulse and momentum principle. Considering the applied forces and the relative velocity between the sliding clipper and the piezoelectric slab, the stage's dynamic response is confirmed against the experimental results. Moreover, the model can explain how in-phase slanted trajectories of the piezoelectric slab, rather than elliptical trajectories, drive the slider. The maximum velocity and minimum step length of the piezoelectric-driven stage are 130 mm s⁻¹ and 1 μm, respectively.
Skates, Steven J.; Gillette, Michael A.; LaBaer, Joshua; Carr, Steven A.; Anderson, N. Leigh; Liebler, Daniel C.; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L.; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R.; Rodriguez, Henry; Boja, Emily S.
2014-01-01
Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC), with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance, and the development of an approach to calculate biospecimen sample size for proteomic studies in discovery and verification stages prior to clinical validation stage. This represents a first step towards building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research. PMID:24063748
Users manual for updated computer code for axial-flow compressor conceptual design
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1992-01-01
An existing computer code that determines the flow path for an axial-flow compressor either for a given number of stages or for a given overall pressure ratio was modified for use in air-breathing engine conceptual design studies. This code uses a rapid approximate design methodology that is based on isentropic simple radial equilibrium. Calculations are performed at constant-span-fraction locations from tip to hub. Energy addition per stage is controlled by specifying the maximum allowable values for several aerodynamic design parameters. New modeling was introduced to the code to overcome perceived limitations. Specific changes included variable rather than constant tip radius, flow path inclination added to the continuity equation, input of mass flow rate directly rather than indirectly as inlet axial velocity, solution for the exact value of overall pressure ratio rather than for any value that met or exceeded it, and internal computation of efficiency rather than the use of input values. The modified code was shown to be capable of computing efficiencies that are compatible with those of five multistage compressors and one fan that were tested experimentally. This report serves as a users manual for the revised code, Compressor Spanline Analysis (CSPAN). The modeling modifications, including two internal loss correlations, are presented. Program input and output are described. A sample case for a multistage compressor is included.
NASA Astrophysics Data System (ADS)
Halbe, Johannes; Pahl-Wostl, Claudia; Adamowski, Jan
2018-01-01
Multiple barriers constrain the widespread application of participatory methods in water management, including the more technical focus of most water agencies, the additional cost and time requirements of stakeholder involvement, and institutional structures that impede collaborative management. This paper presents a stepwise methodological framework that addresses the challenges of context-sensitive initiation, design and institutionalization of participatory modeling processes. The methodological framework consists of five successive stages: (1) problem framing and stakeholder analysis, (2) process design, (3) individual modeling, (4) group model building, and (5) institutionalized participatory modeling. The Management and Transition Framework is used for problem diagnosis (Stage One), context-sensitive process design (Stage Two) and analysis of requirements for the institutionalization of participatory water management (Stage Five). Conceptual modeling is used to initiate participatory modeling processes (Stage Three) and ensure high compatibility with quantitative modeling approaches (Stage Four). This paper describes the proposed participatory model building (PMB) framework and provides a case study of its application in Québec, Canada. The results of the Québec study demonstrate the applicability of the PMB framework for initiating and designing participatory model building processes and analyzing barriers to institutionalization.
Two-stage, dilute sulfuric acid hydrolysis of wood : an investigation of fundamentals
John F. Harris; Andrew J. Baker; Anthony H. Conner; Thomas W. Jeffries; James L. Minor; Roger C. Pettersen; Ralph W. Scott; Edward L Springer; Theodore H. Wegner; John I. Zerbe
1985-01-01
This paper presents a fundamental analysis of the processing steps in the production of methanol from southern red oak (Quercus falcata Michx.) by two-stage dilute sulfuric acid hydrolysis. Data for hemicellulose and cellulose hydrolysis are correlated using models. This information is used to develop and evaluate a process design.
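Dilute-acid hydrolysis of hemicellulose and cellulose is classically correlated with consecutive first-order (Saeman-type) kinetics: polysaccharide → sugar → degradation products. A minimal sketch of such a model (the rate constants are illustrative, not the paper's fitted values):

```python
import numpy as np

def saeman_yield(k1, k2, t):
    """Sugar yield fraction for consecutive first-order reactions
    polysaccharide -(k1)-> sugar -(k2)-> degradation products."""
    return k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

# Illustrative rate constants (min^-1); real values depend on acid
# concentration and temperature for each hydrolysis stage.
k1, k2 = 0.10, 0.02
t = np.linspace(0.0, 60.0, 601)  # minutes
y = saeman_yield(k1, k2, t)
t_opt = t[np.argmax(y)]  # reaction time maximizing sugar yield
```

The maximum yield occurs at t* = ln(k1/k2)/(k1 - k2); because the two rate constants respond differently to acid strength and temperature, each stage of a two-stage process can be run at conditions tuned to the fraction it targets.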
Workie, Demeke Lakew; Zike, Dereje Tesfaye; Fenta, Haile Mekonnen; Mekonnen, Mulusew Admasu
2017-09-01
Unintended pregnancy related to unmet need is a worldwide problem that affects societies. The main objective of this study was to identify the prevalence and determinants of unmet need for family planning among women aged 15-49 in Ethiopia. Data were taken from round 4 of the Performance Monitoring and Accountability 2020/Ethiopia survey, conducted in April 2016 among 7494 women selected by two-stage stratified sampling. Bi-variable and multi-variable binary logistic regression models accounting for the complex sampling design were fitted. The prevalence of unmet need for family planning in Ethiopia was 16.2%. Women aged 15-24 years were 2.266 times more likely to have unmet need for family planning than women above 35 years. Currently married women were about 8 times more likely to have unmet need for family planning than never-married women. Women with no under-five children were 0.125 times as likely to have unmet need for family planning as those with more than two under-five children. The key determinants of unmet need for family planning in Ethiopia were residence, age, marital status, education, number of household members, birth events, and number of under-five children. The Government of Ethiopia should therefore take immediate steps to address the causes of the high unmet need for family planning among women.
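The "times more likely" figures reported here are odds ratios from the logistic regression model. For a single binary determinant, the unadjusted odds ratio and its Wald confidence interval can be computed directly from a 2x2 table; a minimal sketch (the counts below are hypothetical, not the survey's data):

```python
import numpy as np

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table with a 95% Wald confidence interval.
    a: exposed cases, b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = np.exp(np.log(or_) - 1.96 * se)
    hi = np.exp(np.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts: unmet need (cases) by age group (exposure)
or_val, lo, hi = odds_ratio(120, 380, 60, 440)
```

The survey's adjusted estimates additionally account for the other covariates and the two-stage stratified design, e.g., via survey-weighted multivariable logistic regression.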
Kim, Sung-Jin; Reidy, Shaelah M; Block, Bruce P; Wise, Kensall D; Zellers, Edward T; Kurabayashi, Katsuo
2010-07-07
In comprehensive two-dimensional gas chromatography (GC×GC), a modulator is placed at the juncture between two separation columns to focus and re-inject eluting mixture components, thereby enhancing the resolution and the selectivity of analytes. As part of an effort to develop a microGC×microGC prototype, in this report we present the design, fabrication, thermal operation, and initial testing of a two-stage microscale thermal modulator (microTM). The microTM contains two sequential serpentine Pyrex-on-Si microchannels (stages) that cryogenically trap analytes eluting from the first-dimension column and thermally inject them into the second-dimension column in a rapid, programmable manner. For each modulation cycle (typically 5 s for cooling with refrigeration work of 200 J and 100 ms for heating at 10 W), the microTM is kept at approximately -50 °C by a solid-state thermoelectric cooling unit placed within a few tens of micrometres of the device, heated to 250 °C at 2800 °C s⁻¹ by integrated resistive microheaters, and then cooled back to -50 °C at 250 °C s⁻¹. Thermal crosstalk between the two stages is less than 9%. A lumped heat transfer model is used to analyze the device design with respect to the rates of heating and cooling, power dissipation, and inter-stage thermal crosstalk as a function of Pyrex-membrane thickness, air-gap depth, and stage separation distance. Experimental results are in agreement with trends predicted by the model. Preliminary tests using a conventional capillary column interfaced to the microTM demonstrate the capability for enhanced sensitivity and resolution as well as the modulation of a mixture of alkanes.
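A lumped heat transfer model of the kind used to analyze the microTM reduces each stage to a single thermal mass C with conductance G to its surroundings: C dT/dt = P - G(T - T_env). A minimal sketch with illustrative parameters (not the device's actual values):

```python
def lumped_stage_temperature(P_heat, G, C, T_env, T0, dt, t_end):
    """Forward-Euler integration of C*dT/dt = P_heat - G*(T - T_env).
    Returns a list of (time, temperature) pairs."""
    T, t, out = T0, 0.0, []
    while t <= t_end:
        out.append((t, T))
        T += dt * (P_heat - G * (T - T_env)) / C
        t += dt
    return out

# Illustrative parameters: 10 W heater pulse, small thermal mass,
# stage starting at the -50 C trapping temperature.
traj = lumped_stage_temperature(P_heat=10.0, G=0.02, C=3e-3,
                                T_env=-50.0, T0=-50.0, dt=1e-4, t_end=0.1)
```

With these assumed values the initial heating rate is P/C ≈ 3300 °C s⁻¹, the same order of magnitude as the reported 2800 °C s⁻¹; the cooling rate after the heater turns off is instead governed by the ratio G/C, which is what the membrane thickness and air-gap depth control.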
NASA Astrophysics Data System (ADS)
Chen, Jing; Qiu, Xiaojie; Yin, Cunyi; Jiang, Hao
2018-02-01
An efficient method for designing broadband gain-flattened Raman fiber amplifiers with multiple pumps is proposed based on least squares support vector regression (LS-SVR). A multi-input multi-output LS-SVR model is introduced to replace the complicated solution of the nonlinear coupled Raman amplification equations. The proposed approach contains two stages: an offline training stage and an online optimization stage. During the offline stage, the LS-SVR model is trained. Owing to the good generalization capability of LS-SVR, the net gain spectrum can be obtained directly and accurately for any combination of pump wavelengths and powers input to the well-trained model. During the online stage, we incorporate the LS-SVR model into a particle swarm optimization algorithm to find the optimal pump configuration. The design results demonstrate that the proposed method greatly shortens the computation time and enhances the efficiency of pump parameter optimization for Raman fiber amplifier design.
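The online stage couples the trained surrogate to particle swarm optimization. A minimal sketch of the PSO loop, with a toy quadratic standing in for the LS-SVR gain-ripple model (the pump wavelength/power values and the objective are hypothetical):

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=100, seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        # Inertia + cognitive + social velocity update, then bounded move
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Toy surrogate standing in for the trained LS-SVR model: "gain ripple"
# minimized at pump wavelength 1440 nm and pump power 0.3 W (hypothetical).
ripple = lambda p: (p[0] - 1440.0) ** 2 / 1e4 + (p[1] - 0.3) ** 2
best, val = pso_minimize(ripple, [(1420.0, 1460.0), (0.0, 1.0)])
```

In the actual design flow, ripple would instead call the trained LS-SVR model and score the flatness of the predicted net gain spectrum over the full pump configuration.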
Social relationships and physiological determinants of longevity across the human life span.
Yang, Yang Claire; Boen, Courtney; Gerken, Karen; Li, Ting; Schorpp, Kristen; Harris, Kathleen Mullan
2016-01-19
Two decades of research indicate causal associations between social relationships and mortality, but important questions remain as to how social relationships affect health, when effects emerge, and how long they last. Drawing on data from four nationally representative longitudinal samples of the US population, we implemented an innovative life course design to assess the prospective association of both structural and functional dimensions of social relationships (social integration, social support, and social strain) with objectively measured biomarkers of physical health (C-reactive protein, systolic and diastolic blood pressure, waist circumference, and body mass index) within each life stage, including adolescence and young, middle, and late adulthood, and compare such associations across life stages. We found that a higher degree of social integration was associated with lower risk of physiological dysregulation in a dose-response manner in both early and later life. Conversely, lack of social connections was associated with vastly elevated risk in specific life stages. For example, social isolation increased the risk of inflammation by the same magnitude as physical inactivity in adolescence, and the effect of social isolation on hypertension exceeded that of clinical risk factors such as diabetes in old age. Analyses of multiple dimensions of social relationships within multiple samples across the life course produced consistent and robust associations with health. Physiological impacts of structural and functional dimensions of social relationships emerge uniquely in adolescence and midlife and persist into old age.
ERIC Educational Resources Information Center
Al-Zahrani, Mona Abdullah Bakheet
2011-01-01
The current study aimed at investigating the effectiveness of keyword-based instruction in enhancing English vocabulary achievement and retention of intermediate stage pupils with different working memory capacities. The study adopted a quasi experimental design employing two groups (experimental and control). The design included an independent…
Rho-Isp Revisited and Basic Stage Mass Estimating for Launch Vehicle Conceptual Sizing Studies
NASA Technical Reports Server (NTRS)
Kibbey, Timothy P.
2015-01-01
The ideal rocket equation is manipulated to demonstrate the essential link between propellant density and specific impulse as the two primary stage performance drivers for a launch vehicle. This is illustrated by examining volume-limited stages such as first stages and boosters. This proves to be a good approximation for first-order or Phase A vehicle design studies for solid rocket motors and for liquid stages, except when comparing to hydrogen-fueled stages. A next-order mass model is developed that is able to model the mass differences between hydrogen-fueled and other stages. Propellants considered range in density from liquid methane to inhibited red fuming nitric acid. Calculated comparisons are shown for solid rocket boosters, liquid first stages, liquid upper stages, and a balloon-deployed single-stage-to-orbit concept. The derived relationships are ripe for inclusion in a multi-stage design space exploration and optimization algorithm, as well as for single-parameter comparisons such as those shown herein.
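The link between propellant density and specific impulse shows up directly in the ideal rocket equation for a volume-limited stage, where propellant mass is ρ·V for a fixed tank volume. A minimal sketch (all masses, volumes, and propellant properties are illustrative, not the paper's models):

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def stage_delta_v(isp_s, m_dry, m_prop, m_payload):
    """Ideal rocket equation: dv = g0 * Isp * ln(m0 / mf)."""
    m0 = m_dry + m_prop + m_payload
    mf = m_dry + m_payload
    return G0 * isp_s * math.log(m0 / mf)

# Hypothetical volume-limited booster: fixed 50 m^3 tank, same dry mass.
# Dense propellant (~1000 kg/m^3, Isp ~300 s) vs hydrogen-class
# (~360 kg/m^3 bulk, Isp ~450 s); all numbers illustrative.
V = 50.0
dv_dense = stage_delta_v(300.0, 5000.0, 1000.0 * V, 10000.0)
dv_h2 = stage_delta_v(450.0, 5000.0, 360.0 * V, 10000.0)
```

With the tank volume fixed, the denser propellant wins despite its lower Isp, which is the first-order behavior described above for first stages and boosters; the next-order mass model refines this by letting dry mass vary with propellant choice.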
Two-stage high frequency pulse tube refrigerator with base temperature below 10 K
NASA Astrophysics Data System (ADS)
Chen, Liubiao; Wu, Xianlin; Liu, Sixue; Zhu, Xiaoshuang; Pan, Changzhao; Guo, Jia; Zhou, Yuan; Wang, Junjie
2017-12-01
This paper introduces our recent experimental results for pulse tube refrigerators driven by linear compressors. The working frequency is 23-30 Hz, much higher than that of G-M type coolers (the developed cryocoolers are referred to as high frequency pulse tube refrigerators, HPTRs, in this paper). To achieve a temperature below 10 K, two types of two-stage configuration, gas-coupled and thermal-coupled, have been designed, built, and tested. At present, both types can achieve a no-load temperature below 10 K using only one compressor. The second stage of the gas-coupled HPTR achieves a cooling power of 16 mW at 10 K when a 400 mW heat load is applied to the first stage at 60 K, with a total input power of 400 W. For the thermal-coupled HPTR, the designed cooling power of the first stage is 10 W at 80 K, and the second stage then reaches a temperature below 10 K with a total input power of 300 W. In the current preliminary experiment, liquid nitrogen is used in place of the first-stage coaxial configuration as the precooling stage, and a no-load temperature of 9.6 K is achieved with a stainless steel mesh regenerator. Simulation results show that it is possible to achieve a temperature below 8 K using Er3Ni spheres with diameters of about 50-60 μm. The configurations, phase shifters, and regenerative materials of the two types of two-stage high frequency pulse tube refrigerator are discussed, and some typical experimental results and considerations for achieving better performance are also presented.
Efficient and lightweight current leads
NASA Astrophysics Data System (ADS)
Bromberg, L.; Dietz, A. J.; Michael, P. C.; Gold, C.; Cheadle, M.
2014-01-01
Current leads generate substantial cryogenic heat loads in short-length High Temperature Superconductor (HTS) distribution systems. Thermal conduction and Joule (I²R) losses along the current leads comprise the largest cryogenic loads for short distribution systems. Current leads with two temperature stages have been designed, constructed, and tested, with the goals of minimizing electrical power consumption and providing thermal margin for the cable. We present the design of a two-stage current lead system operating at 140 K and 55 K. This design is very attractive when implemented with a two-stage turbo-Brayton cycle refrigerator, offering substantial power and weight reductions. A heat exchanger is used at each temperature station, with conduction-cooled stages in between. Compact, efficient heat exchangers are challenging because of the gaseous coolant. The design, optimization, and performance of the heat exchangers used for the current leads are presented. We have made extensive use of CFD models to optimize the hydraulic and thermal performance of the heat exchangers; the methodology and the results of the optimization process are discussed. The use of demountable connections between the cable and the terminations allows for ease of assembly but requires a means of aggressively cooling the region of the joint, which we also discuss. We have fabricated a 7 m, 5 kA cable with second generation HTS tapes. The performance of the system is described.
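The trade-off driving current lead design can be sketched for a simple conduction-cooled metallic lead: the conduction load scales with cross-sectional area A while the Joule load scales with 1/A, so there is an optimum area giving a minimum total load of 2·I·sqrt(k·ρₑ·ΔT). A minimal sketch with illustrative copper-like properties (not the paper's design numbers):

```python
import math

def lead_heat_load(I, L, A, k, rho_e, dT):
    """Conduction plus Joule heating for a simple conduction-cooled lead:
    Q = k*A*dT/L + I^2 * rho_e * L / A."""
    return k * A * dT / L + I ** 2 * rho_e * L / A

def optimal_area(I, L, k, rho_e, dT):
    """Area minimizing the total load (dQ/dA = 0)."""
    return I * L * math.sqrt(rho_e / (k * dT))

# Illustrative values: 5 kA lead, 0.5 m long, spanning a 85 K interval
I, L, k, rho_e, dT = 5000.0, 0.5, 400.0, 2e-8, 85.0  # A, m, W/m-K, ohm-m, K
A_opt = optimal_area(I, L, k, rho_e, dT)
Q_min = lead_heat_load(I, L, A_opt, k, rho_e, dT)
```

Note that the minimum load is independent of the lead length and proportional to the current, which is why splitting the temperature span into two stages, each optimized over a smaller ΔT, reduces the total refrigeration power.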
Investigation on a thermal-coupled two-stage Stirling-type pulse tube cryocooler
NASA Astrophysics Data System (ADS)
Yang, Luwei
2008-11-01
Multi-stage Stirling-type pulse tube cryocoolers operating at high frequency (30-60 Hz) have been one important research direction in recent years. A two-stage Stirling-type pulse tube cryocooler with thermally coupled stages was designed and built two years ago, and some results have been published. To study the effect of the first-stage precooling temperature, its influence on performance is experimentally investigated here. The results show that at high input power, when the precooling temperature is below 110 K, its effect on the second-stage temperature is quite small. The precooling temperature also has an evident effect on the pulse tube temperature distribution; to the author's knowledge, this is the first time this phenomenon has been noticed. The mean working pressure is also investigated, and a lowest temperature of 12.8 K has been obtained with 500 W input power and 1.22 MPa average pressure; this is the lowest reported temperature for high frequency two-stage pulse tube cryocoolers. Simulations reproduce the typical features observed in the experiments.
Development of a 4 K Separate Two-Stage Pulse Tube Refrigerator with High Efficiency
NASA Astrophysics Data System (ADS)
Qiu, L. M.; He, Y. L.; Gan, Z. H.; Chen, G. B.
2006-04-01
Compared to traditional 4 K cryocoolers, the separate 4 K pulse tube refrigerator (PTR) consists of two independent PTRs that are thermally connected between the cold end of the first stage and an intermediate position of the second-stage regenerator. It is possible to use a different frequency, valve timing, phase shifter, and even compressor for each stage for better cooling performance. A 4 K separate two-stage PTR was designed and manufactured. The first stage was separately optimized: a minimum temperature of 12.6 K and a cooling capacity of 59.0 W at 40 K were achieved by adding some Er3Ni at the cold part of the regenerator. An experimental investigation of valve timing effects on the cooling performance of the 4 K separate two-stage PTR is reported. The experiments show that optimizing the valve timing can considerably improve the cooling performance of the PTR. Cooling capacities of 0.59 W at 4.2 K and 15.4 W at 37.0 K were achieved with an actual input power of 6.6 kW. The effect of frequency on the performance of the separate two-stage PTR is also presented.
Eteng, Akaa Agbaeze; Abdul Rahim, Sharul Kamal; Leow, Chee Yen; Chew, Beng Wah; Vandenbosch, Guy A E
2016-01-01
Q-factor constraints are usually imposed on conductor loops employed as proximity-range High Frequency Radio Frequency Identification (HF-RFID) reader antennas to ensure adequate data bandwidth. However, pairing such low Q-factor loops in inductive energy transmission links restricts the link transmission performance. The contribution of this paper is to assess the improvement in the transmission performance of a planar square loop that a two-stage design method achieves relative to an initial design, without compromising a Q-factor constraint. The first stage of the synthesis flow is analytical in approach, and determines the number and spacing of turns by which coupling between similar paired square loops can be enhanced with low deviation from the Q-factor limit presented by an initial design. The second stage applies full-wave electromagnetic simulations to determine more appropriate turn spacings and widths to match the Q-factor constraint and achieve improved coupling relative to the initial design. Evaluating the design method in a test scenario yielded a more than 5% increase in link transmission efficiency, as well as an improvement in the link fractional bandwidth by more than 3%, without violating the loop Q-factor limit. These transmission performance enhancements are indicative of a potential for modifying proximity HF-RFID reader antennas for efficient inductive energy transfer and data telemetry links.
NASA Astrophysics Data System (ADS)
Kasthurirengan, Srinivasan; Behera, Upendra; Nadig, D. S.; Krishnamoorthy, V.
2012-06-01
Single and two-stage Pulse Tube Cryocoolers (PTC) have been designed, fabricated and experimentally studied. The single-stage PTC reaches a no-load temperature of ~ 29 K at its cold end, while the two-stage PTC reaches ~ 2.9 K at its second-stage cold end and ~ 60 K at its first-stage cold end. The two-stage Pulse Tube Cryocooler provides a cooling power of ~ 250 mW at 4.2 K. The single-stage system uses stainless steel meshes along with Pb granules as its regenerator materials, while the two-stage PTC uses combinations of Pb along with Er3Ni / HoCu2 as the second-stage regenerator materials. Normally, the above systems are insulated by thermal radiation shields and mounted inside a vacuum chamber which is maintained at high vacuum. To evaluate the performance of these systems under possible conditions of loss of vacuum, with and without radiation shields, experimental studies have been performed. The heat-in-leak under such severe conditions has been estimated from the heat load characteristics of the respective stages. The experimental results are analyzed to obtain surface emissivities and effective thermal conductivities as a function of interspace pressure.
NASA Technical Reports Server (NTRS)
Steinke, Ronald J.
1989-01-01
The Rai ROTOR1 code for two-dimensional, unsteady viscous flow analysis was applied to a supersonic throughflow fan stage design. The axial Mach number for this fan design increases from 2.0 at the inlet to 2.9 at the outlet. The Rai code uses overlapped O- and H-grids that are appropriately packed. The Rai code was run on a Cray XMP computer; data postprocessing and graphics were then performed to obtain detailed insight into the stage flow. The large rotor wakes uniformly traversed the rotor-stator interface and dispersed as they passed through the stator passage. Only weak blade shock losses were computed, which supports the design goals. Strong viscous effects caused large blade wakes and a low fan efficiency. Rai code flow predictions were essentially steady for the rotor, and they compared well with Chima rotor viscous code predictions based on a C-grid of similar density.
Design and Construction Process of Two LEED Certified University Buildings: A Collective Case Study
ERIC Educational Resources Information Center
Rich, Kim
2011-01-01
This study was conducted at the early stages of integrating LEED into the design process in which a clearer understanding of what sustainable and ecological design was about became evident through the duration of designing and building of two academic buildings on a university campus. In this case study, due to utilizing a grounded theory…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, Qinzhuo, E-mail: liaoqz@pku.edu.cn; Zhang, Dongxiao; Tchelepi, Hamdi
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod–Patterson–Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate the accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
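The full dimension-adaptive Smolyak construction described in this abstract is beyond a short sketch, but its one-dimensional building block, evaluating the model only at Gaussian quadrature nodes rather than at random samples, can be illustrated as follows. The `model` function and the standard-normal input are illustrative assumptions, not the paper's flow solver.

```python
import numpy as np

def collocation_moments(model, n_nodes):
    """Estimate E[model(Y)] and Var[model(Y)] for Y ~ N(0, 1) by
    Gauss-Hermite collocation: the model is evaluated only at the
    quadrature nodes, never at random samples."""
    # probabilists' Hermite nodes/weights (weight function exp(-x^2/2))
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / np.sqrt(2 * np.pi)   # normalize to the N(0,1) pdf
    vals = np.array([model(y) for y in nodes])
    mean = np.sum(weights * vals)
    var = np.sum(weights * (vals - mean) ** 2)
    return mean, var
```

For a lognormal test case, `model = exp`, the moments are known in closed form, so the quadrature estimate can be checked against E[e^Y] = e^0.5 and Var[e^Y] = e(e - 1).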
E-books: nurse faculty use and concerns.
Abell, Cathy H; Garrett-Wright, Dawn
2014-01-01
The purpose of this study was to identify nurse educators' stage of concern regarding e-books and examine relationships between stage of concern and demographic variables. The use of e-books is growing, and nursing faculty must be prepared to use this evolving technology. A descriptive design was used with a convenience sample of 50 nurse educators attending a professional conference. Data were collected using a demographic questionnaire and the Stages of Concern (SoC) questionnaire. Sixty-four percent of participants' first high stage was Stage 0 (awareness); 22 percent had a first high stage of Stage 1 (informational). Using ordinal regression, no statistical significance was noted with the highest Stage of Concern and age (p = .431) or experience as a nurse educator (p = .893). Findings indicate low usage, faculty concerns, and the need for ongoing education regarding e-books.
NASA Astrophysics Data System (ADS)
Danish, Syed Noman; Qureshi, Shafiq Rehman; EL-Leathy, Abdelrahman; Khan, Salah Ud-Din; Umer, Usama; Ma, Chaochen
2014-12-01
Extensive numerical investigations of the performance and flow structure in an unshrouded tandem-bladed centrifugal compressor are presented in comparison to a conventional compressor. Stage characteristics are explored for various tip clearance levels, axial spacings and circumferential clockings. The conventional impeller was modified to a tandem-bladed design with no modifications to the backsweep angle, meridional gas passage or camber distributions, in order to allow a true comparison with the conventional design. Performance degradation with increasing tip clearance is observed for both the conventional and tandem designs. Linear-equation models for correlating stage characteristics with tip clearance are proposed. Comparing the two designs, it is clearly evident that the conventional design shows better performance at moderate flow rates. However, near choke flow, the tandem design gives better results, primarily because of the increase in throat area. The surge point flow rate also drops for the tandem compressor, resulting in an increased range of operation.
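The "linear-equation models" correlating stage characteristics with tip clearance amount to fitting a straight line through the computed operating points. A minimal sketch with made-up illustrative numbers (not the paper's data):

```python
import numpy as np

# Hypothetical tip-clearance levels (fraction of blade height) and the
# corresponding computed stage efficiency -- illustrative values only.
clearance = np.array([0.02, 0.05, 0.08, 0.11])
efficiency = np.array([0.86, 0.84, 0.82, 0.80])

# Degree-1 least-squares fit: efficiency ~ slope * clearance + intercept
slope, intercept = np.polyfit(clearance, efficiency, 1)
```

A model of this form lets the designer interpolate the expected efficiency penalty for any clearance level between the simulated cases.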
A robust two-stage design identifying the optimal biological dose for phase I/II clinical trials.
Zang, Yong; Lee, J Jack
2017-01-15
We propose a robust two-stage design to identify the optimal biological dose for phase I/II clinical trials evaluating both toxicity and efficacy outcomes. In the first stage of dose finding, we use the Bayesian model averaging continual reassessment method to monitor the toxicity outcomes and adopt an isotonic regression method based on the efficacy outcomes to guide dose escalation. When the first stage ends, we use the Dirichlet-multinomial distribution to jointly model the toxicity and efficacy outcomes and pick the candidate doses based on a three-dimensional volume ratio. The selected candidate doses are then seamlessly advanced to the second stage for dose validation. Both toxicity and efficacy outcomes are continuously monitored so that any overly toxic and/or less efficacious dose can be dropped from the study as the trial continues. When the phase I/II trial ends, we select the optimal biological dose as the dose obtaining the minimal value of the volume ratio within the candidate set. An advantage of the proposed design is that it does not impose a monotonically increasing assumption on the shape of the dose-efficacy curve. We conduct extensive simulation studies to examine the operating characteristics of the proposed design. The simulation results show that the proposed design has desirable operating characteristics across different shapes of the underlying true dose-toxicity and dose-efficacy curves. The software to implement the proposed design is available upon request. Copyright © 2016 John Wiley & Sons, Ltd.
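The isotonic regression step used here to guide dose escalation is conventionally computed with the pool-adjacent-violators algorithm (PAVA). A minimal sketch, assuming a non-decreasing fit over observed per-dose response rates; the authors' exact estimator and escalation rule may differ:

```python
import numpy as np

def pava(y, w=None):
    """Pool-adjacent-violators: weighted isotonic (non-decreasing) fit."""
    y = np.asarray(y, dtype=float)
    w = np.ones_like(y) if w is None else np.asarray(w, dtype=float)
    means = list(y)                      # current block means
    weights = list(w)                    # current block weights
    idx = [[i] for i in range(len(y))]   # original indices in each block
    i = 0
    while i < len(means) - 1:
        if means[i] > means[i + 1]:      # violation: merge adjacent blocks
            total = weights[i] + weights[i + 1]
            merged = (means[i] * weights[i] + means[i + 1] * weights[i + 1]) / total
            means[i], weights[i] = merged, total
            idx[i] += idx[i + 1]
            del means[i + 1], weights[i + 1], idx[i + 1]
            i = max(i - 1, 0)            # a merge may create a new violation upstream
        else:
            i += 1
    out = np.empty_like(y)
    for m, block in zip(means, idx):
        out[block] = m
    return out
```

For example, observed response rates [0.2, 0.5, 0.4, 0.7] violate monotonicity at the middle pair, which PAVA pools to their average, 0.45.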
Sherrell, Darren A.; Foster, Andrew J.; Hudson, Lee; ...
2015-01-01
The design and implementation of a compact and portable sample alignment system suitable for use at both synchrotron and free-electron laser (FEL) sources and its performance are described. The system provides the ability to quickly and reliably deliver large numbers of samples using the minimum amount of sample possible, through positioning of fixed target arrays into the X-ray beam. The combination of high-precision stages, high-quality sample viewing, a fast controller and a software layer overcome many of the challenges associated with sample alignment. A straightforward interface that minimizes setup and sample changeover time as well as simplifying communication with the stages during the experiment is also described, together with an intuitive naming convention for defining, tracking and locating sample positions. Lastly, the setup allows the precise delivery of samples in predefined locations to a specific position in space and time, reliably and simply.
Confidence Preserving Machine for Facial Action Unit Detection
Zeng, Jiabei; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Xiong, Zhang
2016-01-01
Facial action unit (AU) detection from video has been a long-standing problem in automated facial expression analysis. While progress has been made, accurate detection of facial AUs remains challenging due to ubiquitous sources of errors, such as inter-personal variability, pose, and low-intensity AUs. In this paper, we refer to samples causing such errors as hard samples, and the remaining as easy samples. To address learning with the hard samples, we propose the Confidence Preserving Machine (CPM), a novel two-stage learning framework that combines multiple classifiers following an “easy-to-hard” strategy. During the training stage, CPM learns two confident classifiers. Each classifier focuses on separating easy samples of one class from all else, and thus preserves confidence on predicting each class. During the testing stage, the confident classifiers provide “virtual labels” for easy test samples. Given the virtual labels, we propose a quasi-semi-supervised (QSS) learning strategy to learn a person-specific (PS) classifier. The QSS strategy employs a spatio-temporal smoothness that encourages similar predictions for samples within a spatio-temporal neighborhood. In addition, to further improve detection performance, we introduce two CPM extensions: iCPM that iteratively augments training samples to train the confident classifiers, and kCPM that kernelizes the original CPM model to promote nonlinearity. Experiments on four spontaneous datasets GFT [15], BP4D [56], DISFA [42], and RU-FACS [3] illustrate the benefits of the proposed CPM models over baseline methods and state-of-the-art semi-supervised learning and transfer learning methods. PMID:27479964
An aerodynamic design and numerical investigation of transonic centrifugal compressor stage
NASA Astrophysics Data System (ADS)
Yi, Weilin; Ji, Lucheng; Tian, Yong; Shao, Weiwei; Li, Weiwei; Xiao, Yunhan
2011-09-01
In the present paper, the design of a transonic centrifugal compressor stage with an inlet relative Mach number of about 1.3, and a detailed flow field investigation by three-dimensional CFD, are described. First, the CFD program was validated against an experimental case. Then the preliminary aerodynamic design of the stage was completed using an in-house one-dimensional code. Three types of impellers and two sets of stages were computed and analyzed. It is found that the swept shape of the leading edge has a prominent influence on the performance and can enlarge the flow range. Correspondingly, the performance of the stage with the swept impeller is better than the others. The total pressure ratio and adiabatic efficiency of the final geometry reach 7:1 and 80%, respectively. The vaned diffuser, with the same airfoil sections along the span, has an increased incidence angle at higher spans, which deteriorates the local flow structure and performance.
Planning and design of a knowledge based system for green manufacturing management
NASA Astrophysics Data System (ADS)
Kamal Mohd Nawawi, Mohd; Mohd Zuki Nik Mohamed, Nik; Shariff Adli Aminuddin, Adam
2013-12-01
This paper presents a conceptual design approach to the development of a hybrid Knowledge Based (KB) system for Green Manufacturing Management (GMM) at the planning and design stages. The research concentrates on GMM using a hybrid KB system, which is a blend of a KB system and Gauging Absences of Pre-requisites (GAP). The hybrid KB/GAP system identifies all potential elements of green manufacturing management issues throughout the development of the system. The KB system used in the planning and design stages analyses the gap between the existing and the benchmark organizations, for effective implementation, through the GAP analysis technique. The proposed KBGMM model at the design stage comprises two components, namely the Competitive Priority and Lean Environment modules. Through the simulated results, the KBGMM system has identified, for each module and sub-module, the problem categories in a prioritized manner. The system finalized all the Bad Points (BP) that need to be improved to achieve a benchmark implementation of GMM at the design stage. The system provides valuable decision-making information for planning and designing GMM in a business organization.
Experimental studies of two-stage centrifugal dust concentrator
NASA Astrophysics Data System (ADS)
Vechkanova, M. V.; Fadin, Yu M.; Ovsyannikov, Yu G.
2018-03-01
The article presents experimental results for a two-stage centrifugal dust concentrator, describes its design, and shows the development of an engineering calculation method and laboratory investigations. For the experiments, the authors used quartz, ceramic dust and slag. Dispersion analysis of the dust particles was obtained by the sedimentation method. To build a mathematical model of the dust-collection process, a central composite rotatable design for a four-factor experiment was used. The sequence of experiments was conducted in accordance with a table of random numbers. Conclusions were drawn.
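A central composite rotatable design for a four-factor experiment consists of the 2^4 factorial (cube) points, 2x4 axial (star) points at distance alpha from the center, and replicated center points; rotatability requires alpha = (2^k)^(1/4), which equals 2 for k = 4. A sketch in coded units (the number of center replicates is an assumption, not taken from the article):

```python
import itertools
import numpy as np

def ccd_rotatable(k, n_center=4):
    """Central composite rotatable design matrix in coded units for k factors."""
    # 2^k full-factorial (cube) points at +/-1
    cube = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))
    alpha = (2 ** k) ** 0.25            # rotatability condition
    axial = np.zeros((2 * k, k))        # star points on each axis
    for j in range(k):
        axial[2 * j, j] = -alpha
        axial[2 * j + 1, j] = alpha
    center = np.zeros((n_center, k))    # replicated center points
    return np.vstack([cube, axial, center])

design = ccd_rotatable(4)               # 16 cube + 8 axial + 4 center = 28 runs
```

Each row is one experimental run; the coded levels are then mapped to physical factor ranges (flow rate, particle size, etc.) before running the trials.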
Ontiveros-Valencia, Aura; Tang, Youneng; Zhao, He-Ping; Friese, David; Overstreet, Ryan; Smith, Jennifer; Evans, Patrick; Rittmann, Bruce E; Krajmalnik-Brown, Rosa
2014-07-01
We studied the microbial community structure of pilot two-stage membrane biofilm reactors (MBfRs) designed to reduce nitrate (NO3(-)) and perchlorate (ClO4(-)) in contaminated groundwater. The groundwater also contained oxygen (O2) and sulfate (SO4(2-)), which became important electron sinks that affected the NO3(-) and ClO4(-) removal rates. Using pyrosequencing, we elucidated how important phylotypes of each "primary" microbial group, i.e., denitrifying bacteria (DB), perchlorate-reducing bacteria (PRB), and sulfate-reducing bacteria (SRB), responded to changes in electron-acceptor loading. UniFrac, principal coordinate analysis (PCoA), and diversity analyses documented that the microbial community of biofilms sampled when the MBfRs had a high acceptor loading were phylogenetically distant from and less diverse than the microbial community of biofilm samples with lower acceptor loadings. Diminished acceptor loading led to SO4(2-) reduction in the lag MBfR, which allowed Desulfovibrionales (an SRB) and Thiothrichales (sulfur-oxidizers) to thrive through S cycling. As a result of this cooperative relationship, they competed effectively with DB/PRB phylotypes such as Xanthomonadales and Rhodobacterales. Thus, pyrosequencing illustrated that while DB, PRB, and SRB responded predictably to changes in acceptor loading, a decrease in total acceptor loading led to important shifts within the "primary" groups, the onset of other members (e.g., Thiothrichales), and overall greater diversity.
Topin, Sylvain; Greau, Claire; Deliere, Ludovic; Hovesepian, Alexandre; Taffary, Thomas; Le Petit, Gilbert; Douysset, Guilhem; Moulin, Christophe
2015-11-01
The SPALAX (Système de Prélèvement Automatique en Ligne avec l'Analyse du Xénon) is one of the systems used in the International Monitoring System of the Comprehensive Nuclear Test Ban Treaty (CTBT) to detect radioactive xenon releases following a nuclear explosion. Approximately 10 years after the industrialization of the first system, the CEA has developed the SPALAX New Generation, SPALAX-NG, with the aim of increasing the global sensitivity and reducing the overall size of the system. A major breakthrough has been obtained by improving the sampling stage and the purification/concentration stage. The sampling stage evolution consists of increasing the sampling capacity and improving the gas treatment efficiency across new permeation membranes, leading to an increase in the xenon production capacity by a factor of 2-3. The purification/concentration stage evolution consists of using a new adsorbent Ag@ZSM-5 (or Ag-PZ2-25) with a much larger xenon retention capacity than activated charcoal, enabling a significant reduction in the overall size of this stage. The energy consumption of the system is similar to that of the current SPALAX system. The SPALAX-NG process is able to produce samples of almost 7 cm(3) of xenon every 12 h, making it the most productive xenon process among the IMS systems. Copyright © 2015 Elsevier Ltd. All rights reserved.
An Efficient, Highly Flexible Multi-Channel Digital Downconverter Architecture
NASA Technical Reports Server (NTRS)
Goodhart, Charles E.; Soriano, Melissa A.; Navarro, Robert; Trinh, Joseph T.; Sigman, Elliott H.
2013-01-01
In this innovation, a digital downconverter has been created that produces a large (16 or greater) number of output channels of smaller bandwidths. Additionally, this design has the flexibility to tune each channel independently to anywhere in the input bandwidth to cover a wide range of output bandwidths (from 32 MHz down to 1 kHz). Both the flexibility in channel frequency selection and the more than four orders of magnitude range in output bandwidths (decimation rates from 32 to 640,000) presented significant challenges to be solved. The solution involved breaking the digital downconversion process into a two-stage process. The first stage is a 2× oversampled filter bank that divides the whole input bandwidth as a real input signal into seven overlapping, contiguous channels represented with complex samples. Using the symmetry of the sine and cosine functions in a similar way to that of an FFT (fast Fourier transform), this downconversion is very efficient and gives seven channels fixed in frequency. An arbitrary number of smaller bandwidth channels can be formed from second-stage downconverters placed after the first stage of downconversion. Because of the overlapping of the first stage, there is no gap in coverage of the entire input bandwidth. The input to any of the second-stage downconverting channels has a multiplexer that chooses one of the seven wideband channels from the first stage. These second-stage downconverters take up fewer resources because they operate at lower bandwidths than doing the entire downconversion process from the input bandwidth for each independent channel. These second-stage downconverters are each independent with fine frequency control tuning, providing extreme flexibility in positioning the center frequency of a downconverted channel.
Finally, these second-stage downconverters have flexible decimation factors over four orders of magnitude. The algorithm was developed to run in an FPGA (field programmable gate array) at input data sampling rates of up to 1,280 MHz. The current implementation takes a 1,280-MHz real input and first breaks it up into seven 160-MHz complex channels, each spaced 80 MHz apart. The eighth channel at baseband was not required for this implementation, which allowed further optimization. Afterwards, 16 second-stage narrowband channels with independently tunable center frequencies and bandwidth settings are implemented. A future implementation in a larger Xilinx FPGA will hold up to 32 independent second-stage channels.
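The second-stage downconverters described above follow the classic mix, filter, decimate pattern: multiply by a complex exponential from a numerically controlled oscillator to shift the chosen center frequency to baseband, low-pass filter to the output bandwidth, then keep every decim-th sample. A simplified single-channel sketch; the windowed-sinc filter here stands in for whatever filter structure the FPGA design actually uses:

```python
import numpy as np

def downconvert(x, fs, f_center, decim, ntaps=129):
    """Mix a signal at f_center down to baseband, low-pass filter, decimate."""
    n = np.arange(len(x))
    nco = np.exp(-2j * np.pi * f_center / fs * n)   # numerically controlled oscillator
    mixed = x * nco                                 # f_center now sits at DC
    # windowed-sinc low-pass with cutoff at half the output sample rate
    m = np.arange(ntaps) - (ntaps - 1) / 2
    taps = np.sinc(m / decim) * np.hamming(ntaps)
    taps /= taps.sum()                              # unity gain at DC
    return np.convolve(mixed, taps, mode="same")[::decim]

fs = 1_280e6                                        # 1,280 MHz input rate, as in the text
tone = np.cos(2 * np.pi * 200e6 * np.arange(4096) / fs)
baseband = downconvert(tone, fs, 200e6, decim=32)   # the tone lands at DC
```

A real cosine at exactly f_center mixes to a DC term of amplitude 0.5 plus an image at -2·f_center, which the low-pass filter removes.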
Design study for a staged Very Large Hadron Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peter J. Limon et al.
Advancing accelerator designs and technology to achieve the highest energies has enabled remarkable discoveries in particle physics. This report presents the results of a design study for a new collider at Fermilab that will create exceptional opportunities for particle physics--a two-stage very large hadron collider. In its first stage, the machine provides a facility for energy-frontier particle physics research, at an affordable cost and on a reasonable time scale. In a second-stage upgrade in the same tunnel, the VLHC offers the possibility of reaching 100 times the collision energy of the Tevatron. The existing Fermilab accelerator complex serves as the injector, and the collision halls are on the Fermilab site. The Stage-1 VLHC reaches a collision energy of 40 TeV and a luminosity comparable to that of the LHC, using robust superferric magnets of elegant simplicity housed in a large-circumference tunnel. The Stage-2 VLHC, constructed after the scientific potential of the first stage has been fully realized, reaches a collision energy of at least 175 TeV with the installation of high-field magnets in the same tunnel. It makes optimal use of the infrastructure developed for the Stage-1 machine, using the Stage-1 accelerator itself as the injector. The goals of this study, commissioned by the Fermilab Director in November 2000, are: to create reasonable designs for the Stage-1 and Stage-2 VLHC in the same tunnel; to discover the technical challenges and potential impediments to building such a facility at Fermilab; to determine the approximate costs of the major elements of the Stage-1 VLHC; and to identify areas requiring significant R and D to establish the basis for the design.
Wyatt, Gwen; Sikorskii, Alla; Rahbar, Mohammad Hossein; Victorson, David; You, Mei
2013-01-01
Purpose/Objectives To evaluate the safety and efficacy of reflexology, a complementary therapy that applies pressure to specific areas of the feet. Design Longitudinal, randomized clinical trial. Setting Thirteen community-based medical oncology clinics across the midwestern United States. Sample A convenience sample of 385 predominantly Caucasian women with advanced-stage breast cancer receiving chemotherapy and/or hormonal therapy. Methods Following the baseline interview, women were randomized into three primary groups: reflexology (n = 95), lay foot manipulation (LFM) (n = 95), or conventional care (n = 96). Two preliminary reflexology (n = 51) and LFM (n = 48) test groups were used to establish the protocols. Participants were interviewed again postintervention at study weeks 5 and 11. Main Research Variables Breast cancer–specific health-related quality of life (HRQOL), physical functioning, and symptoms. Findings No adverse events were reported. A longitudinal comparison revealed significant improvements in physical functioning for the reflexology group compared to the control group (p = 0.04). Severity of dyspnea was reduced in the reflexology group compared to the control group (p < 0.01) and the LFM group (p = 0.02). No differences were found on breast cancer–specific HRQOL, depressive symptomatology, state anxiety, pain, and nausea. Conclusions Reflexology may be added to existing evidence-based supportive care to improve HRQOL for patients with advanced-stage breast cancer during chemotherapy and/or hormonal therapy. Implications for Nursing Reflexology can be recommended for safety and usefulness in relieving dyspnea and enhancing functional status among women with advanced-stage breast cancer. PMID:23107851
Atchison, Christina Joanne; Mulhern, Emma; Kapiga, Saidi; Nsanya, Mussa Kelvin; Crawford, Emily E; Mussa, Mohammed; Bottomley, Christian; Hargreaves, James R; Doyle, Aoife Margaret
2018-05-31
Nigeria, Ethiopia and Tanzania have some of the highest teenage pregnancy rates and lowest rates of modern contraceptive use among adolescents. The transdisciplinary Adolescents 360 (A360) initiative being rolled out across these three countries uses human-centred design to create context-specific multicomponent interventions with the aim of increasing voluntary modern contraceptive use among girls aged 15-19 years. The primary objective of the outcome evaluation is to assess the impact of A360 on the modern contraceptive prevalence rate (mCPR) among sexually active girls aged 15-19 years. A360 targets different subpopulations of adolescent girls in the three countries. In Northern Nigeria and Ethiopia, the study population is married girls aged 15-19 years. In Southern Nigeria, the study population is unmarried girls aged 15-19 years. In Tanzania, both married and unmarried girls aged 15-19 years will be included in the study. In all settings, we will use a pre- and post-intervention population-based cross-sectional survey design. In Nigeria, the study design will also include a comparison group. A one-stage sampling design will be used in Nigeria and Ethiopia. A two-stage sampling design will be used in Tanzania. Questionnaires will be administered face-to-face by female interviewers aged between 18 and 26 years. Study outcomes will be assessed before the start of A360 implementation in late 2017 and approximately 24 months after implementation in late 2019. Findings of this study will be widely disseminated through workshops, conference presentations, reports, briefings, factsheets and academic publications. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Cryogenic Liquid Sample Acquisition System for Remote Space Applications
NASA Technical Reports Server (NTRS)
Mahaffy, Paul; Trainer, Melissa; Wegel, Don; Hawk, Douglas; Melek, Tony; Johnson, Christopher; Amato, Michael; Galloway, John
2013-01-01
There is a need to acquire autonomously cryogenic hydrocarbon liquid sample from remote planetary locations such as the lakes of Titan for instruments such as mass spectrometers. There are several problems that had to be solved relative to collecting the right amount of cryogenic liquid sample into a warmer spacecraft, such as not allowing the sample to boil off or fractionate too early; controlling the intermediate and final pressures within carefully designed volumes; designing for various particulates and viscosities; designing to thermal, mass, and power-limited spacecraft interfaces; and reducing risk. Prior art inlets for similar instruments in spaceflight were designed primarily for atmospheric gas sampling and are not useful for this front-end application. These cryogenic liquid sample acquisition system designs for remote space applications allow for remote, autonomous, controlled sample collections of a range of challenging cryogenic sample types. The design can control the size of the sample, prevent fractionation, control pressures at various stages, and allow for various liquid sample levels. It is capable of collecting repeated samples autonomously in difficult low-temperature conditions often found in planetary missions. It is capable of collecting samples for use by instruments from difficult sample types such as cryogenic hydrocarbon (methane, ethane, and propane) mixtures with solid particulates such as found on Titan. The design with a warm actuated valve is compatible with various spacecraft thermal and structural interfaces. The design uses controlled volumes, heaters, inlet and vent tubes, a cryogenic valve seat, inlet screens, temperature and cryogenic liquid sensors, seals, and vents to accomplish its task.
NASA Astrophysics Data System (ADS)
Miller, David P.; Bonaccorsi, Rosalba; Davis, Kiel
2008-10-01
Mars Astrobiology Research and Technology Experiment (MARTE) investigators used an automated drill and sample processing hardware to detect and categorize life-forms found in subsurface rock at Río Tinto, Spain. For the science to be successful, it was necessary for the biomass from other sources -- whether from previously processed samples (cross contamination) or the terrestrial environment (forward contamination) -- to be insignificant. The hardware and practices used in MARTE were designed around this problem. Here, we describe some of the design issues that were faced and classify them into problems that are unique to terrestrial tests versus problems that would also exist for a system that was flown to Mars. Assessment of the biomass at various stages in the sample handling process revealed mixed results; the instrument design seemed to minimize cross contamination, but contamination from the surrounding environment sometimes made its way onto the surface of samples. Techniques used during the MARTE Río Tinto project, such as facing the sample, appear to remove this environmental contamination without introducing significant cross contamination from previous samples.
2013-01-01
Background Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725
Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello
2013-10-26
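The beta-binomial machinery behind the C-LQAS design described above can be illustrated with a minimal sketch. This is not the authors' code: the function names, the moment-matched parameterization of the beta distribution, and the example decision rule are illustrative assumptions.

```python
from math import comb, exp, lgamma

def beta_binomial_pmf(k, n, a, b):
    """P(K = k) for K ~ BetaBinomial(n, a, b), via log-gamma for stability."""
    log_beta = lambda x, y: lgamma(x) + lgamma(y) - lgamma(x + y)
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

def misclassification_risk(n, d, p, rho):
    """Probability of observing fewer than d acceptable units out of n when
    the true proportion is p and clustering induces intra-cluster
    correlation rho (moment-matched beta parameters)."""
    a = p * (1.0 - rho) / rho
    b = (1.0 - p) * (1.0 - rho) / rho
    return sum(beta_binomial_pmf(k, n, a, b) for k in range(d))
```

As rho approaches zero the beta-binomial collapses to the binomial of traditional LQAS; larger rho fattens the tails, which is why cluster sampling inflates the misclassification risk for a fixed sample size and decision rule.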
Epoch-based Entropy for Early Screening of Alzheimer's Disease.
Houmani, N; Dreyfus, G; Vialatte, F B
2015-12-01
In this paper, we introduce a novel entropy measure, termed epoch-based entropy. This measure quantifies the disorder of EEG signals at both the time and spatial levels, using local density estimation by a Hidden Markov Model on inter-channel stationary epochs. The investigation was conducted on a multi-centric EEG database recorded from patients at an early stage of Alzheimer's disease (AD) and age-matched healthy subjects. We investigate the classification performance of this method, its robustness to noise, and its sensitivity to sampling frequency and to variations of hyperparameters. The measure is compared to two alternative complexity measures, Shannon's entropy and correlation dimension. The classification accuracies for the discrimination of AD patients from healthy subjects were estimated using a linear classifier designed on a development dataset, and subsequently tested on an independent test set. Epoch-based entropy reached a classification accuracy of 83% on the test dataset (specificity = 83.3%, sensitivity = 82.3%), outperforming the two other complexity measures. Furthermore, it was shown to be more stable under hyperparameter variations, and less sensitive to noise and sampling frequency disturbances, than the other two complexity measures.
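The epoch-based measure itself relies on HMM density estimation and is not reproduced here; as a point of reference, the baseline Shannon's entropy it is compared against can be sketched as a histogram estimate. The binning scheme and names are illustrative assumptions, not the paper's implementation.

```python
from math import log2

def shannon_entropy(signal, n_bins=16):
    """Histogram-based Shannon entropy of a 1-D signal, in bits."""
    lo, hi = min(signal), max(signal)
    if hi == lo:
        return 0.0  # constant signal carries no amplitude disorder
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in signal:
        idx = min(int((x - lo) / width), n_bins - 1)  # clamp top edge
        counts[idx] += 1
    n = len(signal)
    return -sum((c / n) * log2(c / n) for c in counts if c)
```

A flat amplitude histogram yields the maximum log2(n_bins) bits, while a degenerate signal yields zero; entropy-based AD screening exploits the reduced signal complexity reported in patients' EEG.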
Verification and Validation of Digitally Upgraded Control Rooms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boring, Ronald; Lau, Nathan
2015-09-01
As nuclear power plants undertake main control room modernization, a challenge is the lack of a clearly defined human factors process to follow. Verification and validation (V&V) as applied in the nuclear power community has tended to involve efforts such as integrated system validation, which comes at the tail end of the design stage. To fill in guidance gaps and create a step-by-step process for control room modernization, we have developed the Guideline for Operational Nuclear Usability and Knowledge Elicitation (GONUKE). This approach builds on best practices in the software industry, which prescribe an iterative user-centered approach featuring multiple cycles of design and evaluation. Nuclear regulatory guidance for control room design emphasizes summative evaluation, which occurs after the design is complete. In the GONUKE approach, evaluation is also performed at the formative stage of design, early in the design cycle using mockups and prototypes for evaluation. The evaluation may involve expert review (e.g., software heuristic evaluation at the formative stage and design verification against human factors standards like NUREG-0700 at the summative stage). The evaluation may also involve user testing (e.g., usability testing at the formative stage and integrated system validation at the summative stage). An additional, often overlooked component of evaluation is knowledge elicitation, which captures operator insights into the system. In this report we outline these evaluation types across design phases that support the overall modernization process. The objective is to provide industry-suitable guidance for steps to be taken in support of the design and evaluation of a new human-machine interface (HMI) in the control room. We suggest the value of early-stage V&V and highlight how this early-stage V&V can help improve the design process for control room modernization. We argue that there is a need to overcome two shortcomings of V&V in current practice: the propensity for late-stage V&V and the use of increasingly complex psychological assessment measures for V&V.
Orbit on demand - Will cost determine best design?
NASA Technical Reports Server (NTRS)
Macconochie, J. O.; Mackley, E. A.; Morris, S. J.; Phillips, W. P.; Breiner, C. A.; Scotti, S. J.
1985-01-01
Eleven design concepts for vertical (V) and horizontal (H) take-off launch-on-demand manned orbital vehicles are discussed. Attention is given to concepts with up to three stages, staging Mach numbers (subsonic, 2, or 3), expendable boosters, drop tanks, and storable or cryogenic fuels. All the concepts feature lifting bodies with circular cross-sections, and most have a 7 ft diameter, 15 ft long payload bay as well as a crew compartment. Expendable elements impose higher costs and in some cases reduce all-azimuth launch capabilities. Single-stage vehicles simplify the logistics, whether in H or V configuration. A two-stage H vehicle offers launch offset for the desired orbital plane before firing the rocket engines, after take-off and subsonic acceleration. A two-stage fully reusable V form has the second lowest weight of the vehicles studied and an all-azimuth launch capability. Better definition of the prospective mission requirements is needed before choosing among the alternatives.
Narváez, Lola; Cunill, Conrad; Cáceres, Rafaela; Marfà, Oriol
2011-06-01
Nursery leachates usually contain high concentrations of nitrates, phosphorus and potassium, so discharging them into the environment often causes pollution. Single-stage or two-stage horizontal subsurface flow constructed wetlands (HSSCW) filled with different substrates were designed to evaluate the effect and evolution over time of the removal of nitrogen and other nutrients contained in nursery leachates. The addition of sodium acetate to achieve a C:NO(3)(-)-N ratio of 3:1 was sufficient to reach complete denitrification in all HSSCW. The removal rate of nitrate was high throughout the operation period (over 98%). Nevertheless, the removal rate of ammonium decreased about halfway through the operation. Removal of the COD was enhanced by the use of two-stage HSSCW. In general, the substrates and the number of stages of the wetlands did not affect the removal of nitrogen, total phosphorus and potassium. Copyright © 2011 Elsevier Ltd. All rights reserved.
Super Boiler: Packed Media/Transport Membrane Boiler Development and Demonstration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liss, William E; Cygan, David F
2013-04-17
Gas Technology Institute (GTI) and Cleaver-Brooks developed a new gas-fired steam generation system, the Super Boiler, for increased energy efficiency, reduced equipment size, and reduced emissions. The system consists of a firetube boiler with a unique staged furnace design, a two-stage burner system with engineered internal recirculation and inter-stage cooling integral to the boiler, a unique convective pass design with extended internal surfaces for enhanced heat transfer, and a novel integrated heat recovery system to extract maximum energy from the flue gas. With these combined innovations, the Super Boiler technical goals were set at 94% HHV fuel efficiency, operation on natural gas with <5 ppmv NOx (referenced to 3% O2), and 50% smaller size than conventional boilers of similar steam output. To demonstrate these technical goals, the project culminated in the industrial demonstration of this new high-efficiency technology on a 300 HP boiler at Clement Pappas, a juice bottler located in Ontario, California. The Super Boiler combustion system is based on two-stage combustion, which combines air staging, internal flue gas recirculation, inter-stage cooling, and unique fuel-air mixing technology to achieve low emissions, rather than the external flue gas recirculation that is most commonly used today. The two-stage combustion provides lower emissions because the integrated design of the boiler and combustion system permits precise control of peak flame temperatures in both primary and secondary stages of combustion. To reduce equipment size, the Super Boiler's dual furnace design increases radiant heat transfer to the furnace walls, allowing shorter overall furnace length, and also employs convective tubes with extended surfaces that increase heat transfer by up to 18-fold compared to conventional bare tubes. In this way, a two-pass boiler can achieve the same efficiency as a traditional three- or four-pass firetube boiler design.
The Super Boiler is consequently up to 50% smaller in footprint, has a smaller diameter, and is up to 50% lower in weight, resulting in a very compact design with reduced material and labor costs, while requiring less boiler room floor space. For enhanced energy efficiency, the heat recovery system uses a transport membrane condenser (TMC), a humidifying air heater (HAH), and a split-stage economizer to extract maximum energy from the flue gas. The TMC is a new innovation that pulls a major portion of the water vapor produced by the combustion process from the flue gases, along with its sensible and latent heat. This results in nearly 100% transfer of heat to the boiler feed water. The HAH improves the effectiveness of the TMC, particularly in steam systems that do not have a large amount of cold makeup water. In addition, the HAH humidifies the combustion air to reduce NOx formation. The split-stage economizer preheats boiler feed water in the same way as a conventional economizer, but extracts more heat by working in tandem with the TMC and HAH to reduce flue gas temperature. These components are designed to work synergistically to achieve energy efficiencies of 92-94%, which is 10-15% higher than today's typical firetube boilers.
Balanced VS Imbalanced Training Data: Classifying Rapideye Data with Support Vector Machines
NASA Astrophysics Data System (ADS)
Ustuner, M.; Sanli, F. B.; Abdikan, S.
2016-06-01
The accuracy of supervised image classification is highly dependent upon several factors, such as the design of the training set (sample selection, composition, purity and size), resolution of the input imagery, and landscape heterogeneity. The design of the training set is still a challenging issue, since the sensitivity of the classifier algorithm at the learning stage differs for the same dataset. In this paper, the classification of RapidEye imagery with balanced and imbalanced training data for mapping crop types was addressed. Classification with imbalanced training data may result in low accuracy in some scenarios. Support Vector Machine (SVM), Maximum Likelihood (ML) and Artificial Neural Network (ANN) classifications were implemented here to classify the data. For evaluating the influence of balanced and imbalanced training data on image classification algorithms, three different training datasets were created. Two balanced datasets, with 70 and 100 pixels for each class of interest, and one imbalanced dataset, in which each class has a different number of pixels, were used in the classification stage. Results demonstrate that the ML and ANN classifications are affected by imbalanced training data, resulting in a reduction in accuracy (from 90.94% to 85.94% for ML and from 91.56% to 88.44% for ANN), while SVM is not significantly affected (from 94.38% to 94.69%) and even slightly improved. Our results highlight that SVM proves to be a very robust, consistent and effective classifier, as it can perform very well under both balanced and imbalanced training data situations. Furthermore, the training stage should be precisely and carefully designed for the needs of the adopted classifier.
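The figures quoted above are overall accuracies, which can mask per-class effects of imbalance: a classifier can keep high overall accuracy while the minority class degrades badly. A small sketch with hypothetical confusion-matrix counts (not the study's data) illustrates the distinction.

```python
def overall_accuracy(cm):
    """Overall accuracy from a square confusion matrix cm[true][pred]."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

def per_class_recall(cm):
    """Producer's accuracy (recall) for each class, row-wise."""
    return [cm[i][i] / sum(cm[i]) for i in range(len(cm))]

# Hypothetical counts: the minority class (second row) is mostly missed,
# yet overall accuracy still looks respectable.
cm = [[95, 5],
      [12, 8]]
```

Here `overall_accuracy(cm)` is about 0.86 even though the minority-class recall is only 0.4, which is why per-class measures matter when assessing imbalanced training designs.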
Changes in resistant starch from two banana cultivars during postharvest storage.
Wang, Juan; Tang, Xue Juan; Chen, Ping Sheng; Huang, Hui Hua
2014-08-01
Banana resistant starch samples were extracted and isolated from two banana cultivars (Musa AAA group, Cavendish subgroup and Musa ABB group, Pisang Awak subgroup) at seven ripening stages during postharvest storage. The structures of the resistant starch samples were analysed by light microscopy, polarising microscopy, scanning electron microscopy, X-ray diffraction, and infrared spectroscopy. Physicochemical properties (e.g., water-holding capacity, solubility, swelling power, transparency, starch-iodine absorption spectrum, and Brabender microviscoamylograph profile) were determined. The results revealed significant differences in microstructure and physicochemical characteristics among the banana resistant starch samples during different ripening stages. The results of this study provide valuable information for the potential applications of banana resistant starches. Copyright © 2014 Elsevier Ltd. All rights reserved.
Wu, Fei; Sioshansi, Ramteen
2017-05-25
Electric vehicles (EVs) hold promise to improve the energy efficiency and environmental impacts of transportation. However, widespread EV use can impose significant stress on electricity-distribution systems due to their added charging loads. This paper proposes a centralized EV charging-control model, which schedules the charging of EVs that have flexibility. This flexibility stems from EVs that are parked at the charging station for a longer duration of time than is needed to fully recharge the battery. The model is formulated as a two-stage stochastic optimization problem. The model captures the use of distributed energy resources and uncertainties around EV arrival times and charging demands upon arrival, non-EV loads on the distribution system, energy prices, and availability of energy from the distributed energy resources. We use a Monte Carlo-based sample-average approximation technique and an L-shaped method to solve the resulting optimization problem efficiently. We also apply a sequential sampling technique to dynamically determine the optimal size of the randomly sampled scenario tree to give a solution with a desired quality at minimal computational cost. Here, we demonstrate the use of our model on a Central-Ohio-based case study. We show the benefits of the model in reducing charging costs, negative impacts on the distribution system, and unserved EV-charging demand compared to simpler heuristics. Lastly, we also conduct sensitivity analyses, to show how the model performs and the resulting costs and load profiles when the design of the station or EV-usage parameters are changed.
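The full model uses an L-shaped decomposition, which is beyond a short example, but the sample-average-approximation idea can be sketched on a hypothetical single-station toy problem. All prices, demand distributions, and function names are assumptions, and a grid search stands in for the L-shaped method.

```python
import random

def saa_cost(x, demands, day_ahead_price=0.10, spot_price=0.25):
    """Sample-average cost of committing x kWh day-ahead (first stage):
    commitment cost plus average spot-market recourse for shortfalls."""
    recourse = sum(max(d - x, 0.0) * spot_price for d in demands) / len(demands)
    return day_ahead_price * x + recourse

def solve_saa(demands, step=1.0):
    """Grid-search the first-stage decision over the sampled scenario set."""
    candidates = [i * step for i in range(int(max(demands) / step) + 2)]
    return min(candidates, key=lambda x: saa_cost(x, demands))

random.seed(0)
demands = [random.uniform(20, 60) for _ in range(500)]  # sampled charging demand, kWh
x_star = solve_saa(demands)
```

By newsvendor logic, the optimal commitment sits near the (1 - day_ahead_price/spot_price) quantile of the sampled demand, which shows how the scenario sample drives the first-stage decision.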
ERIC Educational Resources Information Center
Lin, Kuen-Yi; Williams, P. John
2017-01-01
This paper discusses the implementation of a two-stage hands-on technology learning activity, based on Dewey's theory of learning experience, that is designed to enhance preservice teachers' primary and secondary experiences in developing their competency to solve hands-on problems that apply science and mathematics concepts. The major conclusions…
Polymorphisms in inflammation pathway genes and endometrial cancer risk
Delahanty, Ryan J.; Xiang, Yong-Bing; Spurdle, Amanda; Beeghly-Fadiel, Alicia; Long, Jirong; Thompson, Deborah; Tomlinson, Ian; Yu, Herbert; Lambrechts, Diether; Dörk, Thilo; Goodman, Marc T.; Zheng, Ying; Salvesen, Helga B.; Bao, Ping-Ping; Amant, Frederic; Beckmann, Matthias W.; Coenegrachts, Lieve; Coosemans, An; Dubrowinskaja, Natalia; Dunning, Alison; Runnebaum, Ingo B.; Easton, Douglas; Ekici, Arif B.; Fasching, Peter A.; Halle, Mari K.; Hein, Alexander; Howarth, Kimberly; Gorman, Maggie; Kaydarova, Dylyara; Krakstad, Camilla; Lose, Felicity; Lu, Lingeng; Lurie, Galina; O’Mara, Tracy; Matsuno, Rayna K.; Pharoah, Paul; Risch, Harvey; Corssen, Madeleine; Trovik, Jone; Turmanov, Nurzhan; Wen, Wanqing; Lu, Wei; Cai, Qiuyin; Zheng, Wei; Shu, Xiao-Ou
2013-01-01
Background Experimental and epidemiological evidence have suggested that chronic inflammation may play a critical role in endometrial carcinogenesis. Methods To investigate this hypothesis, a two-stage study was carried out to evaluate single nucleotide polymorphisms (SNPs) in inflammatory pathway genes in association with endometrial cancer risk. In stage 1, 64 candidate pathway genes were identified and 4,542 directly genotyped or imputed SNPs were analyzed among 832 endometrial cancer cases and 2,049 controls, using data from the Shanghai Endometrial Cancer Genetics Study. Linkage disequilibrium of stage 1 SNPs significantly associated with endometrial cancer (P<0.05) indicated that the majority of associations could be linked to one of 24 distinct loci. One SNP from each of the 24 loci was then selected for follow-up genotyping. Of these, 21 SNPs were successfully designed and genotyped in stage 2, which consisted of ten additional studies including 6,604 endometrial cancer cases and 8,511 controls. Results Five of the 21 SNPs had significant allelic odds ratios and 95% confidence intervals as follows: FABP1, 0.92 (0.85-0.99); CXCL3, 1.16 (1.05-1.29); IL6, 1.08 (1.00-1.17); MSR1, 0.90 (0.82-0.98); and MMP9, 0.91 (0.87-0.97). Two of these polymorphisms were independently significant in the replication sample (rs352038 in CXCL3 and rs3918249 in MMP9). The association for the MMP9 polymorphism remained significant after Bonferroni correction and showed a significant association with endometrial cancer in both Asian- and European-ancestry samples. Conclusions These findings lend support to the hypothesis that genetic polymorphisms in genes involved in the inflammatory pathway may contribute to genetic susceptibility to endometrial cancer. Impact Statement This study adds to the growing evidence that inflammation plays an important role in endometrial carcinogenesis. PMID:23221126
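Allelic odds ratios with 95% confidence intervals of the kind reported above are computed from 2x2 allele-count tables. A minimal sketch using Woolf's log-OR interval follows; the counts in the test are hypothetical, not study data.

```python
from math import exp, log, sqrt

def allelic_odds_ratio(case_alt, case_ref, ctrl_alt, ctrl_ref):
    """Allelic odds ratio with Woolf's 95% CI from 2x2 allele counts:
    rows = cases/controls, columns = alternate/reference alleles."""
    or_ = (case_alt * ctrl_ref) / (case_ref * ctrl_alt)
    # Standard error of log(OR) from the four cell counts
    se = sqrt(1 / case_alt + 1 / case_ref + 1 / ctrl_alt + 1 / ctrl_ref)
    lo, hi = exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)
    return or_, lo, hi
```

An interval excluding 1.0, as for the MMP9 variant above, is what "significant allelic odds ratio" denotes; multiple testing then motivates the Bonferroni correction the authors apply.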
NASA Technical Reports Server (NTRS)
1987-01-01
The Advanced Space Design project for 1986-87 was the design of a two-stage launch vehicle, representing a second-generation space transportation system (STS) which will be needed to support the space station. The first stage is an unmanned winged booster which is fully reusable with a fly-back capability. It has jet engines so that it can fly back to the landing site. This adds safety as well as the flexibility to choose alternate landing sites. There are two different second stages. One of the second stages is a manned advanced space shuttle called Space Shuttle II. Space Shuttle II has a payload capability of delivering 40,000 pounds to the space station in low Earth orbit (LEO), and returning 40,000 pounds to Earth. Servicing the space station makes the ability to return a heavy payload to Earth as important as being able to launch one. The other second stage is an unmanned heavy-lift cargo vehicle with the ability to deliver 150,000 pounds of payload to LEO. This vehicle will not return to Earth; however, the engines and electronics can be removed and returned to Earth in the Space Shuttle II. The rest of the vehicle can then be used on orbit for storage of raw materials, supplies, and space-manufactured items awaiting transport back to Earth.
Wang, H; Chen, D; Yuan, G; Ma, X; Dai, X
2013-02-01
In this work, the morphological characteristics of waste polyethylene (PE)/polypropylene (PP) plastics during their pyrolysis process were investigated, and based on their basic image-changing patterns, representative morphological signals describing the pyrolysis stages were obtained. PE and PP granules and films were used as typical plastics for testing, and the influence of impurities was also investigated. During pyrolysis experiments, photographs of the testing samples were taken sequentially with a high-speed infrared camera, and quantitative parameters describing the morphological characteristics of these photographs were explored using the "Image Pro Plus (v6.3)" digital image processing software. The experimental results showed that plastics pyrolysis involves four stages: melting; two stages of decomposition, characterized by bubble formation caused by evaporating volatiles; and ash deposition. Each stage was characterized by its own phase-changing behaviors and morphological features. The two decomposition stages are the key steps of pyrolysis, since they took up half or more of the reaction time; the melting step consumed the other half of the reaction time in experiments where raw materials were heated up from ambient temperature; and coke-like deposition appeared once decomposition was complete. Two morphological signals defined from digital image processing, namely the pixel area of the reaction region of interest and the bubble ratio (BR) caused by volatile evaporation, were found to change regularly with the pyrolysis stages. In particular, for all experimental scenarios with plastic films and granules, the BR curves always exhibited a slow drop as melting started, then a sharp increase followed by a steep decrease corresponding to the first stage of intense decomposition, after which a second increase-drop section corresponding to the second stage of decomposition appeared.
As ash deposition occurred, the BR dropped to zero or very low values. When impurities were involved, the shape of the BR curves showed that intense decomposition started earlier, but the morphological characteristics remained the same. In addition, compared to parameters such as pressure, the BR reflects the reaction stages better, and its change over the pyrolysis process of PE/PP plastics, with or without impurities, was more intrinsically correlated with the process; it can therefore be adopted as a signal for pyrolysis process characterization, as well as offering guidance for process improvement and reactor design. Copyright © 2012 Elsevier Ltd. All rights reserved.
Heggendorn, Fabiano Luiz; Gonçalves, Lucio Souza; Dias, Eliane Pedra; de Oliveira Freitas Lione, Viviane; Lutterbach, Márcia Teresa Soares
2015-08-01
This study assessed the biocorrosive capacity of two bacteria, Desulfovibrio desulfuricans and Desulfovibrio fairfieldensis, on endodontic files, as a preliminary step in the development of a biopharmaceutical to facilitate the removal of endodontic file fragments from root canals. In the first stage, the corrosive potential of the artificial saliva medium (ASM), modified Postgate E medium (MPEM), 2.5% sodium hypochlorite (NaOCl) solution and white medium (WM), without the inoculation of bacteria, was assessed by immersion assays. In the second stage, test samples were inoculated with the two species of sulphate-reducing bacteria (SRB) in ASM and modified artificial saliva medium (MASM). In the third stage, test samples were inoculated with the same species in MPEM, ASM and MASM. All test samples were viewed under an infinite focus Alicona microscope. No test sample became corroded when immersed only in the media, without bacteria. With the exception of one test sample among those inoculated with bacteria in ASM and MASM, there was no evidence of corrosion. Fifty percent of the test samples demonstrated a greater intensity of biocorrosion when compared with the initial assays. Desulfovibrio desulfuricans and D. fairfieldensis are capable of promoting biocorrosion of the steel constituent of endodontic files. This study describes the initial development of a biopharmaceutical to facilitate the removal of endodontic file fragments from root canals, which could be applied in endodontic therapy to avoid parendodontic surgery or even tooth loss.
Ang, Y K; Mirnalini, K; Zalilah, M S
2013-04-01
The use of email and website as channels for workplace health information delivery is not fully explored. This study aims to describe the rationale, design, and baseline findings of an email-linked website intervention to improve modifiable cancer risk factors. Employees of a Malaysian public university were recruited by systematic random sampling and randomised into an intervention (n = 174) or control group (n = 165). A website was developed for the intervention and educational modules were uploaded onto the website. The intervention group received ten consecutive weekly emails with hypertext links to the website for downloading the modules and two individual phone calls as motivational support whilst the control group received none. Diet, lifestyle, anthropometric measurements, psychosocial factors and stages of change related to dietary fat, fruit and vegetable intake, and physical activity were assessed. Participants were predominantly female and in non-academic positions. Obesity was prevalent in 15% and 37% were at risk of co-morbidities. Mean intake of fats was 31%, fruit was -1 serving/day and vegetable was < 1 serving/day. Less than 20% smoked and drank alcohol and about 40% were physically inactive. The majority of the participants fell into the Preparation stage for decreasing fat intake, eating more fruit and vegetables, and increasing physical activity. Self-efficacy and perceived benefits were lowest among participants in the Precontemplation/Contemplation stage compared to the Preparation and Action/Maintenance stages. Baseline data show that dietary and lifestyle practices among the employees did not meet the international guidelines for cancer prevention. Hence the findings warrant the intervention planned.
Lämmle, K; Schwarz, A; Wiesendanger, R
2010-05-01
Here, we present a very small evaporator unit suitable for depositing molecules onto a sample in a cryogenic environment. It can be transported in an ultrahigh vacuum system and loaded into Omicron-type cantilever stages. Thus, molecule deposition inside a low temperature force microscope is possible. The design features an insulating base plate with two embedded electrical contacts and a crucible with low power consumption, which is thermally well isolated from its surroundings. The current is supplied via a removable power clip. Details of the manufacturing process, as well as the materials used, are described. Finally, the performance of the whole setup is demonstrated.
Gender differences in clinical status at time of coronary revascularisation in Spain
Aguilar, M; Lazaro, P; Fitch, K; Luengo, S
2002-01-01
Design: Retrospective study of clinical records. Two-stage stratified cluster sampling was used to select a nationally representative sample of patients receiving a coronary revascularisation procedure in 1997. Setting: All of Spain. Main outcome measures: Odds ratios (OR) in men and women for different clinical and diagnostic variables related to coronary disease. A logistic regression model was developed to estimate the association between coronary symptoms and gender. Results: In the univariate analysis, the prevalence of the following risk factors for coronary heart disease was higher in women than in men: obesity (OR=1.8), hypertension (OR=2.9) and diabetes (OR=2.1). High surgical risk was also more prevalent among women (OR=2.6). In the logistic regression analysis, women's risk of being symptomatic at the time of revascularisation was more than double that of men (OR=2.4). Conclusions: Women have more severe coronary symptoms at the time of coronary revascularisation than do men. These results suggest that women receive revascularisation at a more advanced stage of coronary disease. Further research is needed to clarify what social, cultural or biological factors may be implicated in the gender differences observed. PMID:12080167
May one-stage exchange for Candida albicans peri-prosthetic infection be successful?
Jenny, J-Y; Goukodadja, O; Boeri, C; Gaudias, J
2016-02-01
Fungal infection of a total joint arthroplasty has a low incidence but is generally considered more difficult to cure than bacterial infection. As for bacterial infection, two-stage exchange is considered the gold standard of treatment. We report two cases of one-stage total joint exchange for fungal peri-prosthetic infection with Candida albicans, where the responsible pathogen was only identified on intraoperative samples. This situation can be regarded as one-stage exchange for fungal peri-prosthetic infection without preoperative identification of the responsible organism, which is considered to carry a poor prognosis. Both cases were free of infection after two years. One-stage revision has several potential advantages over two-stage revision, including shorter hospital stay and rehabilitation, no interim period with significant functional impairment, shorter antibiotic treatment, better functional outcome and probably lower costs. We suggest that one-stage revision for C. albicans peri-prosthetic infection may be successful even without preoperative fungal identification. Level IV-Historical cases. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Lee, ChaBum; Lee, Sun-Kyu; Tarbutton, Joshua A
2014-09-01
This paper presents a novel design and sensitivity analysis of a knife edge-based optical displacement sensor that can be embedded in nanopositioning stages. The measurement system consists of a laser, two knife edge locations, two photodetectors, and auxiliary optical components in a simple configuration. The knife edge is installed on the stage parallel to its moving direction, and two separate laser beams are incident on the knife edges. While the stage is in motion, the directly transmitted and diffracted light at each knife edge is superposed, producing interference at the detector. The interference is measured with two photodetectors in a differential amplification configuration. The performance of the proposed sensor was mathematically modeled, and the effect of the optical and mechanical parameters (wavelength, beam diameter, distances from laser to knife edge to photodetector, and knife edge topography) on the sensor outputs was investigated to obtain a novel analytical method for predicting linearity and sensitivity. From the model, all parameters except the beam diameter have a significant influence on the measurement range and sensitivity of the proposed sensing system. To validate the model, two types of knife edges with different edge topography were used in the experiment. By utilizing a shorter wavelength, a smaller sensor distance and higher edge quality, increased measurement sensitivity can be obtained. The model was experimentally validated and the results showed good agreement with the theoretical estimates. This sensor is expected to be easily implemented in nanopositioning stage applications at low cost, and the mathematical model introduced here can be used for design and performance estimation of knife edge-based sensors.
Tiberti, Natalia; Hainard, Alexandre; Lejon, Veerle; Robin, Xavier; Ngoyi, Dieudonné Mumba; Turck, Natacha; Matovu, Enock; Enyaru, John; Ndung'u, Joseph Mathu; Scherl, Alexander; Dayon, Loïc; Sanchez, Jean-Charles
2010-01-01
Human African trypanosomiasis, or sleeping sickness, is a parasitic disease endemic in sub-Saharan Africa, transmitted to humans through the bite of a tsetse fly. The first or hemolymphatic stage (S1) of the disease is associated with the presence of parasites in the bloodstream, lymphatic system, and body tissues. If patients are left untreated, parasites cross the blood-brain barrier and invade the cerebrospinal fluid and the brain parenchyma, giving rise to the second or meningoencephalitic stage (S2). Stage determination is a crucial step in guiding the choice of treatment, as the drugs used for S2 are potentially dangerous. Current staging methods, based on counting white blood cells and demonstrating trypanosomes in cerebrospinal fluid, lack specificity and/or sensitivity. In the present study, we used several proteomic strategies to discover new markers with potential for staging human African trypanosomiasis. Cerebrospinal fluid (CSF) samples were collected from patients infected with Trypanosoma brucei gambiense in the Democratic Republic of Congo. The stage was determined following the guidelines of the national control program. The proteome of the samples was analyzed by two-dimensional gel electrophoresis (n = 9), and by sixplex tandem mass tag (TMT) isobaric labeling (n = 6) quantitative mass spectrometry. Overall, 73 proteins were overexpressed in patients presenting the second stage of the disease. Two of these, osteopontin and β-2-microglobulin, were confirmed to be potential markers for staging human African trypanosomiasis (HAT) by Western blot and ELISA. The two proteins significantly discriminated between S1 and S2 patients with high sensitivity (68% and 78%, respectively) for 100% specificity, and a combination of both improved the sensitivity to 91%.
The levels of osteopontin and β-2-microglobulin in the CSF of S2 patients (in the μg/ml range), as well as the fold increase in concentration in S2 compared with S1 (3.8- and 5.5-fold, respectively), make the two markers good candidates for the development of a staging test for HAT patients. PMID:20724469
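The staging logic described above (a marker cutoff chosen for 100% specificity, plus an either-marker combination rule to raise sensitivity) can be sketched as follows. The concentration values are invented for illustration and are not the study's data.

```python
def sensitivity_at_full_specificity(s1_values, s2_values):
    """Choose the cutoff as the highest S1 (control) value, so no S1
    sample is called positive (100% specificity), then report the
    fraction of S2 samples exceeding it (the sensitivity)."""
    cutoff = max(s1_values)
    sens = sum(1 for v in s2_values if v > cutoff) / len(s2_values)
    return cutoff, sens

def combined_positive(marker_a, marker_b, cut_a, cut_b):
    """Either-marker rule: call a patient S2 if either marker exceeds
    its own cutoff -- this is how combining markers can raise sensitivity."""
    return marker_a > cut_a or marker_b > cut_b

# Hypothetical CSF concentrations (ug/ml); not the study's data
s1 = [0.4, 0.6, 0.5, 0.7]
s2 = [0.9, 1.2, 0.65, 2.0, 3.1]
cutoff, sens = sensitivity_at_full_specificity(s1, s2)
```

With these toy numbers the cutoff is the largest S1 value, and a patient missed by one marker can still be caught by the other under the combination rule.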
From Paper to Production: An Update on NASA's Upper Stage Engine for Exploration
NASA Technical Reports Server (NTRS)
Kynard, Mike
2010-01-01
In 2006, NASA selected an evolved variant of the proven Saturn/Apollo J-2 upper stage engine to power the Ares I crew launch vehicle upper stage and the Ares V cargo launch vehicle Earth departure stage (EDS) for the Constellation Program. Any design changes needed by the new engine would be based where possible on proven hardware from the Space Shuttle, commercial launchers, and other programs. In addition to meeting the thrust and efficiency requirements of the Constellation reference missions, the engine would be an order of magnitude safer than past engines. These goals required the J-2X government/industry team to develop the highest-performance engine of its type in history, for use in two vehicles with two different missions. In pursuing these goals over the past five years, the Upper Stage Engine team has made significant progress, successfully passing System Requirements Review (SRR), System Design Review (SDR), Preliminary Design Review (PDR), and Critical Design Review (CDR). As of spring 2010, more than 100,000 experimental and development engine parts have been completed or are in various stages of manufacture. Approximately 1,300 of more than 1,600 engine drawings have been released for manufacturing. This progress has been due to a combination of factors: the heritage hardware starting point, advanced computer analysis, and early heritage and development component testing to understand performance, validate computer modeling, and inform design trades. This work will increase the odds of success as the engine team prepares for powerpack and development engine hot fire testing in calendar 2011. This paper will provide an overview of the engine development program and progress to date.
Piira, Anu; van Walsem, Marleen R.; Mikalsen, Geir; Øie, Lars; Frich, Jan C.; Knutsen, Synnove
2014-01-01
Objective: To assess the effects of a two-year intensive, multidisciplinary rehabilitation program for patients with early- to mid-stage Huntington’s disease. Design: A prospective intervention study. Setting: One inpatient rehabilitation center in Norway. Subjects: 10 patients with early- to mid-stage Huntington’s disease. Interventions: A two-year rehabilitation program, consisting of six admissions of three weeks each, and two evaluation stays approximately three months after the third and sixth rehabilitation admissions. The program focused on physical exercise, social activities, and group/teaching sessions. Main outcome measures: Standard measures of motor function, including gait and balance; cognitive function, including the MMSE and UHDRS cognitive assessment; anxiety and depression; activities of daily living (ADL); health-related quality of life (QoL); and Body Mass Index (BMI). Results: Six out of ten patients completed the full program. A slight but non-significant decline was observed in gait and balance from baseline to the evaluation stay after two years. Non-significant improvements were observed in physical QoL, anxiety and depression, and BMI. ADL function remained stable with no significant decline. None of the cognitive measures showed a significant decline. An analysis of individual cases revealed that four of the six participants who completed the program sustained or improved their motor function, while motor function declined in two participants. All six patients who completed the program reported improved or stable QoL throughout the study period. Conclusion: Our findings suggest that participation in an intensive rehabilitation program is well tolerated among motivated patients with early- to mid-stage HD. The findings should be interpreted with caution due to the small sample size. PMID:25642382
Off-Design Performance of a Multi-Stage Supersonic Turbine
NASA Technical Reports Server (NTRS)
Dorney, Daniel J.; Griffin, Lisa W.; Huber, Frank; Sondak, Douglas L.
2003-01-01
The drive towards high-work turbines has led to designs which can be compact, transonic, supersonic, counter rotating, or use a dense drive gas. These aggressive designs can lead to strong unsteady secondary flows and flow separation. The amplitude and extent of these unsteady flow phenomena can be amplified at off-design operating conditions. Pre-test off-design predictions have been performed for a new two-stage supersonic turbine design that is currently being tested in air. The simulations were performed using a three-dimensional unsteady Navier-Stokes analysis, and the predicted results have been compared with solutions from a validated meanline analysis.
NASA Astrophysics Data System (ADS)
Amiroh; Priaminiarti, M.; Syahraini, S. I.
2017-08-01
Age estimation of individuals, both dead and living, is important for victim identification and legal certainty. The Demirjian method uses the third molar for age estimation of individuals above 15 years old. The aim of this study was to compare age estimates for individuals aged 15-25 years obtained with two Demirjian methods. The development stages of the third molars in panoramic radiographs of 50 male and female subjects were assessed by two observers using Demirjian’s ten-stage method and the two-teeth regression formula. Reliability was calculated using Cohen’s kappa coefficient, and the significance of the observations was obtained from Wilcoxon tests. Deviations of the age estimates were calculated for both methods. The deviation of age estimation with the two-teeth regression formula was ±1.090 years; with the ten-stage method, it was ±1.191 years. The deviation with the two-teeth regression formula was thus smaller than with the ten-stage method. Age estimates from the two methods differ significantly up to the age of 25, but both methods can be applied up to the age of 22.
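A minimal sketch of how a two-teeth regression estimate and its deviation might be computed. The linear form and its coefficients below are placeholders for illustration only, not the published Demirjian regression.

```python
def estimate_age_two_teeth(stage_left, stage_right, b0=8.0, b1=0.9, b2=0.9):
    """Hypothetical linear form: age = b0 + b1*stage + b2*stage, with
    third-molar development stages coded 1-10. The coefficients are
    placeholders, not the published regression coefficients."""
    return b0 + b1 * stage_left + b2 * stage_right

def mean_absolute_deviation(estimated, chronological):
    """Average absolute gap between estimated and true chronological ages."""
    return sum(abs(e - c) for e, c in zip(estimated, chronological)) / len(estimated)
```

Fitting such a formula to a reference sample and reporting the mean absolute deviation is how the ±1.090-year figure for the regression method would be characterized.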
Cryogenic system with GM cryocooler for krypton, xenon separation from hydrogen-helium purge gas
NASA Astrophysics Data System (ADS)
Chu, X. X.; Zhang, M. M.; Zhang, D. X.; Xu, D.; Qian, Y.; Liu, W.
2014-01-01
In the thorium molten salt reactor (TMSR), fission products such as krypton, xenon and tritium are produced continuously by the nuclear fission reaction. A cryogenic system with a two-stage GM cryocooler was designed to separate Kr, Xe, and H2 from the helium purge gas. The temperatures of the two heat exchanger condensation tanks were maintained at about 38 K and 4.5 K, respectively. The main heat transfer fluid parameters were confirmed, and the heat exchanger equipment and cold box structures were designed. The design concentrations of Kr, Xe and H2 in the helium recycle gas after cryogenic separation are less than 1 ppb.
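A first-order sanity check of the two stage temperatures is to compare them with the normal boiling points of the species involved: the 38 K tank removes Kr and Xe while the 4.5 K tank removes H2 and passes He. This sketch uses standard handbook boiling points and deliberately ignores operating pressure and partial-pressure effects, which the real design must account for.

```python
# Normal boiling points (K) -- standard handbook values
BOILING_POINT = {"Xe": 165.1, "Kr": 119.7, "H2": 20.3, "He": 4.2}

def trapped_species(stage_temp_k, species=BOILING_POINT):
    """Species whose normal boiling point lies above the stage temperature
    condense (or freeze) out of the helium stream; a first-order check
    only -- real sizing uses vapour pressures at the operating pressure."""
    return sorted(s for s, bp in species.items() if bp > stage_temp_k)

first_stage = trapped_species(38.0)   # Kr and Xe drop out at the 38 K tank
second_stage = trapped_species(4.5)   # H2 also drops out; He passes through
```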
Raab, Jennifer; Haupt, Florian; Scholz, Marlon; Matzke, Claudia; Warncke, Katharina; Lange, Karin; Assfalg, Robin; Weininger, Katharina; Wittich, Susanne; Löbner, Stephanie; Beyerlein, Andreas; Nennstiel-Ratzel, Uta; Lang, Martin; Laub, Otto; Dunstheimer, Desiree; Bonifacio, Ezio; Achenbach, Peter; Winkler, Christiane; Ziegler, Anette-G
2016-01-01
Introduction Type 1 diabetes can be diagnosed at an early presymptomatic stage by the detection of islet autoantibodies. The Fr1da study aims to assess whether early staging of type 1 diabetes (1) is feasible at a population-based level, (2) prevents severe metabolic decompensation observed at the clinical manifestation of type 1 diabetes and (3) reduces psychological distress through preventive teaching and care. Methods and analysis Children aged 2–5 years in Bavaria, Germany, will be tested for the presence of multiple islet autoantibodies. Between February 2015 and December 2016, 100 000 children will be screened by primary care paediatricians. Islet autoantibodies are measured in capillary blood samples using a multiplex three-screen ELISA. Samples with ELISA results >97.5th centile are retested using reference radiobinding assays. A venous blood sample is also obtained to confirm the autoantibody status of children with at least two autoantibodies. Children with confirmed multiple islet autoantibodies are diagnosed with pre-type 1 diabetes. These children and their parents are invited to participate in an education and counselling programme at a local diabetes centre. Depression and anxiety, and burden of early diagnosis are also assessed. Results Of the 1027 Bavarian paediatricians, 39.3% are participating in the study. Overall, 26 760 children have been screened between February 2015 and November 2015. Capillary blood collection was sufficient in volume for islet autoantibody detection in 99.46% of the children. The remaining 0.54% had insufficient blood volume collected. Of the 26 760 capillary samples tested, 0.39% were positive for at least two islet autoantibodies. Discussion Staging for early type 1 diabetes within a public health setting appears to be feasible. The study may set new standards for the early diagnosis of type 1 diabetes and education. 
Ethics and dissemination The study was approved by the ethics committee of Technische Universität München (Nr. 70/14). PMID:27194320
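The screening cascade described in the protocol (capillary ELISA above the 97.5th centile, radiobinding retest, venous confirmation of at least two autoantibodies) can be sketched as a decision function. Function, argument, and label names below are illustrative, not from the study's codebase.

```python
def fr1da_screen_outcome(elisa_centile, rba_autoantibodies=None,
                         venous_autoantibodies=None):
    """Staged decision flow as described in the protocol; names and
    return labels are illustrative, not from the study's codebase."""
    if elisa_centile <= 97.5:
        return "negative"                      # first-stage ELISA screen
    if rba_autoantibodies is None or rba_autoantibodies < 2:
        return "not confirmed"                 # reference radiobinding retest
    if venous_autoantibodies is not None and venous_autoantibodies >= 2:
        return "pre-type 1 diabetes"           # venous confirmation
    return "awaiting venous confirmation"
```

The point of the cascade is that the cheap, high-throughput ELISA carries the population-level workload, while the more specific assays are reserved for the small positive fraction.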
Raab, Jennifer; Haupt, Florian; Scholz, Marlon; Matzke, Claudia; Warncke, Katharina; Lange, Karin; Assfalg, Robin; Weininger, Katharina; Wittich, Susanne; Löbner, Stephanie; Beyerlein, Andreas; Nennstiel-Ratzel, Uta; Lang, Martin; Laub, Otto; Dunstheimer, Desiree; Bonifacio, Ezio; Achenbach, Peter; Winkler, Christiane; Ziegler, Anette-G
2016-05-18
Type 1 diabetes can be diagnosed at an early presymptomatic stage by the detection of islet autoantibodies. The Fr1da study aims to assess whether early staging of type 1 diabetes (1) is feasible at a population-based level, (2) prevents severe metabolic decompensation observed at the clinical manifestation of type 1 diabetes and (3) reduces psychological distress through preventive teaching and care. Children aged 2-5 years in Bavaria, Germany, will be tested for the presence of multiple islet autoantibodies. Between February 2015 and December 2016, 100 000 children will be screened by primary care paediatricians. Islet autoantibodies are measured in capillary blood samples using a multiplex three-screen ELISA. Samples with ELISA results >97.5th centile are retested using reference radiobinding assays. A venous blood sample is also obtained to confirm the autoantibody status of children with at least two autoantibodies. Children with confirmed multiple islet autoantibodies are diagnosed with pre-type 1 diabetes. These children and their parents are invited to participate in an education and counselling programme at a local diabetes centre. Depression and anxiety, and burden of early diagnosis are also assessed. Of the 1027 Bavarian paediatricians, 39.3% are participating in the study. Overall, 26 760 children have been screened between February 2015 and November 2015. Capillary blood collection was sufficient in volume for islet autoantibody detection in 99.46% of the children. The remaining 0.54% had insufficient blood volume collected. Of the 26 760 capillary samples tested, 0.39% were positive for at least two islet autoantibodies. Staging for early type 1 diabetes within a public health setting appears to be feasible. The study may set new standards for the early diagnosis of type 1 diabetes and education. The study was approved by the ethics committee of Technische Universität München (Nr. 70/14). Published by the BMJ Publishing Group Limited. 
Li, Jing; Dharmarajan, Kumar; Li, Xi; Lin, Zhenqiu; Normand, Sharon-Lise T; Krumholz, Harlan M; Jiang, Lixin
2014-03-07
During the past decade, the volume of percutaneous coronary intervention (PCI) in China has risen by more than 20-fold. Yet little is known about patterns of care and outcomes across hospitals, regions and time during this period of rising cardiovascular disease and dynamic change in the Chinese healthcare system. Using the China PEACE (Patient-centered Evaluative Assessment of Cardiac Events) research network, the Retrospective Study of Coronary Catheterisation and Percutaneous Coronary Intervention (China PEACE-Retrospective CathPCI Study) will examine a nationally representative sample of 11 900 patients who underwent coronary catheterisation or PCI at 55 Chinese hospitals during 2001, 2006 and 2011. We selected patients and study sites using a two-stage cluster sampling design with simple random sampling stratified within economical-geographical strata. A central coordinating centre will monitor data quality at the stages of case ascertainment, medical record abstraction and data management. We will examine patient characteristics, diagnostic testing patterns, procedural treatments and in-hospital outcomes, including death, complications of treatment and costs of hospitalisation. We will additionally characterise variation in treatments and outcomes by patient characteristics, hospital, region and study year. The China PEACE collaboration is designed to translate research into improved care for patients. The study protocol was approved by the central ethics committee at the China National Center for Cardiovascular Diseases (NCCD) and collaborating hospitals. Findings will be shared with participating hospitals, policymakers and the academic community to promote quality monitoring, quality improvement and the efficient allocation and use of coronary catheterisation and PCI in China. http://www.clinicaltrials.gov (NCT01624896).
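A two-stage cluster sample with simple random sampling within strata, as described above, can be sketched as follows. The strata names, hospital IDs, and sample sizes are hypothetical; the real study's sampling frame and weights are more involved.

```python
import random

def two_stage_sample(strata, hospitals_per_stratum, patients_per_hospital, seed=0):
    """strata: {stratum_name: {hospital_id: [patient_ids]}}.
    Stage 1: simple random sample of hospitals within each
    economic-geographical stratum; stage 2: simple random sample of
    patient records within each selected hospital."""
    rng = random.Random(seed)
    chosen = {}
    for stratum in sorted(strata):
        hospitals = strata[stratum]
        picked = rng.sample(sorted(hospitals), min(hospitals_per_stratum, len(hospitals)))
        for h in picked:
            patients = hospitals[h]
            chosen[h] = rng.sample(patients, min(patients_per_hospital, len(patients)))
    return chosen

# Hypothetical frame: two strata, each hospital with 100 patient records
strata = {
    "east-urban": {"H1": list(range(100)), "H2": list(range(100)), "H3": list(range(100))},
    "west-rural": {"H4": list(range(100)), "H5": list(range(100))},
}
sample = two_stage_sample(strata, hospitals_per_stratum=2, patients_per_hospital=5)
```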
Li, Jing; Dharmarajan, Kumar; Li, Xi; Lin, Zhenqiu; Normand, Sharon-Lise T; Krumholz, Harlan M; Jiang, Lixin
2014-01-01
Introduction During the past decade, the volume of percutaneous coronary intervention (PCI) in China has risen by more than 20-fold. Yet little is known about patterns of care and outcomes across hospitals, regions and time during this period of rising cardiovascular disease and dynamic change in the Chinese healthcare system. Methods and analysis Using the China PEACE (Patient-centered Evaluative Assessment of Cardiac Events) research network, the Retrospective Study of Coronary Catheterisation and Percutaneous Coronary Intervention (China PEACE-Retrospective CathPCI Study) will examine a nationally representative sample of 11 900 patients who underwent coronary catheterisation or PCI at 55 Chinese hospitals during 2001, 2006 and 2011. We selected patients and study sites using a two-stage cluster sampling design with simple random sampling stratified within economical-geographical strata. A central coordinating centre will monitor data quality at the stages of case ascertainment, medical record abstraction and data management. We will examine patient characteristics, diagnostic testing patterns, procedural treatments and in-hospital outcomes, including death, complications of treatment and costs of hospitalisation. We will additionally characterise variation in treatments and outcomes by patient characteristics, hospital, region and study year. Ethics and dissemination The China PEACE collaboration is designed to translate research into improved care for patients. The study protocol was approved by the central ethics committee at the China National Center for Cardiovascular Diseases (NCCD) and collaborating hospitals. Findings will be shared with participating hospitals, policymakers and the academic community to promote quality monitoring, quality improvement and the efficient allocation and use of coronary catheterisation and PCI in China. Registration details http://www.clinicaltrials.gov (NCT01624896). PMID:24607563
ERIC Educational Resources Information Center
Barbera, Elena; Garcia, Iolanda; Fuertes-Alpiste, Marc
2017-01-01
This paper presents a case study of the co-design process for an online course on Sustainable Development (Degree in Tourism) involving the teacher, two students, and the project researchers. The co-design process was founded on an inquiry-based and technology-enhanced model that takes shape in a set of design principles. The research had two main…
Food Allergy Among U.S. Children: Trends in Prevalence and Hospitalizations
... the United States is becoming more common over time. In 2007, the reported food allergy rate among ... excluded. The NHDS uses a three-stage sampling design procedure to produce national estimates of hospital discharges. ...
Efficient two-stage dual-beam noncollinear optical parametric amplifier
NASA Astrophysics Data System (ADS)
Cheng, Yu-Hsiang; Gao, Frank Y.; Poulin, Peter R.; Nelson, Keith A.
2018-06-01
We have constructed a noncollinear optical parametric amplifier with two signal beams amplified in the same nonlinear crystal. This dual-beam design is more energy-efficient than operating two amplifiers in parallel. The cross-talk between the two beams has been characterized and discussed. We have also added a second amplification stage to enhance the output of one of the arms, which is then frequency-doubled for ultraviolet generation. This single device provides two tunable sources for ultrafast spectroscopy in the ultraviolet and visible regions.
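Frequency doubling halves the wavelength, so a tunable visible NOPA signal maps directly into the ultraviolet. A trivial sketch, with assumed signal wavelengths chosen only for illustration:

```python
def shg_wavelength_nm(signal_nm):
    """Second-harmonic generation halves the wavelength, so doubling a
    visible signal reaches the ultraviolet."""
    return signal_nm / 2.0

# Assumed visible signal wavelengths (nm) for illustration
uv = [shg_wavelength_nm(x) for x in (480.0, 560.0, 700.0)]
```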
The AIDS Memorial Quilt as preventative education: a developmental analysis of the Quilt.
Knaus, C S; Austin, E W
1999-12-01
This study consisted of a survey given to college students (N = 560) at a rural university in the Pacific Northwest. The sample was randomly assigned into four groups, following the Solomon four-group study design. The two levels of treatment included interventions consisting of a visit to the AIDS Memorial Quilt for the experimental groups and attendance at an unrelated event for the control groups. Pretests were completed 4 weeks prior to interventions; posttests were completed by the entire sample 4 weeks after the interventions. Results confirmed expected differences among the four groups in terms of social distance, perceptions of people with AIDS, self-efficacy, and discussion of risky behavior. The results suggest that the AIDS Memorial Quilt addresses issues centrally related to behavior change and indicates support for the message interpretation process and stages of change models.
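The Solomon four-group structure crosses pretesting (yes/no) with treatment (Quilt visit vs unrelated event), which lets pretest sensitization be separated from the intervention effect. A sketch of the layout and a random assignment helper; the group numbering and helper are illustrative.

```python
import random

# All four groups take the posttest; only two are pretested, and only
# two receive the treatment (the Quilt visit).
SOLOMON_GROUPS = [
    {"group": 1, "pretest": True,  "treatment": True},
    {"group": 2, "pretest": True,  "treatment": False},
    {"group": 3, "pretest": False, "treatment": True},
    {"group": 4, "pretest": False, "treatment": False},
]

def assign(participants, rng):
    """Randomly split participants evenly across the four groups."""
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {g["group"]: shuffled[i::4] for i, g in enumerate(SOLOMON_GROUPS)}

groups = assign(range(8), random.Random(0))
```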
Utilizing broadband X-rays in a Bragg coherent X-ray diffraction imaging experiment
Cha, Wonsuk; Liu, Wenjun; Harder, Ross; ...
2016-07-26
A method is presented to simplify Bragg coherent X-ray diffraction imaging studies of complex heterogeneous crystalline materials with a two-stage screening/imaging process that utilizes polychromatic and monochromatic coherent X-rays and is compatible with in situ sample environments. Coherent white-beam diffraction is used to identify an individual crystal particle or grain that displays desired properties within a larger population. A three-dimensional reciprocal-space map suitable for diffraction imaging is then measured for the Bragg peak of interest using a monochromatic beam energy scan that requires no sample motion, thus simplifying in situ chamber design. This approach was demonstrated with Au nanoparticles and will enable, for example, individual grains in a polycrystalline material of specific orientation to be selected, then imaged in three dimensions while under load.
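The energy scan works because, under Bragg's law, the angle that satisfies the diffraction condition shifts with photon energy, so sweeping the energy moves the Ewald sphere through the fixed Bragg peak without rocking the sample. A small sketch under first-order Bragg's-law assumptions; the Au(111) d-spacing of ~2.355 Å is an assumed illustrative value.

```python
import math

HC_KEV_ANGSTROM = 12.398  # hc in keV*Angstrom

def bragg_angle_deg(energy_kev, d_angstrom):
    """First-order Bragg condition: lambda = hc/E, sin(theta) = lambda/(2d).
    With sample and detector fixed, scanning the energy sweeps through
    the Bragg peak instead of rocking the sample."""
    lam = HC_KEV_ANGSTROM / energy_kev
    return math.degrees(math.asin(lam / (2.0 * d_angstrom)))

# Assumed Au(111) d-spacing of ~2.355 Angstrom for illustration
angles = [bragg_angle_deg(e, 2.355) for e in (8.9, 9.0, 9.1)]
```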
Utilizing broadband X-rays in a Bragg coherent X-ray diffraction imaging experiment.
Cha, Wonsuk; Liu, Wenjun; Harder, Ross; Xu, Ruqing; Fuoss, Paul H; Hruszkewycz, Stephan O
2016-09-01
A method is presented to simplify Bragg coherent X-ray diffraction imaging studies of complex heterogeneous crystalline materials with a two-stage screening/imaging process that utilizes polychromatic and monochromatic coherent X-rays and is compatible with in situ sample environments. Coherent white-beam diffraction is used to identify an individual crystal particle or grain that displays desired properties within a larger population. A three-dimensional reciprocal-space map suitable for diffraction imaging is then measured for the Bragg peak of interest using a monochromatic beam energy scan that requires no sample motion, thus simplifying in situ chamber design. This approach was demonstrated with Au nanoparticles and will enable, for example, individual grains in a polycrystalline material of specific orientation to be selected, then imaged in three dimensions while under load.
Sample Acquisition Drilling System for the Resource Prospector Mission
NASA Astrophysics Data System (ADS)
Zacny, K.; Paulsen, G.; Quinn, J.; Smith, J.; Kleinhenz, J.
2015-12-01
The goal of the Lunar Resource Prospector Mission (RPM) is to capture and identify volatile species within the top meter of the lunar regolith. The RPM drill has been designed to 1. generate cuttings and place them on the surface for analysis by the Near InfraRed Volatiles Spectrometer Subsystem (NIRVSS), and 2. capture cuttings and transfer them to the Oxygen and Volatile Extraction Node (OVEN) coupled with the Lunar Advanced Volatiles Analysis (LAVA) subsystem. The RPM drill is based on the Mars Icebreaker drill developed for capturing samples of ice and ice-cemented ground on Mars. The drill weighs approximately 10 kg and is rated at ~300 W. It is a rotary-percussive, fully autonomous system designed to capture cuttings for analysis. The drill consists of: 1. a rotary-percussive drill head, 2. a sampling auger, 3. a brushing station, 4. a Z-stage, and 5. a deployment stage. To reduce sample handling complexity, the drill auger is designed to capture cuttings as opposed to cores. High sampling efficiency is possible through a dual design of the auger: the lower section has deep, low-pitch flutes for retaining cuttings, while the upper section is designed to efficiently move cuttings out of the hole. The drill uses a "bite" sampling approach in which samples are captured in ~10 cm intervals. The first-generation drill was tested in a Mars chamber as well as in Antarctica and the Arctic. It demonstrated drilling at the 1-1-100-100 level (1 meter in 1 hour with 100 W and 100 N weight on bit) in ice, ice-cemented ground, soil, and rocks. The second-generation drill was deployed on a Carnegie Mellon University rover, called Zoe, and tested in the Atacama in 2012. The tests demonstrated fully autonomous sample acquisition and delivery to a carousel. The third-generation drill was tested in NASA GRC's vacuum chamber, VF13, at 10^-5 torr and approximately 200 K. It demonstrated successful capture and transfer of icy samples to a crucible. 
The drill has since been modified and integrated onto the NASA JSC RPM rover, and has been undergoing testing in the lab and in the field during the summer of 2015.
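The "bite" sampling approach described above can be sketched as a simple depth schedule, assuming the stated 1 m target depth and ~10 cm bites: the auger advances one bite, returns to the surface to deliver its cuttings, then re-enters the hole.

```python
def bite_depths(total_depth_m=1.0, bite_m=0.10):
    """Depth intervals for the 'bite' approach: the auger advances ~10 cm,
    delivers the cuttings at the surface, then re-enters the hole."""
    n = round(total_depth_m / bite_m)
    return [(round(i * bite_m, 2), round((i + 1) * bite_m, 2)) for i in range(n)]

depths = bite_depths()  # ten 10 cm bites down to 1 m
```

Keeping each bite shallow means the retained cuttings always come from a known, narrow depth interval, which preserves the stratigraphy of any volatiles.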
NASA Astrophysics Data System (ADS)
Moler, Perry J.
The purpose of this study was to understand what perceptions junior and senior engineering & technology students have about change, change readiness, and selected attributes, skills, and abilities. The selected attributes, skills, and abilities for this study were lifelong learning, leadership, and self-efficacy. The business environment of today is dynamic, with any number of internal and external events requiring an organization to adapt through the process of organizational development. Organizational developments affect businesses as a whole, but they are more evident in fields related to engineering and technology, which require employees working through such developments to be flexible and adaptable to a new professional environment. This study used an explanatory sequential mixed-methods design, with Stage One being an online survey that collected individuals' perceptions of change, change readiness, and associated attributes, skills, and abilities. Stage Two was a face-to-face interview with a random sample of individuals who agreed in Stage One to be interviewed, conducted to understand why students' perceptions are what they are. By using a mixed-methods study, a more complete understanding of students' current perceptions was developed, giving external stakeholders such as human resource managers more insight into the individuals they seek to recruit. In Stage One, one-sample t-tests with a predicted mean of 3.000 indicated that engineering & technology students have positive perceptions of change (mean = 3.7024), change readiness (mean = 3.9313), lifelong learning (mean = 4.571), leadership (mean = 4.036), and self-efficacy (mean = 4.321). A one-way ANOVA was also conducted to examine differences between traditional and non-traditional students regarding change and change readiness; it indicated no significant differences between the two groups. 
The results from Stage Two showed that students perceived change as both positive and negative, a perception that stems from their life experiences rather than from educational or professional experiences. The same can be said for the concepts of change readiness, lifelong learning, leadership, and self-efficacy. This indicates that engineering & technology programs should incorporate these concepts into their curricula to better prepare engineering & technology students to enter professional careers.
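A minimal sketch of the Stage One analysis: a one-sample t-test of Likert-scale means against the neutral midpoint of 3.000. The response values below are invented for illustration; a positive t statistic corresponds to a mean above the midpoint, as reported for all five constructs.

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(values, mu0=3.000):
    """t statistic for H0: population mean equals mu0, the neutral
    midpoint of the survey scale used in Stage One."""
    return (mean(values) - mu0) / (stdev(values) / sqrt(len(values)))

# Invented 5-point Likert responses for illustration
scores = [4, 4, 3, 5, 4, 3, 4, 5]
t_stat = one_sample_t(scores)  # positive t: mean sits above the midpoint
```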