An adaptive two-stage sequential design for sampling rare and clustered populations
Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.
2008-01-01
How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.
ERIC Educational Resources Information Center
Wilcox, Rand R.
1984-01-01
Two-stage multiple-comparison procedures give an exact solution to problems of power and Type I errors but require equal sample sizes in the first stage. This paper suggests a method of evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)
A sequential bioequivalence design with a potential ethical advantage.
Fuglsang, Anders
2014-07-01
This paper introduces a two-stage approach for evaluation of bioequivalence, where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5% with a minute occasional trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when a second stage can be abolished due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H
2015-11-30
We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
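The best-performing two-stage approach above — regressing cluster means on a treatment indicator, weighted by the inverse of the theoretical variance of each cluster mean, constrained positive — can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the variance components are taken as supplied, whereas in practice they would be estimated from the data.

```python
import numpy as np

def two_stage_weighted_test(cluster_means, cluster_sizes, arms,
                            sigma2_between, sigma2_within):
    """Stage 2 of a two-stage cluster-randomized analysis: weighted least
    squares of cluster means on a treatment indicator, with weights equal
    to the inverse theoretical variance of each cluster mean (constrained
    positive). Variance components are assumed given here."""
    y = np.asarray(cluster_means, float)
    n = np.asarray(cluster_sizes, float)
    x = np.asarray(arms, float)
    # Var(cluster mean) = sigma2_between + sigma2_within / n_i, kept positive
    w = 1.0 / np.maximum(sigma2_between + sigma2_within / n, 1e-12)
    X = np.column_stack([np.ones_like(x), x])      # intercept + arm indicator
    XtWX = X.T @ (w[:, None] * X)
    beta = np.linalg.solve(XtWX, X.T @ (w * y))
    resid = y - X @ beta
    df = len(y) - 2
    scale = (w * resid ** 2).sum() / df            # overdispersion estimate
    se = np.sqrt(scale * np.linalg.inv(XtWX)[1, 1])
    return beta[1], beta[1] / se, df               # effect, t statistic, df
```

With equal cluster sizes the weights are equal and the effect estimate reduces to the simple difference of arm-specific means of the cluster means, consistent with the exact behavior for balanced data noted above.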
Galway, Lp; Bell, Nathaniel; Sae, Al Shatari; Hagopian, Amy; Burnham, Gilbert; Flaxman, Abraham; Weiss, Wiliam M; Rajaratnam, Julie; Takaro, Tim K
2012-04-27
Mortality estimates can measure and monitor the impacts of conflict on a population, guide humanitarian efforts, and help to better understand the public health impacts of conflict. Vital statistics registration and surveillance systems are rarely functional in conflict settings, so mortality must often be estimated using retrospective population-based surveys. We present a two-stage cluster sampling method for application in population-based mortality surveys. The sampling method utilizes gridded population data and a geographic information system (GIS) to select clusters in the first sampling stage, and Google Earth™ imagery and sampling grids to select households in the second sampling stage. The sampling method was implemented in a household mortality study in Iraq in 2011. Factors affecting feasibility and methodological quality are described. Sampling is a challenge in retrospective population-based mortality studies, and alternatives that improve on the conventional approaches are needed. The sampling strategy presented here was designed to generate a representative sample of the Iraqi population while reducing the potential for bias and considering the context-specific challenges of the study setting. This sampling strategy, or variations on it, is adaptable and should be considered and tested in other conflict settings.
A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market
Hu, Zhineng; Lu, Wei; Han, Bing
2015-01-01
This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. Impact analysis of the key factors shows that an increase in either the external or the internal coefficient has a negative influence on the sampling level; the rate of change of the potential market has no significant influence, whereas repeat purchasing has a positive one. Using logistic and regression analysis, a global sensitivity analysis examines the interactions among all parameters, providing a two-stage method to estimate the impact of the relevant parameters when their values are inexact and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847
SEMIPARAMETRIC ADDITIVE RISKS REGRESSION FOR TWO-STAGE DESIGN SURVIVAL STUDIES
Li, Gang; Wu, Tong Tong
2011-01-01
In this article we study a semiparametric additive risks model (McKeague and Sasieni (1994)) for two-stage design survival data where accurate information is available only on second stage subjects, a subset of the first stage study. We derive two-stage estimators by combining data from both stages. Large sample inferences are developed. As a by-product, we also obtain asymptotic properties of the single stage estimators of McKeague and Sasieni (1994) when the semiparametric additive risks model is misspecified. The proposed two-stage estimators are shown to be asymptotically more efficient than the second stage estimators. They also demonstrate smaller bias and variance for finite samples. The developed methods are illustrated using small intestine cancer data from the SEER (Surveillance, Epidemiology, and End Results) Program. PMID:21931467
Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis; Hansen, Per Christian
2016-04-01
Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013), doi:10.1364/OE.21.012185], and preliminary results demonstrated improved reconstruction compared with a given two-stage method. Using simulated free-space propagation experiments with a single sample-detector distance, we thoroughly compare the novel method with the two-stage method to address limitations of the preliminary results. We demonstrate that the novel method is substantially more robust to noise; our simulations point to a possible reduction in counting times by an order of magnitude.
NASA Astrophysics Data System (ADS)
Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi
2017-02-01
A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.
Sampling and estimating recreational use.
Timothy G. Gregoire; Gregory J. Buhyoff
1999-01-01
Probability sampling methods applicable to estimate recreational use are presented. Both single- and multiple-access recreation sites are considered. One- and two-stage sampling methods are presented. Estimation of recreational use is presented in a series of examples.
Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.
Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping
2015-06-07
Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology, to reduce the number of patients placed on ineffective experimental therapies. Recently, Koyama and Chen (2008) discussed how to conduct proper inference for such studies, because they found that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies whose actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations: it generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan: reported p-values, point estimates, and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
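For context, the conventional stagewise-ordering p-value that this paper's likelihood-ratio ordering is compared against can be sketched as follows. This is a minimal illustration of the baseline, not the method proposed in the abstract, and it assumes the design stops for futility at stage 1 when x1 ≤ r1.

```python
from scipy.stats import binom

def simon_p_value(x1, n1, r1, x2, n2, p0):
    """Stagewise-ordering p-value for a Simon two-stage design.

    The trial stops for futility after stage 1 when x1 <= r1; otherwise
    n2 further patients are enrolled. x1, x2 are observed responders."""
    if x1 <= r1:
        # Stopped at stage 1: p = P(X1 >= x1 | p0)
        return binom.sf(x1 - 1, n1, p0)
    total = x1 + x2
    # Continued: sum over all stage-1 outcomes m that also continue,
    # requiring at least total - m responders at stage 2
    return sum(binom.pmf(m, n1, p0) * binom.sf(total - m - 1, n2, p0)
               for m in range(r1 + 1, n1 + 1))
```

As the abstract notes, such p-values must account for the actual sampling plan; the two branches above are exactly where a naive single-stage binomial test goes wrong.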
Yoon, Dukyong; Kim, Hyosil; Suh-Kim, Haeyoung; Park, Rae Woong; Lee, KiYoung
2011-01-01
Microarray analyses based on differentially expressed genes (DEGs) have been widely used to distinguish samples across different cellular conditions. However, studies based on DEGs have not been able to clearly determine significant differences between samples of pathophysiologically similar HIV-1 stages, e.g., between acute and chronic progressive (or AIDS) or between uninfected and clinically latent stages. We here suggest a novel approach to allow such discrimination based on stage-specific genetic features of HIV-1 infection. Our approach is based on co-expression changes of genes known to interact. In contrast to existing protein-protein-interaction analyses with correlational designs, the method can identify a genetic signature for a single sample. Our approach distinguishes each sample using differentially co-expressed interacting protein pairs (DEPs), based on co-expression scores of individual interacting pairs within a sample. The co-expression score has a positive value if two genes in a sample are simultaneously up-regulated or down-regulated, and a higher absolute value if the expression-change ratios of the two genes are similar. We compared the characteristics of DEPs with those of DEGs by evaluating their usefulness in separating HIV-1 stages, and identified DEP-based network modules and their gene-ontology enrichment to uncover the HIV-1 stage-specific gene signature. Based on the DEP approach, we observed clear separation among samples from distinct HIV-1 stages using clustering and principal component analyses. Moreover, the discrimination power of DEPs on the samples (70-100% accuracy) was much higher than that of DEGs (35-45%) using several well-known classifiers. DEP-based network analysis also revealed the HIV-1 stage-specific network modules; the main biological processes were related to "translation," "RNA splicing," "mRNA, RNA, and nucleic acid transport," and "DNA metabolism." Through the HIV-1 stage-related modules, changing stage-specific patterns of protein interactions could be observed. The DEP-based method discriminated the HIV-1 infection stages clearly and revealed an HIV-1 stage-specific gene signature. The proposed method might complement existing DEG-based approaches in various microarray expression analyses.
NASA Astrophysics Data System (ADS)
Amiroh; Priaminiarti, M.; Syahraini, S. I.
2017-08-01
Age estimation of individuals, both dead and living, is important for victim identification and legal certainty. The Demirjian method uses the third molar for age estimation of individuals above 15 years old. The aim is to compare age estimation between 15 and 25 years using two Demirjian methods. The development stages of the third molars in panoramic radiographs of 50 male and female samples were assessed by two observers using Demirjian's ten stages and a two-teeth regression formula. Reliability was calculated using Cohen's kappa coefficient, and the significance of the observations was obtained from Wilcoxon tests. Deviations of the age estimates were calculated for both methods. The deviation of age estimation with the two-teeth regression formula was ±1.090 years; with the ten stages, it was ±1.191 years. The deviation of age estimation using the two-teeth regression formula was thus less than with the ten-stage method. The age estimations using the two-teeth regression formula or the ten-stage method are significantly different until the age of 25, but they can be applied up to the age of 22.
Miller, Ezer; Huppert, Amit; Novikov, Ilya; Warburg, Alon; Hailu, Asrat; Abbasi, Ibrahim; Freedman, Laurence S
2015-11-10
In this work, we describe a two-stage sampling design to estimate the infection prevalence in a population. In the first stage, an imperfect diagnostic test was performed on a random sample of the population. In the second stage, a different imperfect test was performed on a stratified random sample of the first sample. To estimate the infection prevalence, we assumed conditional independence between the diagnostic tests and developed method-of-moments estimators based on the expected proportions of people with positive and negative results on both tests, which are functions of the tests' sensitivity, specificity, and the infection prevalence. A closed-form solution of the estimating equations was obtained assuming a specificity of 100% for both tests. We applied our method to estimate the infection prevalence of visceral leishmaniasis according to two quantitative polymerase chain reaction tests performed on blood samples taken from 4756 patients in northern Ethiopia. The sensitivities of the tests were also estimated, as well as the standard errors of all estimates, using a parametric bootstrap. We also examined the impact of departures from our assumptions of 100% specificity and conditional independence on the estimated prevalence. Copyright © 2015 John Wiley & Sons, Ltd.
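Under the stated assumptions (100% specificity for both tests and conditional independence), the closed-form method-of-moments estimators follow directly from the moment equations. A simplified sketch, assuming for illustration that both tests are applied to the same full sample rather than the paper's stratified second stage:

```python
def prevalence_two_tests(n, n1, n2, n12):
    """Method-of-moments estimates from two imperfect diagnostic tests,
    assuming 100% specificity for both tests and conditional independence.

    With prevalence pi and sensitivities s1, s2, the expected positive
    proportions are p1 = pi*s1, p2 = pi*s2, and p12 = pi*s1*s2, giving
    pi = p1*p2/p12, s1 = p12/p2, s2 = p12/p1.

    n: sample size; n1, n2: positives on each test; n12: positive on both."""
    p1, p2, p12 = n1 / n, n2 / n, n12 / n
    return p1 * p2 / p12, p12 / p2, p12 / p1   # prevalence, sens1, sens2
```

Standard errors would then be obtained by a parametric bootstrap, as in the paper: resample counts from the fitted multinomial and re-apply the estimator.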
High pressure studies using two-stage diamond micro-anvils grown by chemical vapor deposition
Vohra, Yogesh K.; Samudrala, Gopi K.; Moore, Samuel L.; ...
2015-06-10
Ultra-high static pressures have been achieved in the laboratory using two-stage micro-ball nanodiamond anvils as well as two-stage micro-paired diamond anvils machined using a focused ion-beam system. The two-stage diamond anvil designs implemented thus far suffer from a limitation of one diamond anvil sliding past the other at extreme conditions. We describe a new method of fabricating two-stage diamond micro-anvils using a tungsten mask on a standard diamond anvil followed by microwave plasma chemical vapor deposition (CVD) homoepitaxial diamond growth. A prototype two-stage diamond anvil with a 300 μm culet and a CVD diamond second stage of 50 μm in diameter was fabricated. We have carried out preliminary high pressure X-ray diffraction studies on a sample of the rare-earth metal lutetium with a copper pressure standard to 86 GPa. Furthermore, the micro-anvil grown by CVD remained intact during indentation of the gasket as well as on decompression from the highest pressure of 86 GPa.
Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E
2016-06-01
Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
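The idea of generating synthetic populations that can be treated as simple random samples at the imputation stage can be illustrated roughly as below. This is a loud simplification: the published procedure uses a weighted finite-population Pólya urn, whereas the single Dirichlet draw centered on the case weights here is only a crude stand-in for the Bayesian bootstrap step.

```python
import numpy as np

def synthetic_population(sample, weights, N, rng=None):
    """Crude sketch: draw one synthetic population of size N from a
    weighted sample, so that downstream imputation can proceed as if
    under simple random sampling. NOTE: a simplified stand-in for the
    weighted finite-population Bayesian bootstrap (Polya urn) procedure."""
    if rng is None:
        rng = np.random.default_rng()
    w = np.asarray(weights, float)
    # One posterior-like draw of selection probabilities centered on the
    # normalized case weights
    probs = rng.dirichlet(w * len(w) / w.sum())
    idx = rng.choice(len(w), size=N, replace=True, p=probs)
    return np.asarray(sample)[idx]
```

In the full method, many such synthetic populations would be generated and the usual multiple-imputation combining rules applied across them.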
Multistage point relascope and randomized branch sampling for downed coarse woody debris estimation
Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine
2002-01-01
New sampling methods have recently been introduced that allow estimation of downed coarse woody debris using an angle gauge, or relascope. The theory behind these methods is based on sampling straight pieces of downed coarse woody debris. When pieces deviate from this ideal situation, auxiliary methods must be employed. We describe a two-stage procedure where the...
USDA-ARS's Scientific Manuscript database
Two sampling techniques, agar extraction (AE) and centrifuge sugar flotation extraction (CSFE) were compared to determine their relative efficacy to recover immature stages of Culicoides spp from salt marsh substrates. Three types of samples (seeded with known numbers of larvae, homogenized field s...
A new method to obtain Fourier transform infrared spectra free from water vapor disturbance.
Chen, Yujing; Wang, Hai-Shui; Umemura, Junzo
2010-10-01
Infrared absorption bands due to water vapor in the mid-infrared region often obscure important spectral features of the sample. Here, we provide a novel method to collect an infrared spectrum free of water vapor interference. The scanning procedure for a single-beam spectrum of the sample is divided into two stages under an atmosphere with fluctuating humidity. In the first stage, the sample spectrum is measured with approximately the same number of scans as the background. If the absorbance of water vapor in the spectrum is positive (or negative) at the end of the first stage, then the relative humidity in the sample compartment of the spectrometer is changed by a dry (or wet) air blow at the start of the second stage while the measurement of the sample spectrum continues. After the relative humidity changes to a lower (or higher) level than that of the previously collected background spectrum, the water vapor peaks become smaller and smaller as the number of scans increases during the second stage. When the interfering water lines disappear from the spectrum, acquisition of the sample spectrum is terminated. In this way, water vapor interference can be removed completely.
Late-stage galaxy mergers in cosmos to z ∼ 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lackner, C. N.; Silverman, J. D.; Salvato, M.
2014-12-01
The role of major mergers in galaxy and black hole formation is not well-constrained. To help address this, we develop an automated method to identify late-stage galaxy mergers before coalescence of the galactic cores. The resulting sample of mergers is distinct from those obtained using pair-finding and morphological indicators. Our method relies on median-filtering of high-resolution images to distinguish two concentrated galaxy nuclei at small separations. This method does not rely on low surface brightness features to identify mergers, and is therefore reliable to high redshift. Using mock images, we derive statistical contamination and incompleteness corrections for the fraction of late-stage mergers. The mock images show that our method returns an uncontaminated (<10%) sample of mergers with projected separations between 2.2 and 8 kpc out to z ∼ 1. We apply our new method to a magnitude-limited (m_F814W < 23) sample of 44,164 galaxies from the COSMOS HST/ACS catalog. Using a mass-complete sample with log M∗/M⊙ > 10.6 and 0.25
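The core of the median-filtering idea can be sketched as follows: subtracting a median-filtered image suppresses the smooth galaxy light, leaving compact nuclei as positive residual peaks, so two peaks at small separation flag a late-stage merger candidate. The filter window and threshold below are illustrative placeholders, not the paper's calibrated values.

```python
import numpy as np
from scipy.ndimage import label, median_filter

def count_compact_nuclei(image, size=9, thresh=5.0):
    """Count compact peaks that survive median-filter subtraction.
    `size` (filter window) and `thresh` (residual cut) are placeholder
    values for illustration only."""
    # Smooth structure passes through the median filter; compact nuclei do not
    residual = image - median_filter(image, size=size)
    labeled, n_peaks = label(residual > thresh)
    return n_peaks
```

A merger finder along these lines would additionally require exactly two peaks within a projected-separation window (2.2-8 kpc in the paper) before flagging a candidate.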
Babu, Arun; Reisig, Dominic D
2018-05-29
Brown stink bug, Euschistus servus (Say) (Hemiptera: Pentatomidae), has emerged as a significant pest of corn, Zea mays L., in the southeastern United States. A 2-year study was conducted to quantify the within-plant vertical distribution of adult E. servus in field corn, to examine potential plant phenological characteristics associated with their observed distribution, and to select an efficient partial plant sampling method for adult E. servus population estimation. Within-plant distribution of adult E. servus was influenced by corn phenology. On V4- and V6-stage corn, most of the individuals were found at the base of the plant. The mean relative vertical position of the adult E. servus population in corn plants trended upward between the V6 and V14 growth stages. During the reproductive corn growth stages (R1, R2, and R4), a majority of the adult E. servus were concentrated around developing ears. Based on multiple selection criteria, during the V4-V6 corn growth stages, either the corn stalk below the lowest green leaf or the basal stratum method could be employed for efficient E. servus sampling. Similarly, in the reproductive corn growth stages (R1-R4), the plant parts between two leaves above and three leaves below the primary ear leaf provided the most precise and cost-efficient sampling. The results from our study demonstrate that in the early vegetative and reproductive stages of corn, scouts can replace the current labor-intensive whole-plant search method with a more efficient, specific partial plant sampling method for E. servus population estimation.
Li, Dalin; Lewinger, Juan Pablo; Gauderman, William J; Murcray, Cassandra Elizabeth; Conti, David
2011-12-01
Variants identified in recent genome-wide association studies based on the common-disease common-variant hypothesis are far from fully explaining the heritability of complex traits. Rare variants may, in part, explain some of the missing heritability. Here, we explored the advantage of extreme phenotype sampling in rare-variant analysis and refined this design framework for future large-scale association studies on quantitative traits. We first proposed a power calculation approach for a likelihood-based analysis method. We then used this approach to demonstrate the potential advantages of extreme phenotype sampling for rare variants. Next, we discussed how this design can influence future sequencing-based association studies from a cost-efficiency (with the phenotyping cost included) perspective. Moreover, we discussed the potential of a two-stage design with the extreme sample as the first stage and the remaining nonextreme subjects as the second stage. We demonstrated that this two-stage design is a cost-efficient alternative to the one-stage cross-sectional design or the traditional two-stage design. We then discussed the analysis strategies for this extreme two-stage design and proposed a corresponding design optimization procedure. To address many practical concerns, for example, measurement error or phenotypic heterogeneity at the very extremes, we examined an approach in which individuals with very extreme phenotypes are discarded. We demonstrated that even with a substantial proportion of these extreme individuals discarded, extreme-based sampling can still be more efficient. Finally, we expanded the current analysis and design framework to accommodate the CMC approach, where multiple rare variants in the same gene region are analyzed jointly. © 2011 Wiley Periodicals, Inc.
Lin, Wei; Feng, Rui; Li, Hongzhe
2014-01-01
In genetical genomics studies, it is important to jointly analyze gene expression data and genetic variants in exploring their associations with complex traits, where the dimensionality of gene expressions and genetic variants can both be much larger than the sample size. Motivated by such modern applications, we consider the problem of variable selection and estimation in high-dimensional sparse instrumental variables models. To overcome the difficulty of high dimensionality and unknown optimal instruments, we propose a two-stage regularization framework for identifying and estimating important covariate effects while selecting and estimating optimal instruments. The methodology extends the classical two-stage least squares estimator to high dimensions by exploiting sparsity through sparsity-inducing penalty functions in both stages. The resulting procedure is efficiently implemented by coordinate descent optimization. For the representative L1 regularization and a class of concave regularization methods, we establish estimation, prediction, and model selection properties of the two-stage regularized estimators in the high-dimensional setting where the dimensionalities of covariates and instruments are both allowed to grow exponentially with the sample size. The practical performance of the proposed method is evaluated by simulation studies, and its usefulness is illustrated by an analysis of mouse obesity data. Supplementary materials for this article are available online. PMID:26392642
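The classical two-stage least squares estimator that this framework penalizes can be sketched in low dimensions. The data below are synthetic (not the mouse obesity analysis), and the unpenalized stages are a simplification: the paper's method adds sparsity-inducing penalties to both stages.

```python
import numpy as np

# Low-dimensional, unpenalized 2SLS sketch. The hidden confounder u makes
# naive OLS biased, while projecting x onto the instruments z first
# recovers the true effect (2.0).
rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=(n, 2))                              # instruments
u = rng.normal(size=n)                                   # hidden confounder
x = z @ np.array([1.0, -0.5]) + u + rng.normal(size=n)   # endogenous covariate
y = 2.0 * x + u + rng.normal(size=n)

# Stage 1: project the endogenous covariate onto the instruments.
g, *_ = np.linalg.lstsq(z, x, rcond=None)
x_hat = z @ g
# Stage 2: regress the outcome on the stage-1 fitted values.
beta_2sls = float(x_hat @ y / (x_hat @ x_hat))
beta_ols = float(x @ y / (x @ x))                        # biased benchmark
print(f"2SLS {beta_2sls:.2f} vs OLS {beta_ols:.2f}")
```

The OLS estimate absorbs the confounding (here roughly 2 + 1/3), while the two-stage estimate stays near the true coefficient.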
Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.
Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong
2017-01-01
Leaf-based plant species recognition plays an important role in ecological protection; however, its application to large and modern leaf databases has been a long-standing obstacle due to computational cost and feasibility. Recognizing such limitations, we propose a Jaccard distance based sparse representation (JDSR) method, which adopts a two-stage, coarse-to-fine strategy for plant species recognition. In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage includes a Jaccard distance based weighted sparse representation based classification (WSRC), which aims to approximately represent the test sample in the training space and classify it by the approximation residuals. Since the training model of our JDSR method involves much fewer but more informative representatives, this method is expected to overcome the limitation of high computational and memory costs in traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature extraction and SRC-based plant recognition methods in terms of both accuracy and computational speed.
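The coarse stage of this strategy can be illustrated with a toy example: Jaccard distance on binarized feature vectors shortlists candidate classes before the (omitted) weighted sparse-representation fine stage. All names and feature values below are invented for illustration.

```python
# Coarse stage sketch: Jaccard distance shortlisting of candidate classes.
def jaccard_distance(a, b):
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return 1.0 if union == 0 else 1.0 - inter / union

def candidate_classes(test, train, k=2):
    """train: list of (label, binary features); keep the k nearest classes."""
    best = {}
    for label, feats in train:
        d = jaccard_distance(test, feats)
        best[label] = min(d, best.get(label, 1.0))
    return sorted(best, key=best.get)[:k]

train = [("oak", [1, 1, 0, 0]), ("elm", [0, 1, 1, 0]), ("fir", [0, 0, 1, 1])]
print(candidate_classes([1, 1, 1, 0], train, k=2))  # ['oak', 'elm']
```

Only the shortlisted classes would then enter the costlier sparse-representation stage, which is where the claimed speedup comes from.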
Sample Size Methods for Estimating HIV Incidence from Cross-Sectional Surveys
Brookmeyer, Ron
2015-01-01
Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this paper we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this paper at the Biometrics website on Wiley Online Library. PMID:26302040
Sample size methods for estimating HIV incidence from cross-sectional surveys.
Konikoff, Jacob; Brookmeyer, Ron
2015-12-01
Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
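The cross-sectional incidence approximation underlying these sample size methods can be shown with a worked number. The survey counts and the 130-day mean early-stage duration below are made-up values for illustration; the papers' methods additionally propagate the uncertainty in that duration.

```python
# Hedged numeric sketch of the cross-sectional incidence approximation:
# incidence ≈ (count in biomarker-defined early stage)
#             / (person-years at risk among the uninfected).
def incidence_per_year(n_early_stage, n_uninfected, mean_duration_days):
    person_years_at_risk = n_uninfected * (mean_duration_days / 365.0)
    return n_early_stage / person_years_at_risk

rate = incidence_per_year(n_early_stage=25, n_uninfected=4000,
                          mean_duration_days=130)
print(f"{rate:.4f} infections per person-year")  # 0.0175 infections per person-year
```

A shorter assumed window duration inflates the estimated rate for the same counts, which is why mis-specifying the duration leads to the imprecision the abstract warns about.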
Number of pins in two-stage stratified sampling for estimating herbage yield
William G. O' Regan; C. Eugene Conrad
1975-01-01
In a two-stage stratified procedure for sampling herbage yield, plots are stratified by a pin frame in stage one, and clipped. In stage two, clippings from selected plots are sorted, dried, and weighed. Sample size and distribution of plots between the two stages are determined by equations. A way to compute the effect of number of pins on the variance of estimated...
2014-01-01
Background: Cancer detection using sniffer dogs is a potential technology for clinical use and research. Our study sought to determine whether dogs could be trained to discriminate the odour of urine from men with prostate cancer from controls, using rigorous testing procedures and well-defined samples from a major research hospital. Methods: We attempted to train ten dogs by initially rewarding them for finding and indicating individual prostate cancer urine samples (Stage 1). If dogs were successful in Stage 1, we then attempted to train them to discriminate prostate cancer samples from controls (Stage 2). The number of samples used to train each dog varied depending on their individual progress. Overall, 50 unique prostate cancer samples and 67 controls were collected and used during training. Dogs that passed Stage 2 were tested for their ability to discriminate 15 (Test 1) or 16 (Tests 2 and 3) unfamiliar prostate cancer samples from 45 (Test 1) or 48 (Tests 2 and 3) unfamiliar controls under double-blind conditions. Results: Three dogs reached training Stage 2 and two of these learnt to discriminate potentially familiar prostate cancer samples from controls. However, during double-blind tests using new samples the two dogs did not indicate prostate cancer samples more frequently than expected by chance (Dog A sensitivity 0.13, specificity 0.71; Dog B sensitivity 0.25, specificity 0.75). The other dogs did not progress past Stage 1 as they did not have optimal temperaments for the sensitive odour discrimination training. Conclusions: Although two dogs appeared to have learnt to select prostate cancer samples during training, they did not generalise on a prostate cancer odour during robust double-blind tests involving new samples. Our study illustrates that these rigorous tests are vital to avoid drawing misleading conclusions about the abilities of dogs to indicate certain odours.
Dogs may memorise the individual odours of large numbers of training samples rather than generalise on a common odour. The results do not exclude the possibility that dogs could be trained to detect prostate cancer. We recommend that canine olfactory memory is carefully considered in all future studies and rigorous double-blind methods used to avoid confounding effects. PMID:24575737
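The double-blind test metrics quoted above follow the standard definitions. The counts in the example are hypothetical round numbers chosen only to illustrate the formulas (they happen to reproduce Dog B's quoted 0.25/0.75, but are not the study's actual tallies).

```python
# Standard diagnostic-test metrics:
#   sensitivity = TP / (TP + FN),  specificity = TN / (TN + FP).
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# Hypothetical counts: 3 of 12 cancer samples indicated, 36 of 48
# controls correctly ignored.
print(sensitivity(3, 9), specificity(36, 12))  # 0.25 0.75
```

At a one-in-four base rate of cancer samples per trial, a sensitivity of 0.25 is exactly what random indication would produce, which is the abstract's point about chance-level performance.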
A two-stage design for multiple testing in large-scale association studies.
Wen, Shu-Hui; Tzeng, Jung-Ying; Kao, Jau-Tsuen; Hsiao, Chuhsing Kate
2006-01-01
Modern association studies often involve a large number of markers and hence may encounter the problem of testing multiple hypotheses. Traditional procedures are usually over-conservative and have low power to detect mild genetic effects. From the design perspective, we propose a two-stage selection procedure to address this concern. Our main principle is to reduce the total number of tests by removing clearly unassociated markers in the first-stage test. Next, conditional on the findings of the first stage, which uses a less stringent nominal level, a more conservative test is conducted in the second stage using the augmented data and the data from the first stage. Previous studies have suggested using independent samples to avoid inflated errors. However, we found that, after accounting for the dependence between these two samples, the true discovery rate increases substantially. In addition, the cost of genotyping can be greatly reduced via this approach. Results from a study of hypertriglyceridemia and simulations suggest the two-stage method has a higher overall true positive rate (TPR) with a controlled overall false positive rate (FPR) when compared with single-stage approaches. We also report the analytical form of its overall FPR, which may be useful in guiding study design to achieve a high TPR while retaining the desired FPR.
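The screen-then-confirm logic can be sketched as follows. The thresholds and p-values are stand-ins, and the rule below is a simplification of the paper's procedure, which conditions the second-stage test on the first-stage results rather than applying a fixed cutoff.

```python
# Toy two-stage marker selection: screen at a lenient level in stage 1,
# then retest only the survivors at a stricter level using p-values from
# the augmented (stage 1 + stage 2) data.
def two_stage_select(p_stage1, combined_p, alpha1=0.2, alpha2=0.01):
    """Return indices of markers passing both stages."""
    survivors = [i for i, p in enumerate(p_stage1) if p < alpha1]
    return [i for i in survivors if combined_p[i] < alpha2]

p1 = [0.50, 0.15, 0.001, 0.30, 0.02]
p_combined = [0.40, 0.08, 0.0002, 0.25, 0.03]
print(two_stage_select(p1, p_combined))  # [2]
```

Markers 0 and 3 never reach stage 2, so only three second-stage tests are paid for, which is the genotyping-cost saving the abstract describes.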
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colby, Robert J.; Alsem, Daan H.; Liyu, Andrey V.
2015-06-01
The development of environmental transmission electron microscopy (TEM) has enabled in situ experiments in a gaseous environment with high resolution imaging and spectroscopy. Addressing scientific challenges in areas such as catalysis, corrosion, and geochemistry can require pressures much higher than the ~20 mbar achievable with a differentially pumped, dedicated environmental TEM. Gas flow stages, in which the environment is contained between two semi-transparent thin membrane windows, have been demonstrated at pressures of several atmospheres. While this constitutes significant progress towards operando measurements, the design of many current gas flow stages is such that the pressure at the sample cannot necessarily be directly inferred from the pressure differential across the system. Small differences in the setup and design of the gas flow stage can lead to very different sample pressures. We demonstrate a method for measuring the gas pressure directly, using a combination of electron energy loss spectroscopy and TEM imaging. This method requires only two energy filtered TEM images, limiting the measurement time to a few seconds and can be performed during an ongoing experiment at the region of interest. This approach provides a means to ensure reproducibility between different experiments, and even between very differently designed gas flow stages.
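A back-of-the-envelope version of this kind of log-ratio measurement can be sketched from first principles. This is not the paper's calibration: the cross-section, gas path length, intensities, and temperature below are all invented, and the relation assumes an ideal gas with a single effective inelastic cross-section.

```python
import math

# Sketch: the inelastic scattering ratio t/λ = ln(I_total / I_zero_loss)
# from two energy-filtered images, with λ = 1/(n·σ) and n = P/(kB·T),
# gives P = (t/λ) · kB · T / (σ · t).
KB = 1.380649e-23  # Boltzmann constant, J/K

def pressure_from_log_ratio(i_total, i_zero_loss, sigma_m2, path_m, temp_k=293.0):
    t_over_lambda = math.log(i_total / i_zero_loss)
    return t_over_lambda * KB * temp_k / (sigma_m2 * path_m)

p = pressure_from_log_ratio(i_total=1.0, i_zero_loss=0.8,
                            sigma_m2=5.0e-22, path_m=5.0e-6)
print(f"{p / 1e5:.2f} bar")  # 3.61 bar
```

The invented numbers land in the "several atmospheres" regime the abstract mentions; a real measurement would calibrate the effective cross-section for the specific gas and beam energy.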
Confidence Preserving Machine for Facial Action Unit Detection
Zeng, Jiabei; Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Xiong, Zhang
2016-01-01
Facial action unit (AU) detection from video has been a long-standing problem in automated facial expression analysis. While progress has been made, accurate detection of facial AUs remains challenging due to ubiquitous sources of errors, such as inter-personal variability, pose, and low-intensity AUs. In this paper, we refer to samples causing such errors as hard samples, and the remaining as easy samples. To address learning with the hard samples, we propose the Confidence Preserving Machine (CPM), a novel two-stage learning framework that combines multiple classifiers following an “easy-to-hard” strategy. During the training stage, CPM learns two confident classifiers. Each classifier focuses on separating easy samples of one class from all else, and thus preserves confidence on predicting each class. During the testing stage, the confident classifiers provide “virtual labels” for easy test samples. Given the virtual labels, we propose a quasi-semi-supervised (QSS) learning strategy to learn a person-specific (PS) classifier. The QSS strategy employs a spatio-temporal smoothness that encourages similar predictions for samples within a spatio-temporal neighborhood. In addition, to further improve detection performance, we introduce two CPM extensions: iCPM that iteratively augments training samples to train the confident classifiers, and kCPM that kernelizes the original CPM model to promote nonlinearity. Experiments on four spontaneous datasets GFT [15], BP4D [56], DISFA [42], and RU-FACS [3] illustrate the benefits of the proposed CPM models over baseline methods and state-of-the-art semisupervised learning and transfer learning methods. PMID:27479964
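The "easy-to-hard" virtual-labeling step can be illustrated with a toy rule. The thresholds and scores below are invented, and real CPM confident classifiers are trained margin-based models rather than this simple agreement check.

```python
# Toy sketch of virtual labeling: two confident one-vs-rest scores assign
# a label only where they agree a sample is easy; hard samples get None
# and are deferred to the quasi-semi-supervised (QSS) stage.
def virtual_labels(pos_scores, neg_scores, thresh=0.8):
    """Return +1/-1 for easy samples, None for hard ones."""
    labels = []
    for p, n in zip(pos_scores, neg_scores):
        if p >= thresh and n < thresh:
            labels.append(1)       # confidently AU-present
        elif n >= thresh and p < thresh:
            labels.append(-1)      # confidently AU-absent
        else:
            labels.append(None)    # hard sample: defer to QSS stage
    return labels

print(virtual_labels([0.9, 0.4, 0.85], [0.1, 0.9, 0.82]))  # [1, -1, None]
```

The QSS stage then propagates the confident labels to the deferred samples using the spatio-temporal smoothness assumption described in the abstract.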
Krecek, R C; Maingi, N
2004-07-14
A laboratory trial to determine the efficacy of two methods in recovering known numbers of third-stage (L3) strongylid nematode larvae from herbage was carried out. Herbage samples consisting almost entirely of star grass (Cynodon aethiopicus) and free of parasitic L3 nematode larvae were collected at Onderstepoort, South Africa. Two-hundred-gram samples were placed in fibreglass fly gauze bags and seeded with third-stage strongylid nematode larvae at 11 different levels of herbage infectivity ranging from 50 to 8000 L3/kg. Eight replicates were prepared for each of the 11 levels of herbage infectivity. Four of these were processed using a modified automatic Speed Queen heavy-duty washing machine at a regular normal cycle, followed by isolation of larvae through centrifugation-flotation in saturated sugar solution. Larvae in the other four samples were recovered by soaking the herbage in water overnight and isolating the larvae from the washings with the Baermann technique. There was a strong correlation between the number of larvae recovered using both methods and the number of larvae in the seeded samples, indicating that the two methods give a good indication of changes in the numbers of larvae on pasture if applied in epidemiological studies. The washing machine method recovered higher numbers of larvae than the soaking and Baermann method at all levels of pasture seeding, probably because the machine washed the samples more thoroughly and a sugar centrifugation-flotation step was used. Larval suspensions obtained using the washing machine method were therefore cleaner and thus easier to examine under the microscope. In contrast, the soaking and Baermann method may be more suitable in fieldwork, especially in places where resources and equipment are scarce, as it is less costly in equipment and less labour intensive. Neither method recovered all the larvae from the seeded samples.
The recovery rates for the washing machine method ranged from 18 to 41%, while those for the soaking and Baermann method ranged from 0 to 27%. Practical application of the two methods to estimate the number of nematode larvae on pastures without applying a correction factor would therefore result in a significant underestimation. This study provides a model that can be applied in various laboratories to determine the larval recovery rates of the techniques in use and to derive a correction factor for estimating the actual numbers of larvae on pasture.
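The correction-factor point can be shown with a worked number. The 25% recovery rate and the field count below are assumed round figures (25% lies within the 18-41% range reported for the washing machine method); they are not values from the study.

```python
# Worked sketch: dividing a field count by the method's known recovery
# rate corrects for the systematic underestimation.
def corrected_count(recovered_larvae, recovery_rate):
    return recovered_larvae / recovery_rate

# 120 larvae/kg recovered, assumed 25% recovery rate:
print(corrected_count(120, 0.25))  # 480.0 larvae/kg estimated on pasture
```

Without the correction, the pasture burden here would be underestimated fourfold, which is the magnitude of error the abstract warns about.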
Groskreutz, Stephen R.; Weber, Stephen G.
2016-01-01
In this work we characterize the development of a method to enhance temperature-assisted on-column solute focusing (TASF) called two-stage TASF. A new instrument was built to implement two-stage TASF consisting of a linear array of three independent, electronically controlled Peltier devices (thermoelectric coolers, TECs). Samples are loaded onto the chromatographic column with the first two TECs, TEC A and TEC B, cold. In the two-stage TASF approach TECs A and B are cooled during injection. TEC A is heated following sample loading. At some time following TEC A’s temperature rise, TEC B’s temperature is increased from the focusing temperature to a temperature matching that of TEC A. Injection bands are focused twice on-column, first on the initial TEC, e.g. single-stage TASF, then refocused on the second, cold TEC. Our goal is to understand the two-stage TASF approach in detail. We have developed a simple yet powerful digital simulation procedure to model the effect of changing temperature in the two focusing zones on retention, band shape and band spreading. The simulation can predict experimental chromatograms resulting from spatial and temporal temperature programs in combination with isocratic and solvent gradient elution. To assess the two-stage TASF method and the accuracy of the simulation well characterized solutes are needed. Thus, retention factors were measured at six temperatures (25–75 °C) at each of twelve mobile phase compositions (0.05–0.60 acetonitrile/water) for homologs of n-alkyl hydroxylbenzoate esters and n-alkyl p-hydroxyphenones. Simulations accurately reflect experimental results in showing that the two-stage approach improves separation quality. For example, two-stage TASF increased sensitivity for a low retention solute by a factor of 2.2 relative to single-stage TASF and 8.8 relative to isothermal conditions using isocratic elution. Gradient elution results for two-stage TASF were more encouraging.
Application of two-stage TASF increased peak height for the least retained solute in the test mixture by a factor of 3.2 relative to single-stage TASF and 22.3 compared to isothermal conditions for an injection four-times the column volume. TASF improved resolution and increased peak capacity; for a 12-minute separation peak capacity increased from 75 under isothermal conditions to 146 using single-stage TASF, and 185 for two-stage TASF. PMID:27836226
Groskreutz, Stephen R; Weber, Stephen G
2016-11-25
In this work we characterize the development of a method to enhance temperature-assisted on-column solute focusing (TASF) called two-stage TASF. A new instrument was built to implement two-stage TASF consisting of a linear array of three independent, electronically controlled Peltier devices (thermoelectric coolers, TECs). Samples are loaded onto the chromatographic column with the first two TECs, TEC A and TEC B, cold. In the two-stage TASF approach TECs A and B are cooled during injection. TEC A is heated following sample loading. At some time following TEC A's temperature rise, TEC B's temperature is increased from the focusing temperature to a temperature matching that of TEC A. Injection bands are focused twice on-column, first on the initial TEC, e.g. single-stage TASF, then refocused on the second, cold TEC. Our goal is to understand the two-stage TASF approach in detail. We have developed a simple yet powerful digital simulation procedure to model the effect of changing temperature in the two focusing zones on retention, band shape and band spreading. The simulation can predict experimental chromatograms resulting from spatial and temporal temperature programs in combination with isocratic and solvent gradient elution. To assess the two-stage TASF method and the accuracy of the simulation well characterized solutes are needed. Thus, retention factors were measured at six temperatures (25-75°C) at each of twelve mobile phase compositions (0.05-0.60 acetonitrile/water) for homologs of n-alkyl hydroxylbenzoate esters and n-alkyl p-hydroxyphenones. Simulations accurately reflect experimental results in showing that the two-stage approach improves separation quality. For example, two-stage TASF increased sensitivity for a low retention solute by a factor of 2.2 relative to single-stage TASF and 8.8 relative to isothermal conditions using isocratic elution. Gradient elution results for two-stage TASF were more encouraging.
Application of two-stage TASF increased peak height for the least retained solute in the test mixture by a factor of 3.2 relative to single-stage TASF and 22.3 compared to isothermal conditions for an injection four-times the column volume. TASF improved resolution and increased peak capacity; for a 12-min separation peak capacity increased from 75 under isothermal conditions to 146 using single-stage TASF, and 185 for two-stage TASF. Copyright © 2016 Elsevier B.V. All rights reserved.
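The physical reason a temperature jump sharpens focused bands can be sketched with a van't Hoff-like retention model. The coefficients below are invented for illustration and are not fitted to the paper's retention measurements.

```python
import math

# Hedged sketch: reversed-phase retention factors typically follow
# ln k = a + b / T, so jumping from a cold focusing zone to a hot elution
# zone drops k sharply, releasing the focused band. a and b are invented.
def retention_factor(temp_k, a=-4.0, b=2500.0):
    return math.exp(a + b / temp_k)

k_cold = retention_factor(278.0)  # 5 °C focusing zone
k_hot = retention_factor(348.0)   # 75 °C elution zone
print(f"k_cold {k_cold:.1f}, k_hot {k_hot:.1f}, ratio {k_cold / k_hot:.1f}")
```

With these toy coefficients the cold zone retains the solute about six times more strongly than the hot zone: strong retention concentrates the injected band, and the subsequent temperature rise releases it as a narrow plug.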
Femtosecond Chirp-Free Transient Absorption Method And Apparatus
McBranch, Duncan W.; Klimov, Victor I.
2001-02-20
A method and apparatus for femtosecond transient absorption comprising phase-sensitive detection, spectral scanning, and simultaneous control of a translation stage to obtain TA spectra with a sensitivity at least two orders of magnitude higher than that of single-shot methods, with direct, simultaneous compensation for chirp as the data are acquired. The present invention includes an amplified delay translation stage which generates a splittable frequency-doubled laser signal at a predetermined frequency f, a controllable means for synchronously modulating one of the laser signals at a repetition rate of f/2, applying the laser signals to a material to be sampled, and acquiring data from the excited sample while simultaneously controlling the controllable means for synchronous modulation.
Hyun, Noorie; Gastwirth, Joseph L; Graubard, Barry I
2018-03-26
Originally, 2-stage group testing was developed for efficiently screening individuals for a disease. In response to the HIV/AIDS epidemic, 1-stage group testing was adopted for estimating prevalences of a single trait or multiple traits from testing groups of size q, so individuals were not tested. This paper extends the methodology of 1-stage group testing to surveys with sample-weighted complex multistage-cluster designs. Sample-weighted generalized estimating equations are used to estimate the prevalences of categorical traits while accounting for the error rates inherent in the tests. Two difficulties arise when using group testing in complex samples: (1) how does one weight the results of the test on each group, as the sample weights will differ among observations in the same group? Furthermore, if the sample weights are related to positivity of the diagnostic test, then group-level weighting is needed to reduce bias in the prevalence estimation. (2) How does one form groups that will allow accurate estimation of the standard errors of prevalence estimates under multistage-cluster sampling, allowing for intracluster correlation of the test results? We study 5 different grouping methods to address the weighting and cluster-sampling aspects of complex designed samples. Finite-sample properties of the estimators of prevalences, variances, and confidence interval coverage for these grouping methods are studied using simulations. National Health and Nutrition Examination Survey data are used to illustrate the methods. Copyright © 2018 John Wiley & Sons, Ltd.
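The classic unweighted, perfect-test version of the 1-stage group-testing estimator that this paper generalizes can be shown in a few lines. The counts below are invented, and real surveys would use the paper's weighted GEE machinery rather than this simple formula.

```python
# Classic 1-stage group-testing prevalence estimator with groups of size q:
#   p_hat = 1 - (1 - fraction of positive groups)^(1/q),
# assuming a perfect test, equal weights, and independent individuals.
def group_testing_prevalence(n_positive_groups, n_groups, q):
    frac_pos = n_positive_groups / n_groups
    return 1.0 - (1.0 - frac_pos) ** (1.0 / q)

p_hat = group_testing_prevalence(n_positive_groups=12, n_groups=100, q=5)
print(f"{p_hat:.4f}")  # 0.0252
```

Testing 100 pools of 5 here replaces 500 individual tests, which is the efficiency motivation; unequal sample weights and intracluster correlation are the complications the paper addresses.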
Bowyer, A E; Hillarp, A; Ezban, M; Persson, P; Kitchen, S
2016-07-01
Essentials: Validated assays are required to precisely measure factor IX (FIX) activity in FIX products. N9-GP and two other FIX products were assessed in various coagulation assay systems at two sites. Large variations in FIX activity measurements were observed for N9-GP using some assays. One-stage and chromogenic assays accurately measuring FIX activity for N9-GP were identified. Background: Measurement of factor IX activity (FIX:C) with activated partial thromboplastin time-based one-stage clotting assays is associated with a large degree of interlaboratory variation in samples containing glycoPEGylated recombinant FIX (rFIX), i.e., nonacog beta pegol (N9-GP). Validation and qualification of specific assays and conditions are necessary for the accurate assessment of FIX:C in samples containing N9-GP. Objectives: To assess the accuracy of various one-stage clotting and chromogenic assays for measuring FIX:C in samples containing N9-GP as compared with samples containing rFIX or plasma-derived FIX (pdFIX) across two laboratory sites. Methods: FIX:C, in severe hemophilia B plasma spiked with a range of concentrations (from very low, i.e., 0.03 IU mL⁻¹, to high, i.e., 0.90 IU mL⁻¹) of N9-GP, rFIX (BeneFIX), and pdFIX (Mononine), was determined at two laboratory sites with 10 commercially available one-stage clotting assays and two chromogenic FIX:C assays. Assays were performed with a plasma calibrator and different analyzers. Results: A high degree of variation in FIX:C measurement was observed for one-stage clotting assays for N9-GP as compared with rFIX or pdFIX. Acceptable N9-GP recovery was observed in the low-concentration to high-concentration samples tested with one-stage clotting assays using SynthAFax or DG Synth, or with chromogenic FIX:C assays. Similar patterns of FIX:C measurement were observed at both laboratory sites, with minor differences probably being attributable to the use of different analyzers.
Conclusions: These results suggest that, of the reagents tested, FIX:C in N9-GP-containing plasma samples can be most accurately measured with one-stage clotting assays using SynthAFax or DG Synth, or with chromogenic FIX:C assays. © 2016 International Society on Thrombosis and Haemostasis.
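The recovery comparison behind this assay validation reduces to a simple ratio. The 0.90 IU/mL nominal level is from the abstract's spiking range, but the measured value below is invented to illustrate the calculation.

```python
# Recovery calculation sketch: measured activity as a percentage of the
# nominal spiked activity.
def percent_recovery(measured_iu_ml, nominal_iu_ml):
    return 100.0 * measured_iu_ml / nominal_iu_ml

# Hypothetical: an assay reads 0.72 IU/mL in a 0.90 IU/mL spiked sample.
print(round(percent_recovery(0.72, 0.90), 1))  # 80.0
```

Assays whose recovery stays near 100% across the whole spiking range (0.03 to 0.90 IU/mL) are the ones the study deems acceptable for N9-GP.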
NASA Technical Reports Server (NTRS)
Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.
1996-01-01
We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.
NASA Astrophysics Data System (ADS)
Szlązak, Nikodem; Korzec, Marek
2016-06-01
Methane adversely affects safety in underground mines, as it is emitted into the air during mining works. Appropriate identification of the methane hazard is essential to determining methane hazard prevention methods, ventilation systems, and methane drainage systems. The methane hazard is identified while roadways are driven and boreholes are drilled. Coalbed methane content is one of the parameters used to assess this threat. This is a requirement according to the Decree of the Minister of Economy dated 28 June 2002 on work safety and hygiene, operation and special firefighting protection in underground mines. For this purpose a new method for determining coalbed methane content in underground coal mines has been developed. This method consists of two stages: collecting samples in a mine and testing them in the laboratory. The laboratory stage of determining the methane content of a coal sample is essential. This article presents an estimate of the measurement uncertainty of determining methane content in a coal sample according to this methodology.
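A standard uncertainty budget of the kind estimated here combines independent components in quadrature. The component values and units below are invented placeholders; the paper's actual budget and coverage factor may differ.

```python
import math

# Combined standard uncertainty for independent components:
#   u_c = sqrt(u1^2 + u2^2 + ...),  expanded uncertainty U = k * u_c.
def combined_uncertainty(components, coverage_factor=2.0):
    u_c = math.sqrt(sum(u * u for u in components))
    return u_c, coverage_factor * u_c

# Hypothetical components (e.g. weighing, gas volume, chromatography):
u_c, U = combined_uncertainty([0.03, 0.04, 0.12])
print(f"u_c = {u_c:.4f}, U (k=2) = {U:.4f}")  # u_c = 0.1300, U (k=2) = 0.2600
```

Note how the largest component dominates: halving the two small contributions would barely change the combined value, which is why uncertainty budgets focus effort on the dominant term.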
Espino, L; Way, M O; Wilson, L T
2008-02-01
Commercial rice, Oryza sativa L., fields in southeastern Texas were sampled during 2003 and 2004, and visual samples were compared with sweep net samples. Fields were sampled at different stages of panicle development, at different times of day, and by different operators. Significant differences were found between perimeter and within-field sweep net samples, indicating that samples taken 9 m from the field margin overestimate within-field Oebalus pugnax (F.) (Hemiptera: Pentatomidae) populations. Time of day did not significantly affect the number of O. pugnax caught with the sweep net; however, there was a trend to capture more insects during the morning than the afternoon. For all sampling methods evaluated during this study, O. pugnax was found to have an aggregated spatial pattern at most densities. When comparing sweep net with visual sampling methods, one sweep of the "long stick" and two sweeps of the "sweep stick" correlated well with the sweep net (r² = 0.639 and r² = 0.815, respectively). This relationship was not affected by time of day of sampling, stage of panicle development, type of planting, or operator. Relative cost-reliability, which incorporates probability of adoption, indicates the visual methods are more cost-reliable than the sweep net for sampling O. pugnax.
The impact of sample non-normality on ANOVA and alternative methods.
Lantz, Björn
2013-05-01
In this journal, Zimmerman (2004, 2011) has discussed preliminary tests that researchers often use to choose an appropriate method for comparing locations when the assumption of normality is doubtful. The conceptual problem with this approach is that such a two-stage process makes both the power and the significance of the entire procedure uncertain, as type I and type II errors are possible at both stages. A type I error at the first stage, for example, will obviously increase the probability of a type II error at the second stage. Based on the idea of Schmider et al. (2010), which proposes that simulated sets of sample data be ranked with respect to their degree of normality, this paper investigates the relationship between population non-normality and sample non-normality with respect to the performance of the ANOVA, Brown-Forsythe test, Welch test, and Kruskal-Wallis test when used with different distributions, sample sizes, and effect sizes. The overall conclusion is that the Kruskal-Wallis test is considerably less sensitive to the degree of sample normality when populations are distinctly non-normal and should therefore be the primary tool used to compare locations when it is known that populations are not at least approximately normal. © 2012 The British Psychological Society.
Chen, Zhijian; Craiu, Radu V; Bull, Shelley B
2014-11-01
In focused studies designed to follow up associations detected in a genome-wide association study (GWAS), investigators can proceed to fine-map a genomic region by targeted sequencing or dense genotyping of all variants in the region, aiming to identify a functional sequence variant. For the analysis of a quantitative trait, we consider a Bayesian approach to fine-mapping study design that incorporates stratification according to a promising GWAS tag SNP in the same region. Improved cost-efficiency can be achieved when the fine-mapping phase incorporates a two-stage design, with identification of a smaller set of more promising variants in a subsample taken in stage 1, followed by their evaluation in an independent stage 2 subsample. To avoid the potential negative impact of genetic model misspecification on inference we incorporate genetic model selection based on posterior probabilities for each competing model. Our simulation study shows that, compared to simple random sampling that ignores genetic information from GWAS, tag-SNP-based stratified sample allocation methods reduce the number of variants continuing to stage 2 and are more likely to promote the functional sequence variant into confirmation studies. © 2014 WILEY PERIODICALS, INC.
Maurer, Willi; Jones, Byron; Chen, Ying
2018-05-10
In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided tests (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size reestimation after the first stage, using an interim estimate of the within-subject variability. One method and three variations of doing this were proposed by Potvin et al. Using simulation, Potvin et al assessed the operating characteristics, including the empirical type I error rate, of the four variations (called Methods A, B, C, and D), and Methods B and C were recommended. However, none of these four variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size reestimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size reestimation step is based on significance levels and power requirements that are conditional on the first-stage results.
This necessitates discussing and exploiting the particular properties of the power curve of the TOST procedure. We illustrate our approach with an example based on a real ABE study and compare the operating characteristics of our proposed method with those of Method B of Potvin et al. Copyright © 2018 John Wiley & Sons, Ltd.
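The inverse-normal combination of stage-wise p-values underlying such two-stage designs can be sketched as follows. This is a minimal illustration, not the authors' full procedure (their method additionally chooses stage-wise significance levels via group sequential boundaries); the key point is that pre-specified weights with w1² + w2² = 1 make the combined statistic standard normal under the null, regardless of the reestimated stage-2 sample size:

```python
import math
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=math.sqrt(0.5)):
    """Combine two stage-wise one-sided p-values into one test statistic.
    The weights are fixed before stage 2 and satisfy w1**2 + w2**2 = 1,
    so Z is standard normal under the null whatever the reestimated
    stage-2 sample size turns out to be."""
    nd = NormalDist()
    w2 = math.sqrt(1.0 - w1 ** 2)
    z = w1 * nd.inv_cdf(1.0 - p1) + w2 * nd.inv_cdf(1.0 - p2)
    return z, 1.0 - nd.cdf(z)  # combined statistic and combined p-value
```

With equal weights and p1 = p2 = 0.05, the combined statistic is about 2.33, i.e., the combined evidence is stronger than either stage alone.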
Users guide for noble fir bough cruiser.
Roger D. Fight; Keith A. Blatner; Roger C. Chapman; William E. Schlosser
2005-01-01
The bough cruiser spreadsheet was developed to provide a method for cruising noble fir (Abies procera Rehd.) stands to estimate the weight of boughs that might be harvested. No boughs are cut as part of the cruise process. The approach is based on a two-stage sample. The first stage consists of fixed-radius plots that are used to estimate the...
Samanipour, Saer; Baz-Lomba, Jose A; Alygizakis, Nikiforos A; Reid, Malcolm J; Thomaidis, Nikolaos S; Thomas, Kevin V
2017-06-09
LC-HR-QTOF-MS has recently become a commonly used approach for the analysis of complex samples. However, identification of small organic molecules in complex samples with the highest level of confidence is a challenging task. Here we report on the implementation of a two-stage algorithm for LC-HR-QTOF-MS datasets. We compared the performance of the two-stage algorithm, implemented via NIVA_MZ_Analyzer™, with two commonly used approaches (i.e. feature detection and XIC peak picking, implemented via UNIFI by Waters and TASQ by Bruker, respectively) for the suspect analysis of four influent wastewater samples. We first evaluated the cross-platform compatibility of LC-HR-QTOF-MS datasets generated via instruments from two different manufacturers (i.e. Waters and Bruker). Our data showed that with an appropriate spectral weighting function the spectra recorded by the two tested instruments are comparable for our analytes. As a consequence, we were able to perform full spectral comparison between the data generated via the two studied instruments. Four extracts of wastewater influent were analyzed for 89 analytes, giving 356 detection cases. The analytes were divided into 158 detection cases of artificial suspect analytes (i.e. verified by target analysis) and 198 true suspects. The two-stage algorithm resulted in a zero rate of false positive detection, based on the artificial suspect analytes, while producing a rate of false negative detection of 0.12. For the conventional approaches, the rates of false positive detection varied between 0.06 for UNIFI and 0.15 for TASQ. The rates of false negative detection for these methods ranged between 0.07 for TASQ and 0.09 for UNIFI. The effect of background signal complexity on the two-stage algorithm was evaluated through the generation of a synthetic signal. We further discuss the boundaries of applicability of the two-stage algorithm.
The importance of background knowledge and experience in evaluating the reliability of results during the suspect screening was evaluated. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Hara, K.; Okuyama, E.; Yonemura, A.; Uchida, T.; Okamoto, N.
2006-09-01
The particle formation and the doping of luminescent impurities during the two-stage vapor-phase synthesis of GaN powder were analyzed. GaN particles grew very rapidly during the second stage of this method, and the increase in particle size was larger at higher reaction temperatures in the region between 800 and 1000 °C. Analysis of the particle-growth behaviour based on reaction kinetics suggested that at 1000 °C the growth is essentially complete within a few seconds, at an extremely high rate in the early stage, whereas at lower synthesis temperatures the growth continues at relatively low rates for longer than the nominal growth duration. GaN powders doped with various impurity atoms were synthesized by supplying impurity sources together with GaCl during the second stage. The samples doped with Zn, Mg and Tb showed emissions characteristic of each doped impurity.
A Method of Visualizing Three-Dimensional Distribution of Yeast in Bread Dough
NASA Astrophysics Data System (ADS)
Maeda, Tatsurou; Do, Gab-Soo; Sugiyama, Junichi; Oguchi, Kosei; Shiraga, Seizaburou; Ueda, Mitsuyoshi; Takeya, Koji; Endo, Shigeru
A novel technique was developed to monitor the change in three-dimensional (3D) distribution of yeast in frozen bread dough samples in accordance with the progress of mixing process. Application of a surface engineering technology allowed the identification of yeast in bread dough by bonding EGFP (Enhanced Green Fluorescent Protein) to the surface of yeast cells. The fluorescent yeast (a biomarker) was recognized as bright spots at the wavelength of 520 nm. A Micro-Slicer Image Processing System (MSIPS) with a fluorescence microscope was utilized to acquire cross-sectional images of frozen dough samples sliced at intervals of 1 μm. A set of successive two-dimensional images was reconstructed to analyze 3D distribution of yeast. Samples were taken from each of four normal mixing stages (i.e., pick up, clean up, development, and final stages) and also from over mixing stage. In the pick up stage yeast distribution was uneven with local areas of dense yeast. As the mixing progressed from clean up to final stages, the yeast became more evenly distributed throughout the dough sample. However, the uniformity in yeast distribution was lost in the over mixing stage possibly due to the breakdown of gluten structure within the dough sample.
Gao, Yi Qin
2008-04-07
Here, we introduce a simple self-adaptive computational method to enhance the sampling in energy, configuration, and trajectory spaces. The method makes use of two strategies. It first uses a non-Boltzmann distribution method to enhance the sampling in the phase space, in particular, in the configuration space. The application of this method leads to a broad energy distribution in a large energy range and a quickly converged sampling of molecular configurations. In the second stage of simulations, the configuration space of the system is divided into a number of small regions according to preselected collective coordinates. An enhanced sampling of reactive transition paths is then performed in a self-adaptive fashion to accelerate kinetics calculations.
Two-Stage Variable Sample-Rate Conversion System
NASA Technical Reports Server (NTRS)
Tkacenko, Andre
2009-01-01
A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
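The described interface can be illustrated with a deliberately simplified sketch. This is our own assumed structure, not the NASA design: stage 1 applies a moving-average anti-alias filter and decimates by an integer factor, and stage 2 handles the remaining, possibly irrational, ratio by linear interpolation:

```python
def two_stage_src(x, decim, ratio):
    """Two-stage sample-rate conversion sketch (illustrative only).
    Stage 1: moving-average anti-alias filter + integer decimation.
    Stage 2: fractional resampling by linear interpolation, where
    `ratio` is the step in stage-1 samples per output sample."""
    # stage 1: causal moving average of length `decim`, then decimate
    filt = [sum(x[max(0, i - decim + 1):i + 1]) / min(i + 1, decim)
            for i in range(len(x))]
    stage1 = filt[::decim]
    # stage 2: walk through stage-1 samples in steps of `ratio`
    out, t = [], 0.0
    while t <= len(stage1) - 1:
        i = int(t)
        frac = t - i
        nxt = stage1[min(i + 1, len(stage1) - 1)]
        out.append(stage1[i] * (1 - frac) + nxt * frac)
        t += ratio
    return out
```

In a real receiver the second stage would use a polyphase or Farrow-structure interpolator rather than linear interpolation, but the division of labour between a cheap integer stage and a flexible fractional stage is the same.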
Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach
NASA Astrophysics Data System (ADS)
Tsai, Bi-Huei; Chang, Chih-Huei
2009-08-01
Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distress cut-off indicator should shift with economic prosperity rather than remain fixed over time. This study focuses on Taiwanese listed firms and develops financial distress prediction models based upon a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper further incorporates macroeconomic factors and applies a rating transition matrix approach to determine the distress cut-off indicator. The prediction models are developed using a training sample from 1987 to 2004, and their levels of accuracy are compared on a test sample from 2005 to 2007. As for the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than that without them, suggesting that accuracy is not improved for one-stage models that pool the firm-specific and macroeconomic factors together. Regarding the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distress cut-off point is adjusted upward based on this negative credit cycle index. When the two-stage models employ this adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error is lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.
Kinetic analysis of manure pyrolysis and combustion processes.
Fernandez-Lopez, M; Pedrosa-Castro, G J; Valverde, J L; Sanchez-Silva, L
2016-12-01
Due to the depletion of fossil fuel reserves and the environmental issues derived from their use, biomass seems to be an excellent source of renewable energy. In this work, the kinetics of the pyrolysis and combustion of three different biomass waste samples (two dairy manure samples before (Pre) and after (Dig R) anaerobic digestion and one swine manure sample (SW)) was studied by means of thermogravimetric analysis. Three iso-conversional methods (Friedman, Flynn-Wall-Ozawa (FWO) and Kissinger-Akahira-Sunose (KAS)) were compared with the Coats-Redfern method. The Ea values of the devolatilization stages were in the range of 152-170 kJ/mol, 148-178 kJ/mol and 156-209 kJ/mol for samples Pre, Dig R and SW, respectively. Concerning the combustion process, the char oxidation stages showed lower Ea values than the combustion devolatilization stage, being in the range of 140-175 kJ/mol, 178-199 kJ/mol and 122-144 kJ/mol for samples Pre, Dig R and SW, respectively. These results were practically the same for samples Pre and Dig R, which means that the kinetics of the thermochemical processes were not affected by anaerobic digestion. Finally, the distributed activation energy model (DAEM) and the pseudo-multi-component stage model (PMSM) were applied to predict the weight loss curves of pyrolysis and combustion. DAEM was the model that best fitted the experimental data. Copyright © 2016 Elsevier Ltd. All rights reserved.
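As an illustration of the iso-conversional idea, a minimal Flynn-Wall-Ozawa estimate of Ea at a single conversion level can be sketched as follows. The Doyle-approximation form is standard, but the function and variable names are ours, and this is an illustrative sketch rather than the authors' code:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fwo_activation_energy(betas, temps_K):
    """Flynn-Wall-Ozawa estimate of Ea at one fixed conversion level.
    betas: heating rates; temps_K: temperature at which that conversion
    is reached under each heating rate. With Doyle's approximation,
    ln(beta) = C - 1.052 * Ea / (R * T), so the least-squares slope of
    ln(beta) versus 1/T equals -1.052 * Ea / R."""
    x = [1.0 / t for t in temps_K]
    y = [math.log(b) for b in betas]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return -slope * R / 1.052  # Ea in J/mol
```

Repeating the fit across a grid of conversion levels yields the Ea ranges reported in the abstract; the Friedman and KAS methods differ only in the linearized form that is regressed.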
Optically-sectioned two-shot structured illumination microscopy with Hilbert-Huang processing.
Patorski, Krzysztof; Trusiak, Maciej; Tkaczyk, Tomasz
2014-04-21
We introduce a fast, simple, adaptive and experimentally robust method for reconstructing background-rejected optically-sectioned images using two-shot structured illumination microscopy. Our demodulation method needs two grid-illumination images mutually phase-shifted by π (half a grid period), but precise phase displacement between the two frames is not required. Upon frame subtraction, an input pattern with increased grid modulation is obtained. The first demodulation stage comprises two-dimensional data processing based on empirical mode decomposition for object spatial frequency selection (noise reduction and bias term removal). The second stage consists of calculating a high-contrast image using the two-dimensional spiral Hilbert transform. The effectiveness of our algorithm is compared with results calculated for the same input data using structured-illumination microscopy (SIM) and HiLo microscopy methods. The input data were collected from highly scattering tissue samples studied in reflectance mode. Results of our approach compare very favorably with the SIM and HiLo techniques.
Unbeck, Maria; Schildmeijer, Kristina; Henriksson, Peter; Jürgensen, Urban; Muren, Olav; Nilsson, Lena; Pukk Härenstam, Karin
2013-04-15
There has been a theoretical debate as to which retrospective record review method is the most valid, reliable, cost-efficient and feasible for detecting adverse events. The aim of the present study was to evaluate the feasibility and capability of two common retrospective record review methods, the "Harvard Medical Practice Study" method and the "Global Trigger Tool", in detecting adverse events in adult orthopaedic inpatients. We performed a three-stage structured retrospective record review process on a random sample of 350 orthopaedic admissions during 2009 at a Swedish university hospital. Two teams, each comprising a registered nurse and two physicians, were assigned, one to each method. All records were primarily reviewed by the registered nurses. Records containing a potential adverse event were forwarded to the physicians for review in stage 2. The physicians made an independent review regarding, for example, healthcare causation, preventability and severity. In the third review stage, all adverse events found with the two methods together were compared and all discrepancies remaining after review stage 2 were analysed. Events that had not been identified by one of the methods in the first two review stages were reviewed by the respective physicians. Altogether, 160 different adverse events were identified in 105 (30.0%) of the 350 records with both methods combined. The "Harvard Medical Practice Study" method identified 155 of the 160 (96.9%, 95% CI: 92.9-99.0) adverse events in 104 (29.7%) records, compared with 137 (85.6%, 95% CI: 79.2-90.7) adverse events in 98 (28.0%) records using the "Global Trigger Tool". Adverse events "causing harm without permanent disability" accounted for most of the observed difference. The overall positive predictive value for criteria and triggers using the "Harvard Medical Practice Study" method and the "Global Trigger Tool" was 40.3% and 30.4%, respectively.
More adverse events were identified using the "Harvard Medical Practice Study" method than using the "Global Trigger Tool". Differences in review methodology, perception of less severe adverse events and context knowledge may explain the observed difference between two expert review teams in the detection of adverse events.
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
A study of medical expenditure and its influencing factors among students enrolled in the Urban Resident Basic Medical Insurance (URBMI) scheme in Taiyuan indicated that non-response bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing-data mechanism, this study proposes a two-stage method that handles both mechanisms simultaneously by combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. Among the returned questionnaires, 2.52% exhibited not-missing-at-random (NMAR) missingness in the dependent variable and 7.14% exhibited missing-at-random (MAR) missingness. First, multiple imputation was conducted for the MAR values using the completed data; then a sample selection model was used to correct for NMAR in the multiple imputation, and a multi-factor analysis model was established. Based on 1 000 resampling replicates, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, a two-stage survey was conducted. Finally, it was found that the influencing factors on annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for some reason, self-medication and acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can effectively address non-response bias and selection bias in the dependent variable of survey data.
ERIC Educational Resources Information Center
Sebro, Negusse Yohannes; Goshu, Ayele Taye
2017-01-01
This study aims to explore Bayesian multilevel modeling to investigate variations of average academic achievement of grade eight school students. A sample of 636 students is randomly selected from 26 private and government schools by a two-stage stratified sampling design. Bayesian method is used to estimate the fixed and random effects. Input and…
Han, Yang; Hou, Shao-Yang; Ji, Shang-Zhi; Cheng, Juan; Zhang, Meng-Yue; He, Li-Juan; Ye, Xiang-Zhong; Li, Yi-Min; Zhang, Yi-Xuan
2017-11-15
A novel method, real-time reverse transcription PCR (real-time RT-PCR) coupled with probe-melting curve analysis, has been established to detect two kinds of samples within one fluorescence channel. Besides a conventional TaqMan probe, this method employs a specially designed melting-probe with a 5'-terminus modification that carries the same fluorescent label. By using an asymmetric PCR method, the melting-probe can effectively detect an additional sample in the melting stage while having little influence on the amplification detection. Thus, this method allows both the amplification stage and the melting stage to be employed together for detecting samples in one reaction. A further demonstration, using simultaneous detection of human immunodeficiency virus (HIV) and hepatitis C virus (HCV) in one channel as a model system, is presented in this paper. The sensitivity of detection by real-time RT-PCR coupled with probe-melting analysis was shown to be equal to that of conventional real-time RT-PCR. Because real-time RT-PCR coupled with probe-melting analysis can double the detection throughput within one fluorescence channel, it is expected to be a good solution to the low-throughput problem of current real-time PCR. Copyright © 2017 Elsevier Inc. All rights reserved.
Davis, Rosemary H; Valadez, Joseph J
2014-12-01
Second-stage sampling techniques, including spatial segmentation, are widely used in community health surveys when reliable household sampling frames are not available. In India, an unresearched technique for household selection is used in eight states, which samples the house with the last marriage or birth as the starting point. Users question whether this last-birth or last-marriage (LBLM) approach introduces bias affecting survey results. We conducted two simultaneous population-based surveys. One used segmentation sampling; the other used LBLM. LBLM sampling required modification before assessment was possible and a more systematic approach was tested using last birth only. We compared coverage proportions produced by the two independent samples for six malaria indicators and demographic variables (education, wealth and caste). We then measured the level of agreement between the caste of the selected participant and the caste of the health worker making the selection. No significant difference between methods was found for the point estimates of six malaria indicators, education, caste or wealth of the survey participants (range of P: 0.06 to >0.99). A poor level of agreement occurred between the caste of the health worker used in household selection and the caste of the final participant, (Κ = 0.185), revealing little association between the two, and thereby indicating that caste was not a source of bias. Although LBLM was not testable, a systematic last-birth approach was tested. If documented concerns of last-birth sampling are addressed, this new method could offer an acceptable alternative to segmentation in India. However, inter-state caste variation could affect this result. Therefore, additional assessment of last birth is required before wider implementation is recommended. Published by Oxford University Press in association with The London School of Hygiene and Tropical Medicine © The Author 2013; all rights reserved.
NASA Astrophysics Data System (ADS)
Guillou, H.; Carracedo, J.; Perez Torrado, F.
2003-12-01
The combined use of radioisotopic dating, magnetostratigraphy and field geology is a powerful tool for establishing reliable chronological frameworks of volcanic edifices. This approach has been used to investigate the last two stages of the volcanic evolution of Gran Canaria. Fifty samples were dated using the unspiked K-Ar method and had their magnetic polarity measured both in the field and in the laboratory. Ages were compared with their stratigraphic positions and magnetic polarities before their validity was accepted. The unspiked K-Ar chronology constrains the timing of lateral collapses, eruption rates and the contemporaneity of different volcano-magmatic stages on Gran Canaria. Our new data set significantly modifies the previous chronological framework of Gran Canaria, especially between 4 and 2.8 Ma. Based on these new ages, we can bracket the age of the multiple lateral collapses of the Roque Nublo stratovolcano flanks between 3.5 and 3.1 Ma. This time interval corresponds to a main period of volcanic quiescence. Calculated eruptive rates during the stratovolcano edification are about 0.1 km3/kyr, which is significantly lower than published estimates. The dating also reveals that the last two main stages are not separated by a major time gap; rather, the early stages of the rift-forming eruptions and the vanishing activity of the Roque Nublo stratovolcano were contemporaneous for at least 600 kyr. These results show that our combined approach provides a rapid, first-pass and reliable geochronology. Nevertheless, this chronology can be extended and made more precise where necessary through detailed Ar-Ar incremental-heating methods. Samples that should be investigated using this method are the oldest and youngest K-Ar dated flows of each volcanic stage, and samples from stratigraphic sections that hold potential for studying the behaviour of the Earth's magnetic field during reversals (Gauss-Gilbert transition, Olduvai and Reunion events).
Effect of two-stage sintering on dielectric properties of BaTi0.9Zr0.1O3 ceramics
NASA Astrophysics Data System (ADS)
Rani, Rekha; Rani, Renu; Kumar, Parveen; Juneja, J. K.; Raina, K. K.; Prakash, Chandra
2011-09-01
The effect of two-stage sintering on the dielectric properties of BaTi0.9Zr0.1O3 ceramics prepared by the solid state route was investigated and is presented here. It was found that under suitable two-stage sintering conditions, dense BaTi0.9Zr0.1O3 ceramics with improved electrical properties can be synthesized. The density was 5.49 g/cc for the normally sintered sample, whereas for the two-stage sintered sample it was 5.85 g/cc. Dielectric measurements were performed as a function of frequency and temperature. A small decrease in the Curie temperature was observed, with a modification in dielectric loss, for the two-stage sintered ceramic samples.
Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R
2016-04-15
A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.
Kent, Peter; Stochkendahl, Mette Jensen; Christensen, Henrik Wulff; Kongsted, Alice
2015-01-01
Recognition of homogeneous subgroups of patients can usefully improve prediction of their outcomes and the targeting of treatment. There are a number of research approaches that have been used to recognise homogeneity in such subgroups and to test their implications. One approach is to use statistical clustering techniques, such as Cluster Analysis or Latent Class Analysis, to detect latent relationships between patient characteristics. Influential patient characteristics can come from diverse domains of health, such as pain, activity limitation, physical impairment, social role participation, psychological factors, biomarkers and imaging. However, such 'whole person' research may result in data-driven subgroups that are complex, difficult to interpret and challenging to recognise clinically. This paper describes a novel approach to applying statistical clustering techniques that may improve the clinical interpretability of derived subgroups and reduce sample size requirements. This approach involves clustering in two sequential stages. The first stage involves clustering within health domains and therefore requires creating as many clustering models as there are health domains in the available data. This first stage produces scoring patterns within each domain. The second stage involves clustering using the scoring patterns from each health domain (from the first stage) to identify subgroups across all domains. We illustrate this using chest pain data from the baseline presentation of 580 patients. The new two-stage clustering resulted in two subgroups that approximated the classic textbook descriptions of musculoskeletal chest pain and atypical angina chest pain. The traditional single-stage clustering resulted in five clusters that were also clinically recognisable but displayed less distinct differences. In this paper, a new approach to using clustering techniques to identify clinically useful subgroups of patients is suggested. 
Research designs, statistical methods and outcome metrics suitable for performing that testing are also described. This approach has potential benefits but requires broad testing, in multiple patient samples, to determine its clinical value. The usefulness of the approach is likely to be context-specific, depending on the characteristics of the available data and the research question being asked of it.
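The two sequential clustering stages described above can be sketched as follows. This is an illustrative pure-Python sketch using plain k-means at both stages (the paper discusses techniques such as Latent Class Analysis, and all names here are ours): stage 1 clusters patients within each health domain separately, and stage 2 clusters patients on their vector of stage-1 labels.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's k-means; points are lists of floats. Initial
    centroids are drawn from the distinct points (assumes there are
    at least k distinct points)."""
    rng = random.Random(seed)
    uniq = []
    for p in points:
        if p not in uniq:
            uniq.append(p)
    cents = rng.sample(uniq, k)

    def nearest(p):
        return min(range(len(cents)),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, cents[c])))

    for _ in range(iters):
        groups = [[] for _ in cents]
        for p in points:
            groups[nearest(p)].append(p)
        # recompute centroids; keep the old one if a cluster is empty
        cents = [[sum(col) / len(g) for col in zip(*g)] if g else cents[i]
                 for i, g in enumerate(groups)]
    return [nearest(p) for p in points]

def two_stage_cluster(domains, k_domain=2, k_final=2):
    """Stage 1: cluster patients within each health domain separately.
    Stage 2: cluster patients on their vectors of stage-1 labels."""
    stage1 = [kmeans(d, k_domain) for d in domains]        # labels per domain
    profiles = [list(map(float, p)) for p in zip(*stage1)] # per-patient label vector
    return kmeans(profiles, k_final)
```

The second-stage input has one dimension per domain rather than one per raw variable, which is the mechanism behind the claimed gains in interpretability and sample size.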
Fang, Yun; Wu, Hulin; Zhu, Li-Xing
2011-07-01
We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs, and is computationally efficient although it does pay a price with the loss of some estimation efficiency. However, the method does offer an alternative approach when the exact likelihood approach fails due to model complexity and high-dimensional parameter space, and it can also serve as a method to obtain the starting estimates for more accurate estimation methods. In addition, the proposed method does not need to specify the initial values of state variables and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations and the methodology is also illustrated with application to an AIDS clinical data set.
Kaplan, C D; Korf, D; Sterk, C
1987-09-01
Snowball sampling is a method that has been used in the social sciences to study sensitive topics, rare traits, personal networks, and social relationships. The method involves the selection of samples utilizing "insider" knowledge and referral chains among subjects who possess common traits that are of research interest. It is especially useful in generating samples for which clinical sampling frames may be difficult to obtain or are biased in some way. In this paper, snowball samples of heroin users in two Dutch cities have been analyzed for the purpose of providing descriptions and limited inferences about the temporal and social contexts of their lifestyles. Two distinct heroin-using populations have been discovered who are distinguished by their life cycle stage. Significant contextual explanations have been found involving the passage from adolescent peer group to criminal occupation, the functioning of network "knots" and "outcroppings," and the frequency of social contact. It is suggested that the snowball sampling method may have utility in studying the temporal and social contexts of other populations of clinical interest.
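The referral-chain selection underlying snowball sampling can be sketched as follows. This is a hypothetical illustration; the network dictionary, seed list and referral limit are all assumptions, not data from the study:

```python
import random
from collections import deque

def snowball_sample(network, seeds, referrals_per_subject=2,
                    max_size=50, seed=0):
    """Snowball sampling sketch: start from 'insider' seed subjects and
    follow referral chains, each subject naming up to
    `referrals_per_subject` contacts not yet in the sample."""
    rng = random.Random(seed)
    sample, queue = [], deque(seeds)
    seen = set(seeds)
    while queue and len(sample) < max_size:
        person = queue.popleft()
        sample.append(person)
        contacts = [c for c in network.get(person, []) if c not in seen]
        for c in rng.sample(contacts, min(referrals_per_subject, len(contacts))):
            seen.add(c)
            queue.append(c)
    return sample
```

Note that only subjects reachable from the seeds can ever enter the sample, which is both the method's strength (it exploits insider knowledge of a hidden population) and the source of its inferential limitations.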
Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.
2004-01-01
Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information needed to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and the costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
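The basic two-stage selection (first clusters, then elements within clusters) can be sketched as follows. This is an illustrative sketch assuming simple random sampling at both stages, whereas the NLCD designs involve additional stratification; all names are ours:

```python
import random

def two_stage_cluster_sample(clusters, n_clusters, n_per_cluster, seed=0):
    """Two-stage cluster sampling sketch. Stage 1 draws a simple random
    sample of clusters (e.g. geographic domains); stage 2 draws reference
    sample elements (e.g. pixels) within each selected cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)  # stage 1
    return {c: rng.sample(clusters[c], min(n_per_cluster, len(clusters[c])))
            for c in chosen}                           # stage 2
```

Reference data collection then only needs to visit the selected clusters, which is the cost saving the abstract refers to; the within-cluster correlation that erodes precision is exactly the price of this restriction.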
Diagnosis of gastrointestinal parasites in reptiles: comparison of two coprological methods
2014-01-01
Background Exotic reptiles have become increasingly common domestic pets worldwide and are well known to be carriers of different parasites, including some with zoonotic potential. Accurate diagnosis of gastrointestinal endoparasite infections in domestic reptiles is therefore essential, not only for the well-being of captive reptiles but also for their owners. Here, two different approaches for the detection of parasite stages in reptile faeces were compared: a combination of native and iodine-stained direct smears together with a flotation technique (CNF) versus the standard SAF-method. Results A total of 59 reptile faecal samples (20 lizards, 22 snakes, 17 tortoises) were coprologically analyzed by the two methods for the presence of endoparasites. The analyzed samples contained a broad spectrum of parasites (total occurrence 93.2%, n = 55), including different species of nematodes (55.9%, n = 33), trematodes (15.3%, n = 9), pentastomids (3.4%, n = 2) and protozoans (47.5%, n = 28). Associations between the performances of both methods in detecting selected single parasite stages or groups of such were evaluated by Fisher's exact test, and marginal homogeneity was tested by the McNemar test. In 88.1% of all examined samples (n = 52, 95% confidence interval [CI] = 77.1 - 95.1%) the two diagnostic methods rendered differing results, and the McNemar test for paired observations showed highly significant differences in detection frequency (P < 0.0001). Conclusion The combination of direct smears/flotation proved superior in the detection of flagellate trophozoites, coccidian oocysts and nematode eggs, especially those of oxyurids. The SAF-technique was superior in detecting larval stages and trematode eggs, but this advantage failed to be statistically significant (P = 0.13).
Therefore, CNF is the recommended method for routine faecal examination of captive reptiles while the SAF-technique is advisable as additional measure particularly for wild caught animals and individuals which are to be introduced into captive collections. PMID:25299119
Tiberti, Natalia; Hainard, Alexandre; Lejon, Veerle; Robin, Xavier; Ngoyi, Dieudonné Mumba; Turck, Natacha; Matovu, Enock; Enyaru, John; Ndung'u, Joseph Mathu; Scherl, Alexander; Dayon, Loïc; Sanchez, Jean-Charles
2010-01-01
Human African trypanosomiasis, or sleeping sickness, is a parasitic disease endemic in sub-Saharan Africa, transmitted to humans through the bite of a tsetse fly. The first or hemolymphatic stage (S1) of the disease is associated with the presence of parasites in the bloodstream, lymphatic system, and body tissues. If patients are left untreated, parasites cross the blood-brain barrier and invade the cerebrospinal fluid and the brain parenchyma, giving rise to the second or meningoencephalitic stage (S2). Stage determination is a crucial step in guiding the choice of treatment, as drugs used for S2 are potentially dangerous. Current staging methods, based on counting white blood cells and demonstrating trypanosomes in cerebrospinal fluid, lack specificity and/or sensitivity. In the present study, we used several proteomic strategies to discover new markers with potential for staging human African trypanosomiasis. Cerebrospinal fluid (CSF) samples were collected from patients infected with Trypanosoma brucei gambiense in the Democratic Republic of Congo. The stage was determined following the guidelines of the national control program. The proteome of the samples was analyzed by two-dimensional gel electrophoresis (n = 9), and by sixplex tandem mass tag (TMT) isobaric labeling (n = 6) quantitative mass spectrometry. Overall, 73 proteins were overexpressed in patients presenting the second stage of the disease. Two of these, osteopontin and β-2-microglobulin, were confirmed to be potential markers for staging human African trypanosomiasis (HAT) by Western blot and ELISA. The two proteins significantly discriminated between S1 and S2 patients with high sensitivity (68% and 78%, respectively) for 100% specificity, and a combination of both improved the sensitivity to 91%.
The levels of osteopontin and β-2-microglobulin in the CSF of S2 patients (μg/ml range), as well as the fold increase in concentration in S2 compared with S1 (3.8 and 5.5, respectively), make the two markers good candidates for the development of a test for staging HAT patients. PMID:20724469
A novel acute HIV infection staging system based on 4th generation immunoassay.
Ananworanich, Jintanat; Fletcher, James L K; Pinyakorn, Suteeraporn; van Griensven, Frits; Vandergeeten, Claire; Schuetz, Alexandra; Pankam, Tippawan; Trichavaroj, Rapee; Akapirat, Siriwat; Chomchey, Nitiya; Phanuphak, Praphan; Chomont, Nicolas; Michael, Nelson L; Kim, Jerome H; de Souza, Mark
2013-05-29
Fourth generation (4thG) immunoassay (IA) is becoming the standard HIV screening method but was not available when the Fiebig acute HIV infection (AHI) staging system was proposed. Here we evaluated AHI staging based on a 4thG IA (4thG staging). Screening for AHI was performed in real-time by pooled nucleic acid testing (NAT, n=48,828 samples) and sequential enzyme immunoassay (EIA, n=3,939 samples) identifying 63 subjects with non-reactive 2nd generation EIA (Fiebig stages I (n=25), II (n=7), III (n=29), IV (n=2)). The majority of samples tested (n=53) were subtype CRF_01AE (77%). NAT+ subjects were re-staged into three 4thG stages: stage 1 (n=20; 4th gen EIA-, 3rd gen EIA-), stage 2 (n=12; 4th gen EIA+, 3rd gen EIA-), stage 3 (n=31; 4th gen EIA+, 3rd gen EIA+, Western blot-/indeterminate). 4thG staging distinguishes groups of AHI subjects by time since presumed HIV exposure, pattern of CD8+ T, B and natural killer cell absolute numbers, and HIV RNA and DNA levels. This staging system further stratified Fiebig I subjects: 18 subjects in 4thG stage 1 had lower HIV RNA and DNA levels than 7 subjects in 4thG stage 2. Using 4th generation IA as part of AHI staging distinguishes groups of patients by time since exposure to HIV, lymphocyte numbers and HIV viral burden. It identifies two groups of Fiebig stage I subjects who display different levels of HIV RNA and DNA, which may have implication for HIV cure. 4th generation IA should be incorporated into AHI staging systems.
Batch Isolation of Microsatellites for Tropical Plant Species Pyrosequencing
USDA-ARS?s Scientific Manuscript database
Microsatellites were developed for ten tropical species using a method recently developed in our laboratory that involves a combination of two adapters at the SSR-enrichment stage and allows for cost saving and simultaneous loading of samples. The species for which microsatellites were isolated are...
Magnetic resonance elastography is as accurate as liver biopsy for liver fibrosis staging.
Morisaka, Hiroyuki; Motosugi, Utaroh; Ichikawa, Shintaro; Nakazawa, Tadao; Kondo, Tetsuo; Funayama, Satoshi; Matsuda, Masanori; Ichikawa, Tomoaki; Onishi, Hiroshi
2018-05-01
Liver MR elastography (MRE) is available for the noninvasive assessment of liver fibrosis; however, no previous studies have compared the diagnostic ability of MRE with that of liver biopsy. To compare the diagnostic accuracy of liver fibrosis staging between MRE-based methods and liver biopsy using the resected liver specimens as the reference standard. A retrospective study at a single institution. In all, 200 patients who underwent preoperative MRE and subsequent surgical liver resection were included in this study. Data from 80 patients were used to estimate cutoff and distributions of liver stiffness values measured by MRE for each liver fibrosis stage (F0-F4, METAVIR system). In the remaining 120 patients, liver biopsy specimens were obtained from the resected liver tissues using a standard biopsy needle. 2D liver MRE with a gradient-echo-based sequence on a 1.5 or 3T scanner was used. Two radiologists independently measured the liver stiffness value on MRE, and two types of MRE-based methods (threshold and Bayesian prediction method) were applied. Two pathologists evaluated all biopsy samples independently to stage liver fibrosis. Surgically resected whole tissue specimens were used as the reference standard. The accuracy for liver fibrosis staging was compared between liver biopsy and MRE-based methods with a modified McNemar's test. Accurate fibrosis staging was achieved in 53.3% (64/120) and 59.1% (71/120) of patients using MRE with threshold and Bayesian methods, respectively, and in 51.6% (62/120) with liver biopsy. Accuracies of MRE-based methods for diagnoses of ≥F2 (90-91% [108-9/120]), ≥F3 (79-81% [95-97/120]), and F4 (82-85% [98-102/120]) were statistically equivalent to those of liver biopsy (≥F2, 79% [95/120], P ≤ 0.01; ≥F3, 88% [105/120], P ≤ 0.006; and F4, 82% [99/120], P ≤ 0.017). MRE can be an alternative to liver biopsy for fibrosis staging. Level of Evidence: 3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:1268-1275.
© 2017 International Society for Magnetic Resonance in Medicine.
Hernández-Morera, Pablo; Castaño-González, Irene; Travieso-González, Carlos M.; Mompeó-Corredera, Blanca; Ortega-Santana, Francisco
2016-01-01
Purpose To develop a digital image processing method to quantify structural components (smooth muscle fibers and extracellular matrix) in the vessel wall stained with Masson’s trichrome, and a statistical method suitable for small sample sizes to analyze the results previously obtained. Methods The quantification method comprises two stages. The pre-processing stage improves tissue image appearance and the vessel wall area is delimited. In the feature extraction stage, the vessel wall components are segmented by grouping pixels with a similar color. The area of each component is calculated by normalizing the number of pixels of each group by the vessel wall area. Statistical analyses are implemented by permutation tests, based on resampling without replacement from the set of the observed data to obtain a sampling distribution of an estimator. The implementation can be parallelized on a multicore machine to reduce execution time. Results The methods have been tested on 48 vessel wall samples of the internal saphenous vein stained with Masson’s trichrome. The results show that the segmented areas are consistent with the perception of a team of doctors and demonstrate good correlation between the expert judgments and the measured parameters for evaluating vessel wall changes. Conclusion The proposed methodology offers a powerful tool to quantify some components of the vessel wall. It is more objective, sensitive and accurate than the biochemical and qualitative methods traditionally used. The permutation tests are suitable statistical techniques to analyze the numerical measurements obtained when the underlying assumptions of the other statistical techniques are not met. PMID:26761643
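The resampling scheme described above can be sketched in a few lines. This is a minimal illustration of the general technique (resampling without replacement from the pooled observations to build the null sampling distribution of an estimator), not the authors' implementation, which is parallelized on a multicore machine; the two-sample difference-of-means statistic is chosen only as an example.

```python
import random

def permutation_test(group_a, group_b, n_resamples=10000, seed=0):
    """Two-sample permutation test on the absolute difference of means.

    Repeatedly shuffles the pooled observations (resampling without
    replacement) to approximate the null distribution of the statistic,
    then reports a two-sided p-value.
    """
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    observed = abs(sum(group_a) / n_a - sum(group_b) / len(group_b))
    hits = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:n_a], pooled[n_a:]
        stat = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if stat >= observed:
            hits += 1
    # Add one to numerator and denominator so the p-value is never zero
    return (hits + 1) / (n_resamples + 1)
```

Because no parametric distribution is assumed, the test remains valid for the small sample sizes the paper targets; the cost is Monte Carlo error, which shrinks as the number of resamples grows.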
Wason, James M. S.; Mander, Adrian P.
2012-01-01
Two-stage designs are commonly used for Phase II trials. Optimal two-stage designs have the lowest expected sample size for a specific treatment effect, for example, the null value, but can perform poorly if the true treatment effect differs. Here we introduce a design for continuous treatment responses that minimizes the maximum expected sample size across all possible treatment effects. The proposed design performs well for a wider range of treatment effects and so is useful for Phase II trials. We compare the design to a previously used optimal design and show it has superior expected sample size properties. PMID:22651118
Nanoliter hemolymph sampling and analysis of individual adult Drosophila melanogaster.
Piyankarage, Sujeewa C; Featherstone, David E; Shippy, Scott A
2012-05-15
The fruit fly (Drosophila melanogaster) is an extensively used and powerful genetic model organism. However, chemical studies using individual flies have been limited by the animal's small size. Introduced here is a method to sample nanoliter hemolymph volumes from individual adult fruit flies for chemical analysis. The technique makes it possible to distinguish hemolymph chemical variations with developmental stage, fly sex, and sampling conditions. Also presented is a means for two-point monitoring of hemolymph composition in individual flies.
Mariaux, Sandrine; Tafin, Ulrika Furustrand; Borens, Olivier
2017-01-01
Introduction: When treating periprosthetic joint infections with a two-stage procedure, antibiotic-impregnated spacers are used in the interval between removal of the prosthesis and reimplantation. In our experience, cultures of sonicated spacers are most often negative. The objective of our study was to investigate whether PCR analysis would improve the detection of bacteria in the spacer sonication fluid. Methods: A prospective monocentric study was performed from September 2014 to January 2016. Inclusion criteria were a two-stage procedure for prosthetic infection and agreement of the patient to participate in the study. Besides tissue samples and sonication, broad-range bacterial PCRs, specific S. aureus PCRs and Unyvero multiplex PCRs were performed on the sonicated spacer fluid. Results: 30 patients were identified (15 hip, 14 knee and 1 ankle replacements). At reimplantation, cultures of tissue samples and spacer sonication fluid were all negative. Broad-range PCRs were all negative. Specific S. aureus PCRs were positive in 5 cases. We had two persistent infections, and four cases of infection recurrence were observed, with bacteria different from those of the initial infection in three cases. Conclusion: The three different types of PCRs did not detect any bacteria in spacer sonication fluid that was culture-negative. In our study, PCR did not improve bacterial detection and did not help to predict whether the patient would present a persistent or recurrent infection. Two-stage prosthetic exchange with a short interval and an antibiotic-impregnated spacer is an efficient treatment to eradicate infection, as both culture- and molecular-based methods were unable to detect bacteria in spacer sonication fluid after reimplantation.
Estimating forest characteristics using NAIP imagery and ArcObjects
John S Hogland; Nathaniel M. Anderson; Woodam Chung; Lucas Wells
2014-01-01
Detailed, accurate, efficient, and inexpensive methods of estimating basal area, trees, and aboveground biomass per acre across broad extents are needed to effectively manage forests. In this study we present such a methodology using readily available National Agriculture Imagery Program imagery, Forest Inventory Analysis samples, a two stage classification and...
Flagging versus dragging as sampling methods for nymphal Ixodes scapularis (Acari: Ixodidae)
Rulison, Eric L.; Kuczaj, Isis; Pang, Genevieve; Hickling, Graham J.; Tsao, Jean I.; Ginsberg, Howard S.
2013-01-01
The nymphal stage of the blacklegged tick, Ixodes scapularis (Acari: Ixodidae), is responsible for most transmission of Borrelia burgdorferi, the etiologic agent of Lyme disease, to humans in North America. From 2010 to fall of 2012, we compared two commonly used techniques, flagging and dragging, as sampling methods for nymphal I. scapularis at three sites, each with multiple sampling arrays (grids), in the eastern and central United States. Flagging and dragging collected comparable numbers of nymphs, with no consistent differences between methods. Dragging collected more nymphs than flagging in some samples, but these differences were not consistent among sites or sampling years. The ratio of nymphs collected by flagging vs dragging was not significantly related to shrub density, so habitat type did not have a strong effect on the relative efficacy of these methods. Therefore, although dragging collected more ticks in a few cases, the numbers collected by each method were so variable that neither technique had a clear advantage for sampling nymphal I. scapularis.
On the Bur Gheluai H5 chondrite and other meteorites with complex exposure histories
NASA Technical Reports Server (NTRS)
Vogt, S. K.; Aylmer, D.; Herzog, G. F.; Wieler, R.; Signer, P.; Pellas, P.; Fieni, C.; Tuniz, C.; Jull, A. J. T.; Fink, D.
1993-01-01
Isotopic concentrations and track densities measured in 13 samples of the Bur Gheluai meteorite fall are presented. Experimental methods are described and results are presented for isotopic ratios of noble gases and cosmogenic radionuclide contents. Evidence for complex irradiation is discussed and a model for two-stage exposure histories is presented. The duration of each irradiation stage and possible effects on isotope production rates are considered. Explanations are suggested for the discrepant Ne production rates.
Assessing forest windthrow damage using single-date, post-event airborne laser scanning data
Gherardo Chirici; Francesca Bottalico; Francesca Giannetti; Barbara Del Perugia; Davide Travaglini; Susanna Nocentini; Erico Kutchartt; Enrico Marchi; Cristiano Foderi; Marco Fioravanti; Lorenzo Fattorini; Lorenzo Bottai; Ronald McRoberts; Erik Næsset; Piermaria Corona; Bernardo Gozzini
2017-01-01
One of many possible climate change effects in temperate areas is the increase of frequency and severity of windstorms; thus, fast and cost efficient new methods are needed to evaluate wind-induced damages in forests. We present a method for assessing windstorm damages in forest landscapes based on a two-stage sampling strategy using single-date, post-event airborne...
Pyregov, A V; Ovechkin, A Iu; Petrov, S V
2012-01-01
Results of a prospective randomized comparative study of two total hemoglobin estimation methods are presented: laboratory testing and a continuous noninvasive technique using multiwave spectrophotometry on the Masimo Rainbow SET. The research was carried out in two stages: the first stage (gynecology) included 67 patients, and the second stage (obstetrics) included 44 patients during and after Cesarean section. The standard deviation of the noninvasive total hemoglobin estimates from the absolute (invasive) values was 7.2% and 4.1%, and the standard deviation within a sample was 5.2% and 2.7%, in gynecologic operations and surgical delivery respectively, confirming the absence of reliable differences between the methods. Continuous noninvasive total hemoglobin estimation by multiwave spectrophotometry with the Masimo Rainbow SET technology can be recommended for use in obstetrics and gynecology.
A Bayesian predictive two-stage design for phase II clinical trials.
Sambucini, Valeria
2008-04-15
In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
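The central quantity in designs of this kind is the posterior probability that the true response rate exceeds the target value. The sketch below is an illustrative simplification under a Beta-Binomial model, not the STD or its predictive version: it assumes integer Beta prior parameters, for which the posterior tail probability has a closed binomial form requiring no special functions.

```python
from math import comb

def posterior_prob_exceeds(responses, n, target, prior_a=1, prior_b=1):
    """P(true response rate > target | data) under a Beta(prior_a, prior_b)
    prior and a binomial likelihood: the posterior is
    Beta(prior_a + responses, prior_b + n - responses).

    For integer Beta parameters a, b the survival function is the
    binomial tail sum P(p > t) = sum_{j=0}^{a-1} C(a+b-1, j) t^j (1-t)^(a+b-1-j).
    """
    a = prior_a + responses
    b = prior_b + n - responses
    m = a + b - 1
    return sum(comb(m, j) * target**j * (1 - target)**(m - j)
               for j in range(a))
```

A two-stage rule would compare this probability to a pre-specified threshold after stage 1 and again after stage 2; with no data and a uniform prior, the probability of exceeding target t is simply 1 - t.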
Geislinger, Thomas M; Chan, Sherwin; Moll, Kirsten; Wixforth, Achim; Wahlgren, Mats; Franke, Thomas
2014-09-20
Understanding of malaria pathogenesis caused by Plasmodium falciparum has been greatly deepened since the introduction of in vitro culture system, but the lack of a method to enrich ring-stage parasites remains a technical challenge. Here, a novel way to enrich red blood cells containing parasites in the early ring stage is described and demonstrated. A simple, straight polydimethylsiloxane microchannel connected to two syringe pumps for sample injection and two height reservoirs for sample collection is used to enrich red blood cells containing parasites in the early ring stage (8-10 h p.i.). The separation is based on the non-inertial hydrodynamic lift effect, a repulsive cell-wall interaction that enables continuous and label-free separation with deformability as intrinsic marker. The possibility to enrich red blood cells containing P. falciparum parasites at ring stage with a throughput of ~12,000 cells per hour and an average enrichment factor of 4.3 ± 0.5 is demonstrated. The method allows for the enrichment of red blood cells early after the invasion by P. falciparum parasites continuously and without any need to label the cells. The approach promises new possibilities to increase the sensitivity of downstream analyses like genomic- or diagnostic tests. The device can be produced as a cheap, disposable chip with mass production technologies and works without expensive peripheral equipment. This makes the approach interesting for the development of new devices for field use in resource poor settings and environments, e.g. with the aim to increase the sensitivity of microscope malaria diagnosis.
Brewer, S.K.; Rabeni, C.F.; Papoulias, D.M.
2008-01-01
We compared gonadosomatic index (GSI) and histological analysis of ovaries for identifying reproductive periods of fishes to determine the validity of using GSI in future studies. Four small-bodied riverine species were examined in our comparison of the two methods. Mean GSI was significantly different between all histological stages for suckermouth minnow and red shiner. Mean GSI was significantly different between most stages for slenderhead darter; whereas stages 3 and 6 were not significantly different, the time period when these stages are present would allow fisheries biologists to distinguish between the two stages. Mean GSI was not significantly different for many histological stages in stonecat. Difficulties in distinguishing between histological stages and GSI associated with stonecat illustrate potential problems obtaining appropriate sample sizes from species that move to alternative habitats to spawn. We suggest that GSI would be a useful tool in identifying mature ovaries in many small-bodied, multiple-spawning fishes. This information could be combined with data from histology during mature periods to pinpoint specific spawning events. © 2007 Blackwell Munksgaard.
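The GSI itself is a simple ratio, commonly computed as 100 times gonad weight over body weight. The function names below are hypothetical, and the denominator convention (total weight vs. gonad-free somatic weight) varies between studies, so it should be stated when reporting.

```python
def gonadosomatic_index(gonad_weight_g, body_weight_g):
    """GSI as a percentage: 100 * gonad weight / total body weight.

    Some workers divide by somatic (gonad-free) weight instead; the
    denominator convention matters when comparing across studies.
    """
    if body_weight_g <= 0:
        raise ValueError("body weight must be positive")
    return 100.0 * gonad_weight_g / body_weight_g

def mean_gsi_by_stage(records):
    """Mean GSI per histological stage from (stage, gonad_g, body_g) tuples."""
    by_stage = {}
    for stage, gonad_g, body_g in records:
        by_stage.setdefault(stage, []).append(
            gonadosomatic_index(gonad_g, body_g))
    return {stage: sum(vals) / len(vals) for stage, vals in by_stage.items()}
```

Comparing mean GSI across histological stages, as in the study above, then reduces to a one-way comparison of these per-stage means.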
Estimating accuracy of land-cover composition from two-stage cluster sampling
Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.
2009-01-01
Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
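The four accuracy summaries can be illustrated with a naive, equal-weight computation over sampled units. This sketch omits what the paper actually derives: the design-based estimators additionally weight units by their inclusion probabilities under the two-stage design.

```python
from math import sqrt

def composition_accuracy(mapped, reference):
    """MD, MAD, RMSE, and CORR between mapped and reference class
    proportions over a sample of spatial units (equal-weight version).
    """
    n = len(mapped)
    diffs = [m - r for m, r in zip(mapped, reference)]
    md = sum(diffs) / n                       # mean deviation (bias)
    mad = sum(abs(d) for d in diffs) / n      # mean absolute deviation
    rmse = sqrt(sum(d * d for d in diffs) / n)
    mean_m = sum(mapped) / n
    mean_r = sum(reference) / n
    cov = sum((m - mean_m) * (r - mean_r) for m, r in zip(mapped, reference))
    var_m = sum((m - mean_m) ** 2 for m in mapped)
    var_r = sum((r - mean_r) ** 2 for r in reference)
    corr = cov / sqrt(var_m * var_r)
    return {"MD": md, "MAD": mad, "RMSE": rmse, "CORR": corr}
```

MD captures systematic over- or under-mapping of a class, MAD and RMSE the typical size of the per-unit error, and CORR the agreement in spatial pattern.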
NASA Astrophysics Data System (ADS)
Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.
2018-01-01
Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a roller element bearing. Among them, methods based on compressed sensing (CS) have received some attention recently due to their ability to sample below the Nyquist rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder was used to learn over-complete sparse representations of these compressed datasets. Finally, fault classification was achieved in two stages: pre-training classification based on a stacked autoencoder and a softmax regression layer forms the deep-net stage (the first stage), and re-training classification based on the backpropagation (BP) algorithm forms the fine-tuning stage (the second stage). The experimental results show that the proposed method achieves high levels of accuracy even with extremely compressed measurements compared with the existing techniques.
Dai, Hongying; Wu, Guodong; Wu, Michael; Zhi, Degui
2016-01-01
Next-generation sequencing data pose a severe curse of dimensionality, complicating traditional "single marker-single trait" analysis. We propose a two-stage combined p-value method for pathway analysis. The first stage is at the gene level, where we integrate effects within a gene using the Sequence Kernel Association Test (SKAT). The second stage is at the pathway level, where we perform a correlated Lancaster procedure to detect joint effects from multiple genes within a pathway. We show that the Lancaster procedure is optimal in Bahadur efficiency among all combined p-value methods. The Bahadur efficiency,[Formula: see text], compares sample sizes among different statistical tests when signals become sparse in sequencing data, i.e. ε →0. The optimal Bahadur efficiency ensures that the Lancaster procedure asymptotically requires a minimal sample size to detect sparse signals ([Formula: see text]). The Lancaster procedure can also be applied to meta-analysis. Extensive empirical assessments of exome sequencing data show that the proposed method outperforms Gene Set Enrichment Analysis (GSEA). We applied the competitive Lancaster procedure to meta-analysis data generated by the Global Lipids Genetics Consortium to identify pathways significantly associated with high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, triglycerides, and total cholesterol.
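The Lancaster procedure generalizes Fisher's combined-probability test by giving each p-value its own chi-square weight (degrees of freedom). As a minimal sketch, here is the unweighted Fisher special case, assuming independent p-values; the correlated case the paper handles requires moment-matching adjustments not shown here.

```python
from math import log, exp

def fisher_combined_pvalue(pvalues):
    """Fisher's combined test: X = -2 * sum(ln p_i) ~ chi-square with
    2k degrees of freedom under the null (independent p-values).

    The Lancaster procedure generalizes this by assigning each p-value
    its own chi-square degrees of freedom; with equal weights of 2 it
    reduces to Fisher's method.
    """
    k = len(pvalues)
    x = -2.0 * sum(log(p) for p in pvalues)
    # For even df = 2k the chi-square survival function has the closed
    # form P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2.0) / i
        total += term
    return exp(-x / 2.0) * total
```

With a single p-value the formula returns that p-value unchanged, and several moderately small p-values combine into a much smaller one, which is what lets a pathway-level test pick up joint effects from genes that are individually non-significant.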
Sampling Methods in Cardiovascular Nursing Research: An Overview.
Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie
2014-01-01
Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from the cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
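Two of the probability methods named above, simple random sampling and systematic sampling, can be sketched directly; the function names and the fixed seed are illustrative only.

```python
import random

def simple_random_sample(population, n, seed=0):
    """Simple random sampling: every subset of size n is equally likely."""
    return random.Random(seed).sample(population, n)

def systematic_sample(population, n, seed=0):
    """Systematic sampling: pick a random start within the sampling
    interval k = len(population) // n, then take every k-th element."""
    k = len(population) // n
    start = random.Random(seed).randrange(k)
    return [population[start + i * k] for i in range(n)]
```

Systematic sampling is operationally simpler but can be biased if the population list has a periodic structure that lines up with the interval, which is one reason the choice between methods must be considered critically.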
A User’s Guide to BISAM (BIvariate SAMple): The Bivariate Data Modeling Program.
1983-08-01
The method for the null case is specified and is then used to form the bivariate density-quantile function as described in section 4. The ranking method employed assigns average ranks to tied observations; other methods for assigning ranks to tied observations are often employed but are not attempted. Tied observations will weaken the results obtained, since underlying continuous distributions are assumed; one should avoid such situations if possible.
Dual-stage trapped-flux magnet cryostat for measurements at high magnetic fields
Islam, Zahirul; Das, Ritesh K.; Weinstein, Roy
2015-04-14
A method and a dual-stage trapped-flux magnet cryostat apparatus are provided for implementing enhanced measurements at high magnetic fields. The dual-stage trapped-flux magnet cryostat system includes a trapped-flux magnet (TFM). A sample, for example a single crystal, is adjustably positioned proximate to the surface of the TFM, using a translation stage such that the distance between the sample and the surface is selectively adjusted. A cryostat is provided with a first separate thermal stage for cooling the TFM and a second separate thermal stage for cooling the sample.
Duration of Sleep and ADHD Tendency among Adolescents in China
ERIC Educational Resources Information Center
Lam, Lawrence T.; Yang, L.
2008-01-01
Objective: This study investigates the association between duration of sleep and ADHD tendency among adolescents. Method: This population-based health survey uses a two-stage random cluster sampling design. Participants ages 13 to 17 are recruited from the total population of adolescents attending high school in one city of China. Duration of…
Relationship between dental calcification and skeletal maturation in a Peruvian sample
Lecca-Morales, Rocío M.; Carruitero, Marcos J.
2017-01-01
ABSTRACT Objective: The objective of the study was to determine the relationship between dental calcification stages and skeletal maturation in a Peruvian sample. Methods: panoramic, cephalometric and carpal radiographs of 78 patients (34 girls and 44 boys) between 7 and 17 years old (9.90 ± 2.5 years) were evaluated. Stages of tooth calcification of the mandibular canine, first premolar, second premolar, and second molar, and skeletal maturation by a hand-wrist and a cervical vertebrae method, were assessed. The relationships between the stages were assessed using Spearman's correlation coefficient. Additionally, the associations of mandibular and pubertal growth peak stages with tooth calcification were evaluated by Fisher's exact test. Results: all teeth showed positive and statistically significant correlations; the highest correlations were between the mandibular second molar calcification stages and the hand-wrist maturation stages (r = 0.758, p < 0.001) and the cervical vertebrae maturation stages (r = 0.605, p < 0.001). The pubertal growth spurt was found in the G stage of calcification of the second mandibular molar, and the mandibular growth peak was found in the F stage of calcification of the second molar. Conclusion: there was a positive relationship between dental calcification stages and skeletal maturation stages by hand-wrist and cervical vertebrae methods in the sample studied. Dental calcification stages of the second mandibular molar showed the highest positive correlation with the hand-wrist and cervical vertebrae stages. PMID:28746492
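Spearman's coefficient, used above to relate calcification and maturation stages, is the Pearson correlation of the rank vectors, with ties assigned their average rank. A minimal pure-Python sketch (not the authors' statistical software):

```python
def _ranks(values):
    """Average ranks (1-based), with tied values sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0  # mean of the 1-based positions i+1 .. j+1
        for idx in order[i:j + 1]:
            ranks[idx] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because it works on ranks, the statistic captures any monotone association between the ordinal stage scales, not just a linear one, which is why it suits staged data like these.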
The new ISO 6579-1: A real horizontal standard for detection of Salmonella, at last!
Mooijman, Kirsten A
2018-05-01
Up to 2016, three international standard methods existed for the detection of Salmonella spp. in food, animal feed and samples from the primary production stage: ISO 6785:2001 for milk and milk products, ISO 6579:2002 for (other) food and animal feed and Annex D of ISO 6579:2007 for samples from the primary production stage. In 2009, an ISO/CEN working group started with the revision of ISO 6579:2002 with two main aims: combining the three aforementioned standards in one document and improving the information in ISO 6579:2002. Additionally it was decided to split ISO 6579 into three parts, where part 1 describes the detection, part 2 the enumeration by mini-MPN (published in 2012) and part 3 the serotyping of Salmonella (published in 2014). This paper describes the experiments and choices made for improving the part on detection of Salmonella (ISO 6579-1). The final voting stage on (draft) ISO 6579-1 was finished by the end of December 2016, with a positive outcome. Finally, a real horizontal standard became available for detection of Salmonella in food, animal feed, environmental samples in the area of food production and food handling and in samples from the primary production stage in 2017. Copyright © 2017 Elsevier Ltd. All rights reserved.
Athar Masood, M; Veenstra, Timothy D
2017-08-26
Urine drug testing (UDT) is an important analytical/bioanalytical technique that has become an integral and vital part of testing programs for diagnostic purposes. This manuscript presents the development and validation of a tailor-made quantitative LC-MS/MS assay for a custom panel of 33 pain-panel drugs and their metabolites belonging to different classes (opiates, opioids, benzodiazepines, illicit drugs, amphetamines, etc.) that are prescribed in pain management and depressant therapies. The LC-MS/MS method incorporates two experiments to enhance the sensitivity of the assay, has a run time of about 7 min at a flow rate of 0.7 mL/min, and requires no prior purification of the samples. The method also covers the second-stage metabolites of some drugs that belong to different classes but share similar first-stage metabolic pathways, which makes it possible to identify the right drug correctly or to flag a drug that might be due to specimen tampering. Real case examples and peak-picking difficulties are presented for some of the analytes in subject samples. Finally, the method was evaluated with randomly selected de-identified clinical subject samples, and the data from "direct dilute and shoot analysis" and after "glucuronide hydrolysis" were compared. This method is now used to routinely run more than 100 clinical subject samples on a daily basis. This article is protected by copyright. All rights reserved.
Primary cultures of astrocytes from fetal bovine brain.
Ballarin, Cristina; Peruffo, Antonella
2012-01-01
We describe here a method to obtain primary cell cultures from the cerebral cortex and the hypothalamus of bovine fetuses. We report how tissue origin, developmental stages, and culture medium conditions influence cell differentiation and the prevalence of glial cells vs. neurons. We compare explants from early, middle, and late stages of development and two different fetal calf serum concentrations (1 and 10%) to identify the best conditions to obtain and grow viable astrocytes in culture. In addition, we describe how to cryopreserve and obtain viable cortical astrocytes from frozen fetal bovine brain samples.
A multi-stage drop-the-losers design for multi-arm clinical trials.
Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher
2017-02-01
Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used in order to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable that has high variability. This can cause problems when applying for funding as the cost will also be generally highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to have more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
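The key appeal of drop-the-losers designs — a fixed total sample size combined with data-driven arm selection — can be illustrated with a short simulation. The sketch below estimates how often the truly best experimental arm survives stage 1 of a two-stage design with normally distributed outcomes; the arm means, stage-1 size, and unit variance are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_select_best(true_means, n1, sigma=1.0, n_sims=10_000):
    """Probability that a two-stage drop-the-losers trial carries the
    truly best experimental arm into stage 2.  Arms 0..K-1 are
    experimental and the last entry is the control; every arm receives
    n1 subjects in stage 1, and only the best-looking experimental arm
    plus the control continue, so the total sample size is fixed in
    advance."""
    mu = np.asarray(true_means, dtype=float)
    k = len(mu) - 1                      # number of experimental arms
    best = int(np.argmax(mu[:k]))        # truly best experimental arm
    se = sigma / np.sqrt(n1)             # SE of each stage-1 arm mean
    stage1 = rng.normal(mu, se, size=(n_sims, len(mu)))
    winners = np.argmax(stage1[:, :k], axis=1)
    return float(np.mean(winners == best))

# three experimental arms with effects 0.0, 0.2, 0.5 versus a control
print(prob_select_best([0.0, 0.2, 0.5, 0.0], n1=50))  # about 0.93
```

Because the continuing arms are fixed in number, the funding-relevant total sample size is known at the design stage, unlike in the group-sequential comparator.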
A two-dimensional biased coin design for dual-agent dose-finding trials.
Sun, Zhichao; Braun, Thomas M
2015-12-01
Given the limited efficacy observed with single agents, there is growing interest in Phase I clinical trial designs that allow for identification of the maximum tolerated combination of two agents. Existing parametric designs may suffer from over- or under-parameterization. Thus, we have designed a nonparametric approach that can be easily understood and implemented for combination trials. We propose a two-stage adaptive biased coin design that extends existing methods for single-agent trials to dual-agent dose-finding trials. The basic idea of our design is to divide the entire trial into two stages and apply the biased coin design, with modification, in each stage. We compare the operating characteristics of our design to four competing parametric approaches via simulation in several numerical examples. Under all simulation scenarios we have examined, our method performs well in terms of identification of the maximum tolerated combination and allocation of patients relative to the performance of its competitors. In our design, stopping-rule criteria and the division of the total sample size between the two stages are context-dependent, and both need careful consideration before adopting our design in practice. Efficacy is not a part of the dose-assignment algorithm, nor used to define the maximum tolerated combination. Our design inherits the favorable statistical properties of the biased coin design, is competitive with existing designs, and promotes patient safety by limiting patient exposure to toxic combinations whenever possible. © The Author(s) 2015.
BLIND ordering of large-scale transcriptomic developmental timecourses.
Anavy, Leon; Levin, Michal; Khair, Sally; Nakanishi, Nagayasu; Fernandez-Valverde, Selene L; Degnan, Bernard M; Yanai, Itai
2014-03-01
RNA-Seq enables the efficient transcriptome sequencing of many samples from small amounts of material, but the analysis of these data remains challenging. In particular, in developmental studies, RNA-Seq is challenged by the morphological staging of samples, such as embryos, since these often lack clear markers at any particular stage. In such cases, the automatic identification of the stage of a sample would enable previously infeasible experimental designs. Here we present the 'basic linear index determination of transcriptomes' (BLIND) method for ordering samples comprising different developmental stages. The method is an implementation of a traveling salesman algorithm to order the transcriptomes according to their inter-relationships as defined by principal components analysis. To establish the direction of the ordered samples, we show that an appropriate indicator is the entropy of transcriptomic gene expression levels, which increases over developmental time. Using BLIND, we correctly recover the annotated order of previously published embryonic transcriptomic timecourses for frog, mosquito, fly and zebrafish. We further demonstrate the efficacy of BLIND by collecting 59 embryos of the sponge Amphimedon queenslandica and ordering their transcriptomes according to developmental stage. BLIND is thus useful in establishing the temporal order of samples within large datasets and is of particular relevance to the study of organisms with asynchronous development and when morphological staging is difficult.
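The core of the BLIND procedure — PCA projection, a shortest-path ordering, and an entropy-based orientation step — can be sketched in a few lines. The version below substitutes a greedy nearest-neighbour walk for a true traveling-salesman solver and assumes a nonnegative samples-by-genes expression matrix; it is a minimal illustration, not the authors' implementation.

```python
import numpy as np

def blind_order(expr, n_pcs=3):
    """Order samples along a developmental trajectory, BLIND-style:
    project the expression matrix (samples x genes) onto its leading
    principal components, thread a path through the samples with a
    greedy nearest-neighbour heuristic (a cheap stand-in for the
    traveling-salesman step in the paper), and orient the path so that
    expression entropy, which the authors report increases over
    developmental time, rises along it."""
    X = expr - expr.mean(axis=0)
    U, S, _ = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
    pcs = U[:, :n_pcs] * S[:n_pcs]

    # greedy nearest-neighbour path, starting from sample 0
    path = [0]
    unvisited = set(range(1, len(pcs)))
    while unvisited:
        last = pcs[path[-1]]
        nxt = min(unvisited, key=lambda j: np.linalg.norm(pcs[j] - last))
        path.append(nxt)
        unvisited.remove(nxt)

    def entropy(row):
        p = row / row.sum()
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())

    # flip the path if expression entropy decreases along it
    if entropy(expr[path[0]]) > entropy(expr[path[-1]]):
        path.reverse()
    return path
```

The entropy step matters because a path through transcriptome space fixes only the order of samples, not its direction; a quantity known to increase through development supplies the arrow.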
Gracco, Antonio; Bruno, Giovanni; De Stefani, Alberto; Siviero, Laura; Perri, Alessandro; Stellini, Edoardo
Recently a classification of patients' skeletal age based on phalanx maturation, the Middle Phalanx Maturation of the third finger (MPM) method, was suggested. The aim of this study is to evaluate whether there is a difference in MPM stage between the right and left hands. Two hundred fifty-four patients were obtained from the Complex Operating Unit of Orthodontics of Padua University Hospital. The required sample size, determined by appropriate statistical calculations, was 130 patients; it was decided to further double the sample size of a previous study to ensure a robust statistical analysis. Radiographs of the right and left hands were evaluated using the MPM method, and stages were compared using the right hand as the reference. The statistical analysis (Fisher exact test) was performed for the entire sample and by gender in order to compare the right- and left-hand stages. In MPS2, 6 out of 49 males (12.2%) and 7 out of 27 females (25.9%) showed MPS3 in the left hand (p-value < 0.05). In all other stages, total agreement (100%) was found. The authors confirm the use of the right hand as the reference. In patients with MPS2, an additional radiograph of the left hand can be taken in order to increase diagnostic accuracy. In all other stages further radiographs are not needed, as total agreement between the right and left hands was found.
In 't Veld, P H; van der Laak, L F J; van Zon, M; Biesta-Peters, E G
2018-04-12
A method for the quantification of the Bacillus cereus emetic toxin (cereulide) was developed and validated. The method principle is based on LC-MS, as this is the most sensitive and specific method for cereulide; the study design therefore differs from that of the microbiological methods validated under this mandate. As the method had to be developed, a two-stage validation study approach was used. The first stage (pre-study) focussed on the method's applicability and the laboratories' experience with it. Based on the outcome of the pre-study and comments received during voting at CEN and ISO level, a final method was agreed for use in the second stage, the (final) validation of the method. In the final (validation) study, samples of cooked rice (both artificially contaminated with cereulide and contaminated with B. cereus for production of cereulide in the rice) and 6 other food matrices (fried rice dish, cream pastry with chocolate, hotdog sausage, mini pancakes, vanilla custard and infant formula) were used. All these samples were spiked by the participating laboratories using standard solutions of cereulide supplied by the organising laboratory. The results of the study indicate that the method is fit for purpose. Repeatability values of 0.6 μg/kg were obtained at the low spike level (ca. 5 μg/kg) and 7 to 9.6 μg/kg at the high spike level (ca. 75 μg/kg). Reproducibility ranged from 0.6 to 0.9 μg/kg at the low spike level and from 8.7 to 14.5 μg/kg at the high spike level. Recovery from the spiked samples ranged from 96.5% for mini pancakes to 99.3% for the fried rice dish. Copyright © 2018. Published by Elsevier B.V.
Hierarchical screening for multiple mental disorders.
Batterham, Philip J; Calear, Alison L; Sunderland, Matthew; Carragher, Natacha; Christensen, Helen; Mackinnon, Andrew J
2013-10-01
There is a need for brief, accurate screening when assessing multiple mental disorders. Two-stage hierarchical screening, consisting of brief pre-screening followed by a battery of disorder-specific scales for those who meet diagnostic criteria, may increase the efficiency of screening without sacrificing precision. This study tested whether more efficient screening could be gained using two-stage hierarchical screening than by administering multiple separate tests. Two Australian adult samples (N=1990) with high rates of psychopathology were recruited using Facebook advertising to examine four methods of hierarchical screening for four mental disorders: major depressive disorder, generalised anxiety disorder, panic disorder and social phobia. Using K6 scores to determine whether full screening was required did not increase screening efficiency. However, pre-screening based on two decision tree approaches or item gating led to considerable reductions in the mean number of items presented per disorder screened, with estimated item reductions of up to 54%. The sensitivity of these hierarchical methods approached 100% relative to the full screening battery. Further testing of the hierarchical screening approach based on clinical criteria and in other samples is warranted. The results demonstrate that a two-phase hierarchical approach to screening multiple mental disorders leads to considerable efficiency gains without reducing accuracy. Screening programs should take advantage of pre-screeners based on gating items or decision trees to reduce the burden on respondents. © 2013 Elsevier B.V. All rights reserved.
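The arithmetic behind such item reductions is simple: everyone answers a short gate, and only gate-positive respondents receive the full disorder scale. The sketch below computes the mean item burden under illustrative numbers (a 2-item gate, a 20-item full scale); these counts are assumptions for illustration, not the instruments used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_items_per_disorder(gate_pos_rate, gate_items=2, full_items=20,
                            n_respondents=2000):
    """Mean item burden under a gated two-stage screen: every respondent
    answers the gate_items pre-screen items, and only those flagged by
    the gate receive the full full_items disorder scale.  All numbers
    here are illustrative, not taken from the study."""
    flagged = rng.random(n_respondents) < gate_pos_rate
    total_items = n_respondents * gate_items + flagged.sum() * full_items
    return total_items / n_respondents

# if ~30% of respondents trip the gate, expected burden is about
# 2 + 0.30 * 20 = 8 items instead of 20 per disorder screened
print(mean_items_per_disorder(0.30))
```

The trade-off is visible in the formula: gating pays off whenever the gate-positive rate times the full-scale length, plus the gate itself, is below the full-scale length.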
Online Low-Rank Representation Learning for Joint Multi-subspace Recovery and Clustering.
Li, Bo; Liu, Risheng; Cao, Junjie; Zhang, Jie; Lai, Yu-Kun; Liua, Xiuping
2017-10-06
Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, the global mechanism also means that the LRR model is not suitable for handling large-scale data or dynamic data. For large-scale data, the LRR method suffers from high time complexity, and for dynamic data, it has to recompute a complex rank minimization for the entire data set whenever new samples are dynamically added, making it prohibitively expensive. Existing attempts to online LRR either take a stochastic approach or build the representation purely based on a small sample set and treat new input as out-of-sample data. The former often requires multiple runs for good performance and thus takes longer to run, and the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm is composed of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally by utilizing the learned subspace structure, and the low-rank representation matrix can also be incrementally solved by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further perform theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical tasks of subspace recovery and subspace clustering show that the proposed algorithm performs comparably or better than batch methods including the batch LRR, and significantly outperforms state-of-the-art online methods.
A Bayesian-frequentist two-stage single-arm phase II clinical trial design.
Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen
2012-08-30
It is well known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To inherit the better properties of these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and early rejection of the null hypothesis (H0). Measures of the design properties (for example, the probability of early trial termination and the expected sample size) under both frequentist and Bayesian settings are derived. Moreover, under the Bayesian setting, the upper and lower boundaries are determined by the predictive probability of a successful trial outcome. Given a beta prior and a sample size for stage I, based on the marginal distribution of the responses at stage I, we derived Bayesian Type I and Type II error rates. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.
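The predictive-probability machinery behind such interim boundaries can be sketched with a beta-binomial computation. The snippet below assumes a Beta(a, b) prior and a simple "total responses ≥ r" success criterion; both are illustrative stand-ins for the paper's actual calibration, not its boundaries.

```python
from scipy.stats import betabinom

def predictive_probability(x1, n1, n2, r, a=1.0, b=1.0):
    """Predictive probability of final success (total responses >= r)
    after n2 further patients, given x1 responses among n1 at stage 1
    and a Beta(a, b) prior on the response rate.  The Beta(1, 1)
    default and the success rule are illustrative assumptions."""
    # the Beta(a + x1, b + n1 - x1) posterior mixed over future binomial
    # outcomes gives a beta-binomial predictive distribution for x2
    pred = betabinom(n2, a + x1, b + n1 - x1)
    needed = max(r - x1, 0)
    return float(1.0 - pred.cdf(needed - 1))

# 10 of 10 stage-1 patients responded; success needs 11 of 20 overall
print(predictive_probability(10, 10, 10, 11))
```

Thresholding this quantity — say, accept early above 0.90 and reject early below 0.10 — yields upper and lower stage-1 boundaries of the kind the abstract describes.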
Comparison of methods for sampling plant bugs on cotton in South Texas (2010)
USDA-ARS's Scientific Manuscript database
A total of 26 cotton fields were sampled by experienced and inexperienced samplers at 3 growth stages using 5 methods to compare the most efficient and accurate method for sampling plant bugs in cotton. Each of the 5 methods had its own distinct advantages and disadvantages as a sampling method (too...
Trakinienė, Giedrė; Smailienė, Dalia; Kučiauskienė, Ainė
2016-08-01
The objective of this study was to evaluate whether the calcification stages of the maxillary canine, mandibular second molar, and mandibular third molar can be used for assessment of growth phase. The study group consisted of 274 subjects. Pre-treatment digital panoramic and lateral cephalometric radiographs of the patients were analysed. The patients' ages ranged from 7 to 19 years. The right maxillary canine, mandibular second molar, and mandibular third molar were used as the sample. Tooth mineralization was assessed using a modification of the Gleiser and Hunt method. Skeletal maturation was assessed by the cervical vertebrae maturation (CVM) method. A significant association was found between CVM stage 2 and maxillary canine (UC) stage 4, mandibular second molar (LM2) stage 4, and mandibular third molar (LM3) stage 1. CVM stage 3 corresponded with UC stage 5, LM2 stage 5, and LM3 stage 2. CVM stage 4 matched with UC stage 5, LM2 stage 6, and LM3 stage 3. The highest correlations between CVM and calcification stages were for the maxillary canine (r = 0.812, P < 0.01) and mandibular second molar (r = 0.824, P < 0.01). A limitation of our study was that the sample was not very large and within-group variability was high, so further statistical parameters could not be checked. The calcification stages of UC, LM2, and LM3 as indicators of skeletal maturity could be used clinically with caution, until this method is verified with a larger sample group. © The Author 2015. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Comparison of the King’s and MiToS staging systems for ALS
Fang, Ton; Al Khleifat, Ahmad; Stahl, Daniel R; Lazo La Torre, Claudia; Murphy, Caroline; Young, Carolyn; Shaw, Pamela J; Leigh, P Nigel; Al-Chalabi, Ammar
2017-01-01
Abstract Objective: To investigate and compare two ALS staging systems, King’s clinical staging and Milano-Torino (MiToS) functional staging, using data from the LiCALS phase III clinical trial (EudraCT 2008-006891-31). Methods: Disease stage was derived retrospectively for each system from the ALS Functional Rating Scale-Revised subscores using standard methods. The two staging methods were then compared for timing of stages using box plots, correspondence using chi-square tests, agreement using a linearly weighted kappa coefficient and concordance using Spearman’s rank correlation. Results: For both systems, progressively higher stages occurred at progressively later proportions of the disease course, but the distribution differed between the two methods. King’s stage 3 corresponded to MiToS stage 1 most frequently, with earlier King’s stages 1 and 2 largely corresponding to MiToS stage 0 or 1. The Spearman correlation was 0.54. There was fair agreement between the two systems with kappa coefficient of 0.21. Conclusion: The distribution of timings shows that the two systems are complementary, with King’s staging showing greatest resolution in early to mid-disease corresponding to clinical or disease burden, and MiToS staging having higher resolution for late disease, corresponding to functional involvement. We therefore propose using both staging systems when describing ALS. PMID:28054828
NASA Astrophysics Data System (ADS)
Brackmann, Christian; Gabrielsson, Britt; Svedberg, Fredrik; Holmäng, Agneta; Sandberg, Ann-Sofie; Enejder, Annika
2010-11-01
Hallmarks of high-fat Western diet intake, such as excessive lipid accumulation in skeletal muscle and liver as well as liver fibrosis, are investigated in tissues from mice using nonlinear microscopy, second harmonic generation (SHG), and coherent anti-Stokes Raman scattering (CARS), supported by conventional analysis methods. Two aspects are presented; intake of standard chow versus Western diet, and a comparison between two high-fat Western diets of different polyunsaturated lipid content. CARS microscopy images of intramyocellular lipid droplets in muscle tissue show an increased amount for Western diet compared to standard diet samples. Even stronger diet impact is found for liver samples, where combined CARS and SHG microscopy visualize clear differences in lipid content and collagen fiber development, the latter indicating nonalcoholic fatty liver disease (NAFLD) and steatohepatitis induced at a relatively early stage for Western diet. Characteristic for NAFLD, the fibrous tissue-containing lipids accumulate in larger structures. This is also observed in CARS images of liver samples from two Western-type diets of different polyunsaturated lipid contents. In summary, nonlinear microscopy has strong potential (further promoted by technical advances toward clinical use) for detection and characterization of steatohepatitis already in its early stages.
Guided transect sampling - a new design combining prior information and field surveying
Anna Ringvall; Goran Stahl; Tomas Lamas
2000-01-01
Guided transect sampling is a two-stage sampling design in which prior information is used to guide the field survey in the second stage. In the first stage, broad strips are randomly selected and divided into grid-cells. For each cell a covariate value is estimated from remote sensing data, for example. The covariate is the basis for subsampling of a transect through...
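A toy version of the second-stage guidance can be sketched as a covariate-weighted walk over grid cells. The code below caricatures the design in one dimension: strips are reduced to randomly drawn starting columns, and a transect steps row by row toward high-covariate cells. The grid, the three-cell neighbourhood rule, and the proportional selection probabilities are all illustrative assumptions, not the estimators or guidance rules of the actual design.

```python
import numpy as np

rng = np.random.default_rng(4)

def guided_transect(covariate, n_strips=2):
    """Toy caricature of guided transect sampling on an n_rows x n_cols
    grid: stage 1 draws starting columns ("strips") at random; stage 2
    walks one transect down each strip, at every row moving to the same
    or an adjacent column with probability proportional to the cell's
    covariate value (e.g. a remote-sensing estimate)."""
    n_rows, n_cols = covariate.shape
    starts = rng.choice(n_cols, size=n_strips, replace=False)
    transects = []
    for col in starts:
        col = int(col)
        path = []
        for r in range(n_rows):
            # candidate cells: stay, or move one column left or right
            cand = [c for c in (col - 1, col, col + 1) if 0 <= c < n_cols]
            w = covariate[r, cand].astype(float)
            p = w / w.sum() if w.sum() > 0 else np.full(len(cand), 1.0 / len(cand))
            col = int(rng.choice(cand, p=p))
            path.append((r, col))
        transects.append(path)
    return transects
```

The point of the guidance is visible even in this caricature: cells with high prior covariate values are visited more often, concentrating field effort where the auxiliary data suggest the population is.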
NASA Astrophysics Data System (ADS)
McKean, John R.; Johnson, Donn; Taylor, R. Garth
2003-04-01
An alternate travel cost model is applied to an on-site sample to estimate the value of flat water recreation on the impounded lower Snake River. Four contiguous reservoirs would be eliminated if the dams are breached to protect endangered Pacific salmon and steelhead trout. The empirical method applies truncated negative binomial regression with adjustment for endogenous stratification. The two-stage decision model assumes that recreationists allocate their time among work and leisure prior to deciding among consumer goods. The allocation of time and money among goods in the second stage is conditional on the predetermined work time and income. The second stage is a disequilibrium labor market which also applies if employers set work hours or if recreationists are not in the labor force. When work time is either predetermined, fixed by contract, or nonexistent, recreationists must consider separate prices and budgets for time and money.
Optimality, sample size, and power calculations for the sequential parallel comparison design.
Ivanova, Anastasia; Qaqish, Bahjat; Schoenfeld, David A
2011-10-15
The sequential parallel comparison design (SPCD) has been proposed to increase the likelihood of success of clinical trials in therapeutic areas where high placebo response is a concern. The trial is run in two stages, and subjects are randomized into three groups: (i) placebo in both stages; (ii) placebo in the first stage and drug in the second stage; and (iii) drug in both stages. We consider the case of binary response data (response/no response). In the SPCD, all first-stage and second-stage data from placebo subjects who failed to respond in the first stage of the trial are utilized in the efficacy analysis. We develop 1- and 2-degree-of-freedom score tests for treatment effect in the SPCD. We give formulae for asymptotic power and for sample size computations and evaluate their accuracy via simulation studies. We compute the optimal allocation ratio between drug and placebo in stage 1 for the SPCD to determine from a theoretical viewpoint whether a single-stage design, a two-stage design with placebo only in the first stage, or a two-stage design with drug in both stages is the best design for a given set of response rates. As response rates are not known before the trial, a two-stage approach with allocation to active drug in both stages is a robust design choice. Copyright © 2011 John Wiley & Sons, Ltd.
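A rough feel for SPCD operating characteristics can be had from simulation. The sketch below uses a simple weighted-Z combination of the two stage-wise difference-in-proportion statistics in place of the score tests developed in the paper; the weight, 1:1 allocation, and response rates are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def two_prop_z(y1, n1, y2, n2):
    """Pooled two-sample z statistic for a difference in proportions."""
    if n1 == 0 or n2 == 0:
        return 0.0
    p1, p2 = y1 / n1, y2 / n2
    pbar = (y1 + y2) / (n1 + n2)
    se = np.sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se if se > 0 else 0.0

def spcd_power(p_drug, p_placebo, n, w=0.6, alpha=0.05, n_sims=5000):
    """Rough one-sided power simulation for an SPCD trial with binary
    response.  Stage 1 randomizes n subjects 1:1 drug:placebo; stage-1
    placebo non-responders are re-randomized 1:1 in stage 2.  The two
    stage-wise z statistics are combined as w*z1 + sqrt(1-w^2)*z2, a
    simple weighted-Z stand-in for the score tests in the paper."""
    n_d1 = n // 2
    n_p1 = n - n_d1
    crit = norm.ppf(1 - alpha)
    hits = 0
    for _ in range(n_sims):
        y_d1 = rng.binomial(n_d1, p_drug)
        y_p1 = rng.binomial(n_p1, p_placebo)
        z1 = two_prop_z(y_d1, n_d1, y_p1, n_p1)
        m = n_p1 - y_p1            # placebo non-responders enter stage 2
        m_d, m_p = m // 2, m - m // 2
        z2 = two_prop_z(rng.binomial(m_d, p_drug), m_d,
                        rng.binomial(m_p, p_placebo), m_p)
        hits += (w * z1 + np.sqrt(1 - w * w) * z2) > crit
    return hits / n_sims

print(spcd_power(0.5, 0.3, n=100))  # power under a 0.5 vs 0.3 alternative
```

Varying the stage-1 allocation and the weight in such a simulation is exactly the kind of exercise the paper formalizes analytically when deriving the optimal allocation ratio.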
THTM: A template matching algorithm based on HOG descriptor and two-stage matching
NASA Astrophysics Data System (ADS)
Jiang, Yuanjie; Ruan, Li; Xiao, Limin; Liu, Xi; Yuan, Feng; Wang, Haitao
2018-04-01
We propose a novel method for template matching named THTM, a template matching algorithm based on the HOG (histogram of oriented gradients) descriptor and two-stage matching. The fast construction of HOG descriptors and the two-stage matching jointly lead to a highly accurate approach. Our contribution is to apply HOG successfully to template matching and to introduce a two-stage matching procedure, whereas traditional methods match only once; this is prominent in improving matching accuracy based on the HOG descriptor. We analyze the key features of THTM and compare it to other commonly used alternatives on challenging real-world datasets. Experiments show that our method outperforms the comparison methods.
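A minimal coarse-to-fine version of such a scheme can be sketched as follows. The code uses a single-cell gradient-orientation histogram as a stand-in for a full HOG descriptor and a stride-then-refine search for the two stages; it illustrates the idea only and is not the authors' THTM implementation.

```python
import numpy as np

def hog_descriptor(patch, bins=9):
    """A minimal gradient-orientation histogram (the whole patch as one
    cell), standing in for a full block-normalized HOG descriptor."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def two_stage_match(image, template, coarse_stride=4):
    """Stage 1: scan the image on a coarse grid and score windows by
    descriptor distance.  Stage 2: rescan densely in a small
    neighbourhood of the stage-1 winner.  Returns (row, col)."""
    th, tw = template.shape
    t_desc = hog_descriptor(template)

    def score(r, c):
        return np.linalg.norm(hog_descriptor(image[r:r+th, c:c+tw]) - t_desc)

    H, W = image.shape
    # stage 1: coarse grid with a large stride
    coarse = [(score(r, c), r, c)
              for r in range(0, H - th + 1, coarse_stride)
              for c in range(0, W - tw + 1, coarse_stride)]
    _, r0, c0 = min(coarse)
    # stage 2: dense search around the coarse winner
    fine = [(score(r, c), r, c)
            for r in range(max(0, r0 - coarse_stride),
                           min(H - th, r0 + coarse_stride) + 1)
            for c in range(max(0, c0 - coarse_stride),
                           min(W - tw, c0 + coarse_stride) + 1)]
    _, r1, c1 = min(fine)
    return r1, c1
```

Stage 1 prunes most of the search grid cheaply; stage 2 pays the dense-scan cost only in a small neighbourhood of the best coarse candidate.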
Gravinese, Philip M.; Flannery, Jennifer A.; Toth, Lauren T.
2016-11-23
The larvae of the Florida stone crab, Menippe mercenaria, migrate through a variety of habitats as they develop and, therefore, experience a broad range of environmental conditions through ontogeny. Environmental variability experienced by the larvae may result in distinct elemental signatures within the exoskeletons, which could provide a tool for tracking the environmental history of larval stone crab populations. A method was developed to examine trace-element ratios, specifically magnesium-to-calcium (Mg/Ca) and strontium-to-calcium (Sr/Ca) ratios, in the exoskeletons of M. mercenaria larvae. Two developmental stages of stone crab larvae were analyzed: stage III and stage V. Specimens were reared in a laboratory environment under stable conditions to quantify the average ratios of Mg/Ca and Sr/Ca of larval stone crab exoskeletons and to determine if the ratios differed through ontogeny. The elemental compositions (Ca, Mg, and Sr) in samples of stage III larvae (n = 50 per sample) from 11 different broods (mean Sr/Ca = 5.916 ± 0.161 millimole per mole [mmol mol−1]; mean Mg/Ca = 218.275 ± 59.957 mmol mol−1) and stage V larvae (n = 10 per sample) from 12 different broods (mean Sr/Ca = 6.110 ± 0.300 mmol mol−1; mean Mg/Ca = 267.081 ± 67.211 mmol mol−1) were measured using inductively coupled plasma optical emission spectrometry (ICP–OES). The ratio of Sr/Ca significantly increased from stage III to stage V larvae, suggesting an ontogenic shift in Sr/Ca ratios between larval stages. The ratio of Mg/Ca did not change significantly between larval stages, but variability among broods was high. The method used to examine the trace-element ratios provided robust, highly reproducible estimates of Sr/Ca and Mg/Ca ratios in the larvae of M. mercenaria, demonstrating that ICP–OES can be used to determine the trace-element composition of chitinous organisms like the Florida stone crab.
Residents Living in Residential Care Facilities: United States, 2010
... NSRCF used a stratified two-stage probability sample design. The first stage was the selection of RCFs ... was 99%. A detailed description of NSRCF sampling design, data collection, and procedures is provided both in ...
NASA Astrophysics Data System (ADS)
Powell, P. E.
Educators have recently come to consider inquiry-based instruction a more effective method of instruction than didactic instruction. Experience-based learning theory suggests that student performance is linked to teaching method. However, research on inquiry teaching and its effectiveness in preparing students to perform well on standardized tests is limited. The purpose of the study was to investigate whether one of these two teaching methodologies was more effective in increasing student performance on standardized science tests. The quasi-experimental quantitative study comprised two stages. Stage 1 used a survey to identify the teaching methods of a convenience sample of 57 teacher participants and determined the level of inquiry used in instruction, placing participants into instructional groups (the independent variable). Stage 2 used analysis of covariance (ANCOVA) to compare posttest scores on a standardized exam by teaching method. Additional analyses examined differences in science achievement by ethnicity, gender, and socioeconomic status for each teaching methodology. Results demonstrated a statistically significant gain in test scores for students taught using inquiry-based instruction. Subpopulation analyses indicated that all groups showed improved mean standardized test scores except African American students. The findings benefit teachers and students by presenting data supporting a method of content delivery that increases teacher efficacy and produces students with a greater cognition of science content, meeting the school's mission and goals.
Murcia-Aranguren, Martha I; Gómez-Marin, Jorge E; Alvarado, Fernando S; Bustillo, José G; de Mendivelson, Ellen; Gómez, Bertha; León, Clara I; Triana, William A; Vargas, Erwing A; Rodríguez, Edgar
2001-01-01
Background The prevalence of infections by Mycobacterium tuberculosis and non-tuberculous Mycobacterium species in the HIV-infected patient population in Colombia was uncertain despite some pilot studies. We determined the frequency of isolation of Mycobacterium tuberculosis and of non-tuberculous Mycobacterium species in diverse body fluids of HIV-infected patients in Bogota, Colombia. Methods Patients who attended the three major HIV/AIDS healthcare centres in Bogota were prospectively studied over a six-month period. A total of 286 patients were enrolled, 20% of whom were hospitalized at some point during the study. Sixty-four percent (64%) were classified as stage C, 25% as stage B, and 11% as stage A (CDC staging system, 1993). A total of 1,622 clinical samples (mostly paired samples of blood, sputum, stool, and urine) were processed for acid-fast bacilli (AFB) stain and culture. Results Overall, 43 of 1,622 cultures (2.6%) were positive for mycobacteria. Twenty-two sputum samples were positive. Four patients were diagnosed with M. tuberculosis (1.4%). All isolates of M. tuberculosis were sensitive to common anti-tuberculous drugs. M. avium was isolated in thirteen patients (4.5%), but in only three of them did the cultures originate from blood; the other isolates were obtained from stool, urine or sputum samples. In three cases, direct AFB smears of blood were positive. Two patients presented simultaneously with M. tuberculosis and M. avium. Conclusions Non-tuberculous Mycobacterium infections are frequent in HIV-infected patients in Bogota. The diagnostic sensitivity for infection with tuberculous and non-tuberculous mycobacteria can be increased when diverse body fluids are processed from each patient. PMID:11722797
Aguirre, C; Olivares, N; Luppichini, P; Hinrichsen, P
2015-02-01
A PCR-based method was developed to identify Naupactus cervinus (Boheman) and Naupactus xanthographus (Germar), two curculionids affecting the citrus industry in Chile. The quarantine status of these two species depends on the country to which fruits are exported. This identification method was developed because it is not possible to discriminate between these two species at the egg stage. The method is based on the species-specific amplification of sequences of internal transcribed spacers, for which we cloned and sequenced these genome fragments from each species. We designed an identification system based on two duplex-PCR reactions. Each one contains the species-specific primer set and a second generic primer set that amplifies a short 18S region common to coleopterans, to avoid false negatives. The marker system is able to differentiate each Naupactus species at any life stage, with a diagnostic sensitivity down to 0.045 ng of genomic DNA. This PCR kit was validated with samples collected from different citrus production areas throughout Chile and showed 100% accuracy in differentiating the two Naupactus species.
Comparison of variance estimators for meta-analysis of instrumental variable estimates
Schmidt, AF; Hingorani, AD; Jefferis, BJ; White, J; Groenwold, RHH; Dudbridge, F
2016-01-01
Abstract Background: Mendelian randomization studies perform instrumental variable (IV) analysis using genetic IVs. Results of individual Mendelian randomization studies can be pooled through meta-analysis. We explored how different variance estimators influence the meta-analysed IV estimate. Methods: Two versions of the delta method (IV before or after pooling), four bootstrap estimators, a jack-knife estimator and a heteroscedasticity-consistent (HC) variance estimator were compared using simulation. Two types of meta-analyses were compared, a two-stage meta-analysis pooling results, and a one-stage meta-analysis pooling datasets. Results: Using a two-stage meta-analysis, coverage of the point estimate using bootstrapped estimators deviated from nominal levels at weak instrument settings and/or outcome probabilities ≤ 0.10. The jack-knife estimator was the least biased resampling method, the HC estimator often failed at outcome probabilities ≤ 0.50 and overall the delta method estimators were the least biased. In the presence of between-study heterogeneity, the delta method before meta-analysis performed best. Using a one-stage meta-analysis all methods performed equally well and better than two-stage meta-analysis of greater or equal size. Conclusions: In the presence of between-study heterogeneity, two-stage meta-analyses should preferentially use the delta method before meta-analysis. Weak instrument bias can be reduced by performing a one-stage meta-analysis. PMID:27591262
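In a two-stage meta-analysis as described above, each study contributes an IV point estimate and a variance (e.g. from the delta method), and the estimates are then pooled by inverse-variance weighting. A minimal fixed-effect sketch with hypothetical study results (the paper's simulations also cover one-stage pooling and between-study heterogeneity, which this sketch omits):

```python
import math

def pool_fixed_effect(estimates, ses):
    """Inverse-variance (fixed-effect) pooling of per-study IV estimates.

    estimates: per-study IV point estimates (e.g. causal effect sizes)
    ses: their standard errors (e.g. obtained via the delta method)
    Returns (pooled estimate, pooled standard error).
    """
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical Mendelian randomization studies
est, se = pool_fixed_effect([0.12, 0.20, 0.15], [0.05, 0.08, 0.06])
```

The pooled estimate always lies between the smallest and largest study estimates, and its standard error is smaller than that of any single study.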
Comparison of the King's and MiToS staging systems for ALS.
Fang, Ton; Al Khleifat, Ahmad; Stahl, Daniel R; Lazo La Torre, Claudia; Murphy, Caroline; Young, Carolyn; Shaw, Pamela J; Leigh, P Nigel; Al-Chalabi, Ammar
2017-05-01
To investigate and compare two ALS staging systems, King's clinical staging and Milano-Torino (MiToS) functional staging, using data from the LiCALS phase III clinical trial (EudraCT 2008-006891-31). Disease stage was derived retrospectively for each system from the ALS Functional Rating Scale-Revised subscores using standard methods. The two staging methods were then compared for timing of stages using box plots, correspondence using chi-square tests, agreement using a linearly weighted kappa coefficient and concordance using Spearman's rank correlation. For both systems, progressively higher stages occurred at progressively later proportions of the disease course, but the distribution differed between the two methods. King's stage 3 corresponded to MiToS stage 1 most frequently, with earlier King's stages 1 and 2 largely corresponding to MiToS stage 0 or 1. The Spearman correlation was 0.54. There was fair agreement between the two systems, with a kappa coefficient of 0.21. The distribution of timings shows that the two systems are complementary: King's staging has the greatest resolution in early to mid disease, corresponding to clinical and disease burden, while MiToS staging has higher resolution in late disease, corresponding to functional involvement. We therefore propose using both staging systems when describing ALS.
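The linearly weighted kappa reported above penalizes disagreements in proportion to how many stages apart the two systems place a patient. A small stdlib sketch with hypothetical stage codes (not trial data):

```python
def weighted_kappa(a, b, n_cat):
    """Linearly weighted Cohen's kappa for two ordinal ratings.

    a, b: lists of integer stage codes in 0..n_cat-1 (one pair per patient)
    """
    n = len(a)
    # observed joint distribution and marginal distributions
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for i, j in zip(a, b):
        obs[i][j] += 1.0 / n
    pa = [sum(1.0 for x in a if x == i) / n for i in range(n_cat)]
    pb = [sum(1.0 for x in b if x == j) / n for j in range(n_cat)]
    # linear disagreement weights: proportional to |stage difference|
    w = [[abs(i - j) / (n_cat - 1) for j in range(n_cat)] for i in range(n_cat)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(n_cat) for j in range(n_cat))
    d_exp = sum(w[i][j] * pa[i] * pb[j] for i in range(n_cat) for j in range(n_cat))
    return 1.0 - d_obs / d_exp

# Hypothetical King's vs MiToS stage codes for eight patients
kings = [0, 1, 1, 2, 2, 3, 3, 4]
mitos = [0, 0, 1, 1, 1, 2, 3, 4]
k = weighted_kappa(kings, mitos, 5)
```

Perfect agreement yields 1, chance-level agreement 0; one-stage disagreements cost far less than large ones, which suits ordered staging scales.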
Assessing Compliance-Effect Bias in the Two Stage Least Squares Estimator
ERIC Educational Resources Information Center
Reardon, Sean; Unlu, Fatih; Zhu, Pei; Bloom, Howard
2011-01-01
The proposed paper studies the bias in the two-stage least squares, or 2SLS, estimator that is caused by the compliance-effect covariance (hereafter, the compliance-effect bias). It starts by deriving the formula for the bias in an infinite sample (i.e., in the absence of finite sample bias) under different circumstances. Specifically, it…
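With a single instrument and a single endogenous regressor, the 2SLS estimator reduces to the Wald ratio cov(Z, Y)/cov(Z, X). A simulation sketch with hypothetical parameters; note the paper's subject, the compliance-effect covariance, is deliberately absent here because this toy model assumes a constant treatment effect:

```python
import random

def two_stage_least_squares(z, x, y):
    """2SLS with one instrument and one endogenous regressor.

    With a single instrument the 2SLS coefficient equals the Wald ratio
    cov(z, y) / cov(z, x): the first stage regresses x on z, the second
    regresses y on the first-stage fitted values.
    """
    n = len(z)
    mz, mx, my = sum(z) / n, sum(x) / n, sum(y) / n
    cov_zy = sum((a - mz) * (b - my) for a, b in zip(z, y)) / n
    cov_zx = sum((a - mz) * (b - mx) for a, b in zip(z, x)) / n
    return cov_zy / cov_zx

# Simulated randomized encouragement design with partial compliance
random.seed(0)
n = 20000
z = [random.randint(0, 1) for _ in range(n)]            # random assignment
u = [random.gauss(0, 1) for _ in range(n)]              # unobserved confounder
x = [1 if (zi and random.random() < 0.7) or ui > 1.5 else 0
     for zi, ui in zip(z, u)]                           # treatment received
y = [2.0 * xi + ui + random.gauss(0, 1) for xi, ui in zip(x, u)]
beta = two_stage_least_squares(z, x, y)  # should be close to the true effect 2.0
```

A naive regression of y on x would be biased upward by the confounder u; the instrument recovers the causal coefficient.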
Molins, Claudia R.; Sexton, Christopher; Young, John W.; Ashton, Laura V.; Pappert, Ryan; Beard, Charles B.
2014-01-01
Serological assays and a two-tiered test algorithm are recommended for laboratory confirmation of Lyme disease. In the United States, the sensitivity of two-tiered testing using commercially available serology-based assays is dependent on the stage of infection and ranges from 30% in the early localized disease stage to near 100% in late-stage disease. Other variables, including subjectivity in reading Western blots, compliance with two-tiered recommendations, use of different first- and second-tier test combinations, and use of different test samples, all contribute to variation in two-tiered test performance. The availability and use of sample sets from well-characterized Lyme disease patients and controls are needed to better assess the performance of existing tests and for development of improved assays. To address this need, the Centers for Disease Control and Prevention and the National Institutes of Health prospectively collected sera from patients at all stages of Lyme disease, as well as healthy donors and patients with look-alike diseases. Patients and healthy controls were recruited using strict inclusion and exclusion criteria. Samples from all included patients were retrospectively characterized by two-tiered testing. The results from two-tiered testing corroborated the need for novel and improved diagnostics, particularly for laboratory diagnosis of earlier stages of infection. Furthermore, the two-tiered results provide a baseline with samples from well-characterized patients that can be used in comparing the sensitivity and specificity of novel diagnostics. Panels of sera and accompanying clinical and laboratory testing results are now available to Lyme disease serological test users and researchers developing novel tests. PMID:25122862
ERIC Educational Resources Information Center
Sternberg, Kathleen J.; Lamb, Michael E.; Guterman, Eva; Abbott, Craig B.
2006-01-01
Objectives: To examine the effects of different forms of family violence at two developmental stages by assessing a sample of 110 Israeli children, drawn from the case files of Israeli family service agencies, studied longitudinally in both middle childhood and adolescence. Methods: Information about the children's adjustment was obtained from…
Calibrationless parallel magnetic resonance imaging: a joint sparsity model.
Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab
2013-12-05
State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity maps for SENSE and SMASH, and the interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we propose a parallel MRI technique that does not require any calibration but yields reconstruction results on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis and synthesis prior joint-sparsity problems, and this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain dataset and an eight-channel Shepp-Logan phantom. Two sampling methods were used: variable-density random sampling and non-Cartesian radial sampling. An acceleration factor of 4 was used for the brain data and of 6 for the phantom. The reconstruction results were quantitatively evaluated using the normalised mean squared error (NMSE) between the reconstructed image and the original; the qualitative evaluation was based on the reconstructed images themselves. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.
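The normalised mean squared error used for the quantitative evaluation is typically the squared error energy divided by the energy of the reference image; the paper's exact normalisation may differ slightly. A minimal sketch:

```python
def nmse(reconstructed, original):
    """Normalised mean squared error between two images.

    Both images are flat sequences of pixel intensities; the squared
    error energy is normalised by the energy of the original image.
    """
    num = sum((r - o) ** 2 for r, o in zip(reconstructed, original))
    den = sum(o ** 2 for o in original)
    return num / den

# A perfect reconstruction scores 0; larger values mean worse recovery.
perfect = nmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])
noisy = nmse([1.1, 1.9, 3.2], [1.0, 2.0, 3.0])
```

Because the metric is scale-normalised, it allows fair comparison across images with different intensity ranges.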
Drying characteristics of whole Musa AA group ‘Kluai Leb Mu Nang’ using hot air and infrared vacuum
NASA Astrophysics Data System (ADS)
Kulketwong, C.; Thungsotanon, D.; Suwanpayak, N.
2017-06-01
Dried Musa AA group ‘Kluai Leb Mu Nang’ is a famous processed product of Chumphon province in southern Thailand. In this paper, we improved the quality of whole Musa AA group ‘Kluai Leb Mu Nang’ using a hot air and infrared vacuum (HA and infrared vacuum) drying method with two stages. The first stage uses hot air (HA) and hot air-infrared (HAI) drying to rapidly reduce the moisture content and the drying time at atmospheric pressure; in the second stage, the moisture content and color of the samples are controlled by HA and infrared vacuum drying. The experiment was evaluated in terms of firmness, color change, moisture content, vacuum pressure and energy consumption at various temperatures. The results showed that the suitable temperatures for the HAI stage and the HA and infrared vacuum drying stage were 70°C and 55°C, respectively, while the suitable vacuum pressure in the second stage was -0.4 bar. The samples were dried in a total of 28 h with a specific energy consumption of 13.83 MJ/kg (8.8 MJ/kg in stage 1 and 5.03 MJ/kg in stage 2). The color of the dried bananas at 21% (wb) moisture content, L* = 38.56, a* = 16.47 and b* = 16.3, was close to that of the goods from the local market, whereas their firmness, at 849.56 kN/m3, was more tender.
Girardoz, S; Tomov, R; Eschen, R; Quicke, D L J; Kenis, M
2007-10-01
The horse-chestnut leaf miner, Cameraria ohridella, is an invasive alien species defoliating horse-chestnut, a popular ornamental tree in Europe. This paper presents quantitative data on mortality factors affecting larvae and pupae of the leaf miner in Switzerland and Bulgaria, both in urban and forest environments. Two sampling methods were used and compared: a cohort method, consisting of the surveying of pre-selected mines throughout their development, and a grab sampling method, consisting of single sets of leaves collected and dissected at regular intervals. The total mortality per generation varied between 14 and 99%. Mortality was caused by a variety of factors, including parasitism, host feeding, predation by birds and arthropods, plant defence reaction, leaf senescence, intra-specific competition and inter-specific competition with a fungal disease. Significant interactions were found between mortality factors and sampling methods, countries, environments and generation. No mortality factor was dominant throughout the sites, generations and methods tested. Plant defence reactions constituted the main mortality factor for the first two larval stages, whereas predation by birds and arthropods and parasitism were more important in older larvae and pupae. Mortality caused by leaf senescence was often the dominant mortality factor in the last annual generation. The cohort method detected higher mortality rates than the grab sampling method. In particular, mortality by plant defence reaction and leaf senescence were better assessed using the cohort method, which is, therefore, recommended for life table studies on leaf miners.
NASA Astrophysics Data System (ADS)
Jia, Bing; Wei, Jian-Ping; Wen, Zhi-Hui; Wang, Yun-Gang; Jia, Lin-Xing
2017-11-01
In order to study the response characteristics of infrasound in coal samples under uniaxial loading, coal samples were collected from the GengCun mine. A coal rock stress loading device, an acoustic emission testing system and an infrasound testing system were used to record the infrasonic and acoustic emission signals during uniaxial loading. The recorded signals were analyzed by methods including wavelet filtering, threshold denoising and time-frequency analysis. The results showed that during loading the infrasonic wave changed in stages and could be divided into three: an initial stage with a certain number of infrasound events, a middle stage with few infrasound events, and a late stage with a gradual decrease. This was in good agreement with the changing characteristics of the acoustic emission. At the same time, the frequency of the infrasound was very low, so it can propagate over very long distances with little attenuation, and the infrasound characteristics before the destruction of the coal samples were obvious. A method of using infrasound characteristics to predict the destruction of coal samples is proposed, which is of great significance for predicting geological hazards in coal mines.
Wiele, Stephen M.; Torizzo, Margaret
2003-01-01
A method was developed to construct stage-discharge rating curves for the Colorado River in Grand Canyon, Arizona, using two stage-discharge pairs and a stage-normalized rating curve. Stage-discharge rating curves formulated with the stage-normalized curve method are compared to (1) stage-discharge rating curves for six temporary stage gages and two streamflow-gaging stations developed by combining stage records with modeled unsteady flow; (2) stage-discharge rating curves developed from stage records and discharge measurements at three streamflow-gaging stations; and (3) stages surveyed at known discharges at the Northern Arizona Sand Bar Studies sites. The stage-normalized curve method shows good agreement with field data when the discharges used in the construction of the rating curves are at least 200 cubic meters per second apart. Predictions of stage using the stage-normalized curve method are also compared to predictions of stage from a steady-flow model.
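Rating curves are commonly approximated by a power law Q = a·h^b, which two stage-discharge pairs determine exactly; the report's stage-normalized formulation may differ in detail, so the sketch below is only illustrative, with hypothetical pairs at least 200 cubic meters per second apart, per the guidance above:

```python
import math

def rating_curve_from_two_points(h1, q1, h2, q2):
    """Fit Q = a * h**b through two stage-discharge pairs.

    h1, h2: stages (m); q1, q2: discharges (m^3/s). Returns (a, b).
    Assumes a simple power-law rating curve, a common but not
    universal choice; the USGS report's normalization may differ.
    """
    b = math.log(q2 / q1) / math.log(h2 / h1)
    a = q1 / h1 ** b
    return a, b

# Hypothetical stage-discharge pairs
a, b = rating_curve_from_two_points(2.0, 250.0, 3.5, 600.0)
q_pred = a * 3.0 ** b  # interpolated discharge at a stage of 3.0 m
```

The fitted curve passes exactly through both input pairs, so interpolated discharges stay between the two measured values.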
Gholami-Motlagh, Farzaneh; Jouzi, Mina; Soleymani, Bahram
2016-01-01
Background: Anxiety is an inseparable part of our lives and a serious threat to health. Therefore, it is necessary to use certain strategies to prevent disorders caused by anxiety and adjust the vital signs of people. Swedish massage is one of the most recognized techniques for reducing anxiety. This study aims to compare the effects of two massage techniques on the vital signs and anxiety of healthy women. Materials and Methods: This quasi-experimental study with a two-group, crossover design was conducted on 20 healthy women who were selected by a simple sampling method and were randomly assigned to BNC (Back, Neck, and Chest) or LAF (Leg, Arm, and Face) groups. Massage therapy was carried out over a 14-week period (two 4-week massage therapy sessions and a 6-week washout stage). Gathered data were analyzed using a paired t-test with a significance level of P < 0.05. Results: Both BNC and LAF methods caused a significant decrease in systolic BP in the first stage (P = 0.02, 0.00); however, diastolic BP showed a significant decrease only in the BNC group (P = 0.01). The mean body temperature of the LAF group showed a significant decrease in the first stage (P = 0.03), and pulse and respiratory rate showed a significant decrease in both groups during the second stage (P = 0.00). In addition, anxiety scores showed no significant difference before and after massage therapy (P > 0.05). Conclusions: Massage therapy caused a decrease in systolic BP, pulse, and respiratory rate. It can be concluded that massage therapy was useful for decreasing the vital signs associated with anxiety in healthy women. PMID:27563325
Hunt, Alison C; Ek, Mattias; Schönbächler, Maria
2017-12-01
This study presents a new measurement procedure for the isolation of Pt from iron meteorite samples. The method also allows for the separation of Pd from the same sample aliquot. The separation entails a two-stage anion-exchange procedure. In the first stage, Pt and Pd are separated from each other and from major matrix constituents including Fe and Ni. In the second stage, Ir is reduced with ascorbic acid and eluted from the column before Pt collection. Platinum yields for the total procedure were typically 50-70%. After purification, high-precision Pt isotope determinations were performed by multi-collector ICP-MS. The precision of the new method was assessed using the IIAB iron meteorite North Chile. Replicate analyses of multiple digestions of this material yielded an intermediate precision for the measurement results of 0.73 for ε192Pt, 0.15 for ε194Pt and 0.09 for ε196Pt (2 standard deviations). The NIST SRM 3140 Pt solution reference material was passed through the measurement procedure and yielded an isotopic composition that is identical to the unprocessed Pt reference material. This indicates that the new technique is unbiased within the limit of the estimated uncertainties. Data for three iron meteorites support that Pt isotope variations in these samples are due to exposure to galactic cosmic rays in space.
Screening and staging for non-small cell lung cancer by serum laser Raman spectroscopy.
Wang, Hong; Zhang, Shaohong; Wan, Limei; Sun, Hong; Tan, Jie; Su, Qiucheng
2018-08-05
Lung cancer is the leading cause of cancer-related death worldwide. Current clinical screening methods to detect lung cancer are expensive and associated with many complications. Raman spectroscopy is a spectroscopic technique that offers a convenient method to gain molecular information about biological samples. In this study, we measured the serum Raman spectral intensity of healthy volunteers and patients with different stages of non-small cell lung cancer. The purpose of this study was to evaluate the application of serum laser Raman spectroscopy as a low-cost alternative method in the screening and staging of non-small cell lung cancer (NSCLC). The Raman spectra of the sera of peripheral venous blood were measured with a LabRAM HR 800 confocal Micro Raman spectrometer for individuals from five groups including 14 healthy volunteers (control group), 23 patients with stage I NSCLC (stage I group), 24 patients with stage II NSCLC (stage II group), 19 patients with stage III NSCLC (stage III group), 11 patients with stage IV NSCLC (stage IV group). Each serum sample was measured 3 times at different spots and the average spectra represented the signal of Raman spectra in each case. The Raman spectrum signal data of the five groups were statistically analyzed by analysis of variance (ANOVA), principal component analysis (PCA), linear discriminant analysis (LDA), and cross-validation. Raman spectral intensity was sequentially reduced in serum samples from control group, stage I group, stage II group and stage III/IV group. The strongest peak intensity was observed in the control group, and the weakest one was found in the stage III/IV group at bands of 848 cm⁻¹, 999 cm⁻¹, 1152 cm⁻¹, 1446 cm⁻¹ and 1658 cm⁻¹ (P < 0.05).
Linear discriminant analysis showed that the sensitivity to identify healthy people, stage I, stage II, and stage III/IV NSCLC was 86%, 65%, 75%, and 87%, respectively; the specificity was 95%, 94%, 88%, and 93%, respectively; and the overall accuracy rate was 92% (71/77). Laser Raman spectroscopy can thus effectively identify patients with stage I, stage II or stage III/IV non-small cell lung cancer using patient serum samples.
NASA Astrophysics Data System (ADS)
Murphy, James; Jones, Phil; Hill, Steve J.
1996-12-01
A simple and accurate method has been developed for the determination of total mercury in environmental and biological samples. The method utilises an off-line microwave digestion stage followed by analysis using a flow injection system with detection by cold vapour atomic absorption spectrometry. The method has been validated using two certified reference materials (DORM-1 dogfish and MESS-2 estuarine sediment) and the results agreed well with the certified values. A detection limit of 0.2 ng g⁻¹ Hg was obtained and no significant interference was observed. The method was finally applied to the determination of mercury in river sediments and canned tuna fish, and gave results in the range 0.1-3.0 mg kg⁻¹.
Pan-sharpening via compressed superresolution reconstruction and multidictionary learning
NASA Astrophysics Data System (ADS)
Shi, Cheng; Liu, Fang; Li, Lingling; Jiao, Licheng; Hao, Hongxia; Shang, Ronghua; Li, Yangyang
2018-01-01
In recent compressed sensing (CS)-based pan-sharpening algorithms, pan-sharpening performance is affected by two key problems. One is that there are always errors between the high-resolution panchromatic (HRP) image and the linearly weighted high-resolution multispectral (HRM) image, resulting in loss of spatial and spectral information. The other is that the dictionary construction process depends on training samples that are not ground truth. These problems have limited the application of CS-based pan-sharpening algorithms. To solve these two problems, we propose a pan-sharpening algorithm via compressed superresolution reconstruction and multidictionary learning. Through a two-stage implementation, the compressed superresolution reconstruction model effectively reduces the error between the HRP and the linearly weighted HRM images. Meanwhile, the multidictionary with ridgelets and curvelets is learned for both stages of the superresolution reconstruction process. Since ridgelets and curvelets better capture structural and directional characteristics, a better reconstruction result can be obtained. Experiments were done on QuickBird and IKONOS satellite images. The results indicate that the proposed algorithm is competitive with recent CS-based pan-sharpening methods and other well-known methods.
A two-stage method for microcalcification cluster segmentation in mammography by deformable models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arikidis, N.; Kazantzi, A.; Skiadopoulos, S.
Purpose: Segmentation of microcalcification (MC) clusters in x-ray mammography is a difficult task for radiologists. Accurate segmentation is prerequisite for quantitative image analysis of MC clusters and subsequent feature extraction and classification in computer-aided diagnosis schemes. Methods: In this study, a two-stage semiautomated segmentation method of MC clusters is investigated. The first stage is targeted to accurate and time efficient segmentation of the majority of the particles of a MC cluster, by means of a level set method. The second stage is targeted to shape refinement of selected individual MCs, by means of an active contour model. Both methods are applied in the framework of a rich scale-space representation, provided by the wavelet transform at integer scales. Segmentation reliability of the proposed method in terms of inter and intraobserver agreements was evaluated in a case sample of 80 MC clusters originating from the digital database for screening mammography, corresponding to 4 morphology types (punctate: 22, fine linear branching: 16, pleomorphic: 18, and amorphous: 24) of MC clusters, assessing radiologists' segmentations quantitatively by two distance metrics (Hausdorff distance, HDIST_cluster; average of minimum distance, AMINDIST_cluster) and the area overlap measure (AOM_cluster). The effect of the proposed segmentation method on MC cluster characterization accuracy was evaluated in a case sample of 162 pleomorphic MC clusters (72 malignant and 90 benign). Ten MC cluster features, targeted to capture morphologic properties of individual MCs in a cluster (area, major length, perimeter, compactness, and spread), were extracted and a correlation-based feature selection method yielded a feature subset to feed in a support vector machine classifier.
Classification performance of the MC cluster features was estimated by means of the area under receiver operating characteristic curve (Az ± Standard Error) utilizing tenfold cross-validation methodology. A previously developed B-spline active rays segmentation method was also considered for comparison purposes. Results: Interobserver and intraobserver segmentation agreements (median and [25%, 75%] quartile range) were substantial with respect to the distance metrics HDIST_cluster (2.3 [1.8, 2.9] and 2.5 [2.1, 3.2] pixels) and AMINDIST_cluster (0.8 [0.6, 1.0] and 1.0 [0.8, 1.2] pixels), while moderate with respect to AOM_cluster (0.64 [0.55, 0.71] and 0.59 [0.52, 0.66]). The proposed segmentation method outperformed (0.80 ± 0.04) statistically significantly (Mann-Whitney U-test, p < 0.05) the B-spline active rays segmentation method (0.69 ± 0.04), suggesting the significance of the proposed semiautomated method. Conclusions: Results indicate a reliable semiautomated segmentation method for MC clusters offered by deformable models, which could be utilized in MC cluster quantitative image analysis.
Sengur, Abdulkadir
2008-03-01
Over the last two decades, the use of artificial intelligence methods in medical analysis has been increasing, mainly because the effectiveness of classification and detection systems has improved a great deal, helping medical experts in diagnosis. In this work, we investigate the use of principal component analysis (PCA), an artificial immune system (AIS) and fuzzy k-NN to distinguish normal from abnormal heart valves using Doppler heart sounds. The proposed heart valve disorder detection system is composed of three stages. The first is the pre-processing stage, which applies filtering, normalization and white de-noising. The second is the feature extraction stage, in which wavelet packet decomposition was used and wavelet entropies were taken as features; PCA was then applied for feature reduction to lower the complexity of the system. In the classification stage, AIS and fuzzy k-NN were used. To evaluate the performance of the proposed methodology, a comparative study was carried out using a data set containing 215 samples. The proposed method was validated in terms of sensitivity and specificity; a 95.9% sensitivity and a 96% specificity rate were obtained.
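Wavelet-packet entropy features of the kind described above can be illustrated with a Haar filter pair: decompose the signal into sub-bands, then take the Shannon entropy of the relative sub-band energies. A stdlib sketch; the study's actual wavelet, decomposition depth and pre-processing are not given here, so those choices are assumptions:

```python
import math
import random

def haar_step(x):
    """One Haar analysis step: returns (approximation, detail) half-signals."""
    a = [(x[2 * i] + x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def wavelet_packet_leaves(x, levels):
    """Full wavelet packet tree: split every node at every level."""
    nodes = [list(x)]
    for _ in range(levels):
        nodes = [half for node in nodes for half in haar_step(node)]
    return nodes

def wavelet_entropy(x, levels=2):
    """Shannon entropy of the relative energies of the packet sub-bands."""
    energies = [sum(c * c for c in leaf)
                for leaf in wavelet_packet_leaves(x, levels)]
    total = sum(energies) or 1.0
    probs = [e / total for e in energies]
    return -sum(p * math.log(p) for p in probs if p > 0)

# A constant signal puts all energy in one sub-band (entropy 0), while
# white noise tends to spread energy across sub-bands (entropy near log 4).
random.seed(1)
flat = [1.0] * 64
noise = [random.gauss(0, 1) for _ in range(64)]
```

Low-dimensional entropy features like this are what PCA then compresses further before classification.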
2010-01-01
Background Lymphadenectomy is an integral part of the staging system of epithelial ovarian cancer. However, the extent of lymphadenectomy in the early stages of ovarian cancer is controversial. The objective of this study was to identify lymph node involvement in unilateral epithelial ovarian cancer apparently confined to one ovary (clinical stage Ia). Methods A prospective study of clinical stage I ovarian cancer patients is presented. Patient characteristics and tumor histopathology were the variables evaluated. Results Thirty-three ovarian cancer patients with intact ovarian capsules were evaluated. Intraoperatively, none of the patients had surface involvement, adhesions, ascites or palpable lymph nodes (presumed clinical stage Ia). The mean age of the study group was 55.3 ± 11.8 years. All patients were surgically staged and underwent systematic pelvic and paraaortic lymphadenectomy. Final surgicopathologic reports revealed capsular involvement in seven patients (21.2%), contralateral ovarian involvement in two (6%) and omental metastasis in one (3%) patient. There were two patients (6%) with lymph node involvement: one of the two lymph node metastases was solely in a paraaortic node and the other was in an ipsilateral pelvic lymph node. The ovarian capsule was intact in all of the patients with lymph node involvement, and the tumors were grade 3. Conclusion In clinical stage Ia ovarian cancer patients, there may be a risk of paraaortic and pelvic lymph node metastasis. Further studies with a larger sample size are needed for a definite conclusion. PMID:21114870
Algorithm for Lossless Compression of Calibrated Hyperspectral Imagery
NASA Technical Reports Server (NTRS)
Kiely, Aaron B.; Klimesh, Matthew A.
2010-01-01
A two-stage predictive method was developed for lossless compression of calibrated hyperspectral imagery. The first prediction stage uses a conventional linear predictor intended to exploit spatial and/or spectral dependencies in the data. The compressor tabulates counts of past values of the difference between this initial prediction and the actual sample value. In the second stage, these counts are combined with an adaptively updated weight function, intended to capture information about data regularities introduced by the calibration process, to form the final predicted value. Finally, prediction residuals are losslessly encoded using adaptive arithmetic coding. Algorithms of this type are commonly tested on a readily available collection of images from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral imager. On the standard calibrated AVIRIS hyperspectral images that are most widely used for compression benchmarking, the new compressor provides more than 0.5 bits/sample improvement over the previous best compression results. The algorithm has been implemented in Mathematica, and its benefit was demonstrated on 12-bit calibrated AVIRIS images.
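The core idea of predictive lossless coding can be illustrated with a deliberately simplified sketch: predict each sample from its predecessor, keep only the residuals, and reconstruct exactly by inverting the prediction. This is not the paper's two-stage predictor or its arithmetic coder, just the prediction/residual principle on which such compressors rest.

```python
import numpy as np

def encode(samples):
    """Toy stage-1 linear prediction: predict each sample as the previous
    one and keep the residuals (the first sample is sent verbatim)."""
    samples = np.asarray(samples, dtype=np.int64)
    residuals = np.diff(samples)
    return samples[0], residuals

def decode(first, residuals):
    """Invert the prediction: a cumulative sum restores the samples
    exactly, which is what makes the scheme lossless."""
    return np.concatenate(([first], first + np.cumsum(residuals)))

# A slowly varying signal (typical of spectrally correlated image lines)
# yields small residuals, which an entropy coder would pack tightly.
sig = np.array([100, 101, 103, 102, 105, 107, 106], dtype=np.int64)
first, res = encode(sig)
restored = decode(first, res)
```

The gain in a real compressor comes from the entropy-coding stage: small, sharply peaked residuals cost far fewer bits than the raw samples.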
NASA Astrophysics Data System (ADS)
Shojaei Zoeram, Ali; Rahmani, Aida; Asghar Akbari Mousavi, Seyed Ali
2017-05-01
The precise control of heat input in the pulsed Nd:YAG welding method, provided by two additional parameters (frequency and pulse duration), makes this method very promising for welding alloys sensitive to heat input. The poor weldability of Ti-rich nitinol, a result of the formation of the Ti2Ni intermetallic compound, has limited exploitation of this alloy's unique properties. In this study, to intensify the solidification rate during welding of Ti-rich nitinol, a pulsed Nd:YAG laser beam at low frequency was employed together with a copper substrate. The specific microstructure produced under these conditions was characterized, and its effects on the tensile and fracture behavior of samples welded by two different procedures, full penetration and a double-sided method with halved penetration depth for each side, were investigated. The investigations revealed that although the combination of low frequencies, a highly thermally conductive substrate and the double-sided method eliminated intergranular fracture and increased tensile strength, the particular microstructure built in low-frequency pulsed welding results in the formation of longitudinal cracks at the weld centerline during the first stages of the tensile test. This degrades the tensile strength of welded samples compared with the base metal. The results showed that samples welded by the double-sided method performed much better than samples welded in full-penetration mode.
Soave, David; Sun, Lei
2017-09-01
We generalize Levene's test for variance (scale) heterogeneity between k groups to more complex data with sample correlation and group membership uncertainty. Following a two-stage regression framework, we show that least absolute deviation regression must be used in the stage 1 analysis to ensure a correct asymptotic χ²_{k−1}/(k−1) distribution of the generalized scale (gS) test statistic. We then show that the proposed gS test is independent of the generalized location test under the joint null hypothesis of no mean and no variance heterogeneity. Consequently, we generalize the recently proposed joint location-scale (gJLS) test, valuable in settings where there is an interaction effect but one interacting variable is not available. We evaluate the proposed method via an extensive simulation study and two genetic association application studies. © 2017 The Authors. Biometrics published by Wiley Periodicals, Inc. on behalf of the International Biometric Society.
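In the simplest (independent, known-group) case, the two stages reduce to the classical Brown-Forsythe form of Levene's test: stage 1 centers each group at its median (the least-absolute-deviation fit on group indicators) and takes absolute residuals; stage 2 runs a one-way ANOVA on those residuals. A sketch with synthetic data, checked against SciPy's built-in implementation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Three groups; the third has inflated variance.
g1 = rng.normal(0, 1.0, 50)
g2 = rng.normal(0, 1.0, 50)
g3 = rng.normal(0, 3.0, 50)

# Stage 1: LAD regression on group indicators reduces to the group
# medians; keep the absolute residuals.
d = [np.abs(g - np.median(g)) for g in (g1, g2, g3)]
# Stage 2: test for a location shift in the absolute residuals.
F, p = stats.f_oneway(*d)

# The Brown-Forsythe variant of Levene's test performs exactly these
# two stages internally.
W, p_ref = stats.levene(g1, g2, g3, center='median')
```

The paper's contribution is extending this logic to correlated samples and probabilistic group membership, where the simple ANOVA in stage 2 is replaced by a regression-based test.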
Salem, Nidhal; Msaada, Kamel; Elkahoui, Salem; Mangano, Giuseppe; Azaeiz, Sana; Ben Slimen, Imen; Kefi, Sarra; Pintore, Giorgio; Limam, Ferid; Marzouk, Brahim
2014-01-01
Two Carthamus tinctorius varieties (Jawhara and 104) were studied to investigate their natural dye contents and biological activities. The results showed that quinochalcone contents and antioxidant activities varied considerably as a function of flowering stage: flowers at the fructification stage contained the highest carthamin content and showed the strongest antioxidant capacity in all assays (FRAP, DPPH, and chelating power methods). In parallel, we showed a decrease in the content of precarthamin. The quantitative variation of these molecules could be due to the colour change of C. tinctorius flowers. Correlation analysis indicated that the ABTS method showed the highest correlation coefficients with carthamin and precarthamin contents (0.886 and 0.973, respectively). Concerning the regional effect, the contents of precarthamin and carthamin varied significantly (P < 0.05) across the studied regions, with the optimum production given by samples from Beja (902.41 μg/g DW and 42.05 μg/g DW, respectively, at the flowering stage). During flowering, the antimicrobial activity of these two natural dyes increased, with the maximum inhibitory effect observed for carthamin, mainly against E. coli (iz = 25.89 mm) at the fructification stage. The increasing frequency of resistance to commonly used antibiotics motivates the search for new effective natural drugs for the food and pharmaceutical industries.
Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis
Jamshidy, Ladan; Faraji, Payam; Sharifi, Roohollah
2016-01-01
Introduction. One of the main steps of impression making is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of the one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model of a first molar was prepared by a standard method for full crowns, with a preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically at the mid mesial, distal, buccal, and lingual (MDBL) regions by stereomicroscope using a standard method. Results. The results of the independent test showed that the mean marginal gap obtained with the one-stage impression technique was higher than that of the two-stage technique. There was no significant difference between the one- and two-stage impression techniques in the mid buccal region, but a significant difference was found between the two techniques in the MDL regions and overall. Conclusion. The findings of the present study indicate higher accuracy for the two-stage impression technique than for the one-stage technique. PMID:28003824
Influence of crystal habit on the compression and densification mechanism of ibuprofen
NASA Astrophysics Data System (ADS)
Di Martino, Piera; Beccerica, Moira; Joiris, Etienne; Palmieri, Giovanni F.; Gayot, Anne; Martelli, Sante
2002-08-01
Ibuprofen was recrystallized from several solvents by two different methods: addition of a non-solvent to a drug solution, and cooling of a drug solution. Four samples with different crystal habits were selected: samples A, E and T, recrystallized from acetone, ethanol and THF, respectively, by addition of water as a non-solvent, and sample M, recrystallized from methanol by temperature decrease. By SEM analysis, the samples were characterized with respect to their crystal habit, mean particle diameter and elongation ratio. Sample A appears stick-shaped, sample E acicular with lamellar characteristics, and samples T and M polyhedral. DSC and X-ray diffraction studies allowed a polymorphic modification of ibuprofen during crystallization to be excluded. For all samples, micromeritic properties, densification behaviour and compression ability were analysed. Sample M shows a higher densification tendency, evidenced by its higher apparent and tapped particle density. The ability to densify is also indicated by the D0' value of the Heckel plot, which reflects the rearrangement of the original particles at the initial stage of compression. This is related to the crystal habit of sample M, which is characterized by strongly smoothed corners. The increase in powder bed porosity permits particle-particle interaction of greater extent during the subsequent stage of compression, which allows higher tabletability and compressibility.
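The Heckel analysis referred to above fits the relation ln(1/(1 − D)) = K·P + A, where D is the relative density at compaction pressure P; 1/K estimates the mean yield pressure, and the intercept A (from which D0'-type rearrangement parameters are derived) reflects densification by particle rearrangement. A sketch with illustrative pressure/density values (not data from this study):

```python
import numpy as np

# Heckel equation: ln(1/(1 - D)) = K*P + A
P = np.array([25., 50., 75., 100., 150., 200.])     # pressure, MPa (illustrative)
D = np.array([0.70, 0.78, 0.83, 0.87, 0.91, 0.94])  # relative density

y = np.log(1.0 / (1.0 - D))
K, A = np.polyfit(P, y, 1)        # linear fit: slope K, intercept A
yield_pressure = 1.0 / K          # mean yield pressure estimate
```

In practice only the linear (plastic-deformation) region of the plot is fitted; the low-pressure curvature carries the rearrangement information compared between crystal habits.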
NASA Astrophysics Data System (ADS)
Malys, Brian J.; Piotrowski, Michelle L.; Owens, Kevin G.
2018-02-01
Frustrated by worse than expected error for both peak area and time-of-flight (TOF) in matrix assisted laser desorption ionization (MALDI) experiments using samples prepared by electrospray deposition, it was finally determined that there was a correlation between sample location on the target plate and the measured TOF/peak area. Variations in both TOF and peak area were found to be due to small differences in the initial position of ions formed in the source region of the TOF mass spectrometer. These differences arise largely from misalignment of the instrument sample stage, with a smaller contribution arising from the non-ideal shape of the target plates used. By physically measuring the target plates used and comparing TOF data collected from three different instruments, an estimate of the magnitude and direction of the sample stage misalignment was determined for each of the instruments. A correction method was developed to correct the TOFs and peak areas obtained for a given combination of target plate and instrument. Two correction factors are determined, one by initially collecting spectra from each sample position used and another by using spectra from a single position for each set of samples on a target plate. For TOF and mass values, use of the correction factor reduced the error by a factor of 4, with the relative standard deviation (RSD) of the corrected masses being reduced to 12-24 ppm. For the peak areas, the RSD was reduced from 28% to 16% for samples deposited twice onto two target plates over two days.
NASA Astrophysics Data System (ADS)
Wang, Ji; Zhang, Ru; Yan, Yuting; Dong, Xiaoqiang; Li, Jun Ming
2017-05-01
Hazardous gas leaks in the atmosphere can cause significant economic losses in addition to environmental hazards, such as fires and explosions. A three-stage hazardous gas leak source localization method was developed that uses movable and stationary gas concentration sensors. The method calculates a preliminary source inversion with a modified genetic algorithm (MGA), which allows crossover with individuals eliminated from the population after the best candidate is selected. The method then determines a search zone using Markov Chain Monte Carlo (MCMC) sampling with a partial evaluation strategy. The leak source is then accurately localized using a modified guaranteed convergence particle swarm optimization algorithm that retains several poorly performing individuals, following selection of the most successful individual with dynamic updates. The first two stages are based on data collected by stationary sensors, and the last stage is based on data from movable robots with sensors. The adaptability to measurement error and the effect of the leak source location were analyzed. The test results showed that this three-stage localization process can localize a leak source to within 1.0 m for different source locations, with a measurement error standard deviation smaller than 2.0.
Montasser, Mona A; Viana, Grace; Evans, Carla A
2017-04-01
To investigate the presence of secular trends in skeletal maturation of girls and boys as assessed by the use of cervical vertebrae bones. The study compared two main groups: the first included data collected from the Denver growth study (1930s to 1960s) and the second included data collected from recent pretreatment records (1980s to 2010s) of patients from the orthodontic clinic of a North American University. The records from the two groups were all for Caucasian subjects. The sample for each group included 78 lateral cephalographs for girls and the same number for boys. The age of the subjects ranged from 7 to 18 years. Cervical vertebrae maturation (CVM) stages were directly assessed from the radiographs according to the method described by Hassel and Farman in which six CVM stages were designated from cervical vertebrae 2, 3, and 4. The mean age of girls from the Denver growth study and girls from the university clinic in each of the six CVM stages was not different at P ≤0.05. However, the mean age of boys from the two groups was not different only in stage 3 (P = 0.139) and stage 4 (P = 0.211). The results showed no evidence to indicate a tendency for earlier skeletal maturation of girls or boys. Boys in the university group started their skeletal maturation later than boys in the Denver group and completed their maturation earlier. Gender was a significant factor affecting skeletal maturation stages in both Denver and university groups. © The Author 2016. Published by Oxford University Press on behalf of the European Orthodontic Society. All rights reserved. For permissions, please email: journals.permissions@oup.com
NASA Astrophysics Data System (ADS)
Irawan, R.; Yong, B.; Kristiani, F.
2017-02-01
Bandung, one of the cities in Indonesia, is vulnerable to dengue disease in both its early stage (Dengue Fever) and severe stage (Dengue Haemorrhagic Fever and Dengue Shock Syndrome). In 2013, there were 5,749 patients in Bandung, 2,032 of whom were hospitalized in Santo Borromeus Hospital. In this paper, two models, the Poisson-gamma and Log-normal models, use Bayesian inference to estimate the relative risk. The calculation is done by the Markov Chain Monte Carlo method, simulated with the Gibbs sampling algorithm in WinBUGS 1.4.3. The analysis of dengue disease in 30 sub-districts of Bandung in 2013, based on Santo Borromeus Hospital's data, showed that the Coblong and Bandung Wetan sub-districts had the highest relative risk under both models for the early stage, the severe stage, and all stages combined. Meanwhile, the Cinambo sub-district had the lowest relative risk under both models for the severe stage and all stages combined, and the Bojongloa Kaler sub-district had the lowest relative risk under both models for the early stage. In the model comparison using the DIC (Deviance Information Criterion) method, the Log-normal model fits better for the early stage and the severe stage, but for all stages combined, the Poisson-gamma model fits the data better.
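For the Poisson-gamma model in its simplest (non-spatial, independent-area) form, the posterior of each area's relative risk is available in closed form by conjugacy, so the Gibbs-sampled answer can be checked against it: with y_i | θ_i ~ Poisson(E_i·θ_i) and θ_i ~ Gamma(a, b), the posterior is θ_i | y_i ~ Gamma(a + y_i, b + E_i). The counts, expected cases, and prior below are illustrative, not the Bandung data.

```python
import numpy as np

# Poisson-gamma relative-risk model:
#   y_i | theta_i ~ Poisson(E_i * theta_i),   theta_i ~ Gamma(a, b)
# Conjugacy gives theta_i | y_i ~ Gamma(a + y_i, b + E_i) directly,
# so no MCMC is needed in this simple independent-area case.
a, b = 1.0, 1.0                         # illustrative prior
y = np.array([12, 3, 40, 7])            # observed case counts per area
E = np.array([10.0, 8.0, 25.0, 9.0])    # expected counts per area

post_mean = (a + y) / (b + E)           # posterior mean relative risk
```

MCMC (as in WinBUGS) becomes necessary once the model adds shared hyperpriors or spatial structure, where no closed form exists; the conjugate case above is still useful as a sanity check on the sampler.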
Cheng, Yu-Ching; Stanne, Tara M.; Giese, Anne-Katrin; Ho, Weang Kee; Traylor, Matthew; Amouyel, Philippe; Holliday, Elizabeth G.; Malik, Rainer; Xu, Huichun; Kittner, Steven J.; Cole, John W.; O’Connell, Jeffrey R.; Danesh, John; Rasheed, Asif; Zhao, Wei; Engelter, Stefan; Grond-Ginsbach, Caspar; Kamatani, Yoichiro; Lathrop, Mark; Leys, Didier; Thijs, Vincent; Metso, Tiina M.; Tatlisumak, Turgut; Pezzini, Alessandro; Parati, Eugenio A.; Norrving, Bo; Bevan, Steve; Rothwell, Peter M; Sudlow, Cathie; Slowik, Agnieszka; Lindgren, Arne; Walters, Matthew R; Jannes, Jim; Shen, Jess; Crosslin, David; Doheny, Kimberly; Laurie, Cathy C.; Kanse, Sandip M.; Bis, Joshua C.; Fornage, Myriam; Mosley, Thomas H.; Hopewell, Jemma C.; Strauch, Konstantin; Müller-Nurasyid, Martina; Gieger, Christian; Waldenberger, Melanie; Peters, Annette; Meisinger, Christine; Ikram, M. Arfan; Longstreth, WT; Meschia, James F.; Seshadri, Sudha; Sharma, Pankaj; Worrall, Bradford; Jern, Christina; Levi, Christopher; Dichgans, Martin; Boncoraglio, Giorgio B.; Markus, Hugh S.; Debette, Stephanie; Rolfs, Arndt; Saleheen, Danish; Mitchell, Braxton D.
2015-01-01
Background and Purpose Although a genetic contribution to ischemic stroke is well recognized, only a handful of stroke loci have been identified by large-scale genetic association studies to date. Hypothesizing that genetic effects might be stronger for early- versus late-onset stroke, we conducted a two-stage meta-analysis of genome-wide association studies (GWAS), focusing on stroke cases with an age of onset < 60 years old. Methods The Discovery stage of our GWAS included 4,505 cases and 21,968 controls of European, South-Asian and African ancestry, drawn from 6 studies. In Stage 2, we selected the lead genetic variants at loci with association P<5×10−6 and performed in silico association analyses in an independent sample of up to 1,003 cases and 7,745 controls. Results One stroke susceptibility locus at 10q25 reached genome-wide significance in the combined analysis of all samples from the Discovery and Follow-up Stages (rs11196288, OR=1.41, P=9.5×10−9). The associated locus is in an intergenic region between TCF7L2 and HABP2. In a further analysis in an independent sample, we found that two SNPs in high linkage disequilibrium with rs11196288 were significantly associated with total plasma factor VII-activating protease levels, a product of HABP2. Conclusions HABP2, which encodes an extracellular serine protease involved in coagulation, fibrinolysis, and inflammatory pathways, may be a genetic susceptibility locus for early-onset stroke. PMID:26732560
Online two-stage association method for robust multiple people tracking
NASA Astrophysics Data System (ADS)
Lv, Jingqin; Fang, Jiangxiong; Yang, Jie
2011-07-01
Robust multiple people tracking is very important for many applications but remains challenging due to occlusion and interaction in crowded scenarios. This paper proposes an online two-stage association method for robust multiple people tracking. In the first stage, short tracklets generated by linking people detection responses are grown by particle filter based tracking, with detection confidence embedded into the observation model; an examination scheme runs at each frame to check the reliability of tracking. In the second stage, multiple people tracking is achieved by linking tracklets to generate trajectories. An online tracklet association method is proposed to solve the linking problem, which allows applications in time-critical scenarios. The method is evaluated on the popular CAVIAR dataset. The experimental results show that our two-stage method is robust.
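Tracklet linking of the kind described in the second stage is commonly cast as a one-to-one assignment problem over a pairwise cost matrix. A minimal sketch using the Hungarian algorithm from SciPy, with plain Euclidean distance as a stand-in cost (the paper's actual affinity model and online formulation are not reproduced):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Link the tail of each ending tracklet to the head of a later tracklet
# by minimising total pairwise cost (here, Euclidean distance between
# last and first positions; illustrative coordinates).
tails = np.array([[10.0, 5.0], [40.0, 7.0], [22.0, 30.0]])   # tracklet ends
heads = np.array([[41.0, 8.0], [11.0, 6.0], [23.0, 29.0]])   # tracklet starts

cost = np.linalg.norm(tails[:, None, :] - heads[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)   # optimal one-to-one linking
links = dict(zip(rows.tolist(), cols.tolist()))
```

A real tracker would combine appearance, motion, and time-gap terms in the cost and add a threshold so that implausible links are left unmatched.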
Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods.
The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
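The likelihood structure of a two-scale, multi-method occupancy model can be sketched for a single station surveyed once: the unit is used with probability psi, the species is locally present given use with probability theta, and method m detects it given local presence with probability p[m]. An all-zero detection history is ambiguous (missed detection, local absence, or non-use), which is exactly why the model is needed. The parameter values below are illustrative assumptions.

```python
import numpy as np

def history_prob(h, psi, theta, p):
    """Probability of one station's detection history h (0/1 per method):
    psi   = prob. the sample unit is used (large scale),
    theta = prob. of local presence given use (small scale),
    p[m]  = detection prob. of method m given local presence."""
    h = np.asarray(h)
    p = np.asarray(p)
    detected = np.prod(p**h * (1 - p)**(1 - h))
    if h.any():
        return psi * theta * detected
    # an all-zero history can also arise from local absence or non-use
    return psi * theta * detected + psi * (1 - theta) + (1 - psi)

psi, theta, p = 0.8, 0.6, [0.5, 0.3]
probs = [history_prob(h, psi, theta, p)
         for h in ([0, 0], [0, 1], [1, 0], [1, 1])]
```

Fitting the model maximises the product of such terms over stations and surveys, which is what lets detections from different methods jointly inform the method-specific p's.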
Results from the FIN-2 formal comparison
NASA Astrophysics Data System (ADS)
Connolly, Paul; Hoose, Corinna; Liu, Xiaohong; Moehler, Ottmar; Cziczo, Daniel; DeMott, Paul
2017-04-01
During the Fifth International Ice Nucleation Workshop (FIN-2) at the AIDA Ice Nucleation facility in Karlsruhe, Germany in March 2015, a formal comparison of ice nucleation measurement methods was conducted. During the experiments the samples of ice nucleating particles were not revealed to the instrument scientists, hence this was referred to as a "blind comparison". The two samples used were later revealed to be Arizona Test Dust and an Argentina soil sample. For these two samples, seven mobile ice nucleating particle counters sampled directly from the AIDA chamber or from the aerosol preparation chamber at specified temperatures, whereas filter samples were taken for two offline deposition nucleation instruments. Wet suspension methods for determining INP concentrations were also used, with 10 different methods employed. For the wet suspension methods, experiments were conducted using INPs collected from the air inside the chambers (impinger sampling) and INPs taken from the bulk samples (vial sampling). Direct comparisons of the ice nucleating particle concentrations are reported as well as derived ice nucleation active site densities. The study highlights the difficulties in performing such analyses, but generally indicates that there is reasonable agreement between the wet suspension techniques. It is noted that the ice nucleation efficiency derived from the AIDA chamber (quantified using the ice-active surface site density approach) is higher than that for the cold stage techniques. This is true both for the Argentina soil sample and, to a lesser extent, for the Arizona Test Dust sample. Other interesting effects were noted: for the ATD, the impinger sampling demonstrated higher INP efficiency at higher temperatures (>255 K) than the vial sampling, but agreed at lower temperatures (<255 K), whereas the opposite was true for the Argentina soil sample. The results are analysed to better understand the performance of the various techniques and to address any size-sorting effects and/or sampling line losses.
Cui, Xueliang; Chen, Hui; Rui, Yunfeng; Niu, Yang; Li, He
2018-01-01
Objectives Two-stage open reduction and internal fixation (ORIF) and limited internal fixation combined with external fixation (LIFEF) are two widely used methods to treat Pilon injury. However, which method is superior to the other remains controversial. This meta-analysis was performed to quantitatively compare two-stage ORIF and LIFEF and clarify which method is better with respect to postoperative complications in the treatment of tibial Pilon fractures. Methods We conducted a meta-analysis to quantitatively compare the postoperative complications between two-stage ORIF and LIFEF. Eight studies involving 360 fractures in 359 patients were included in the meta-analysis. Results The two-stage ORIF group had a significantly lower risk of superficial infection, nonunion, and bone healing problems than the LIFEF group. However, no significant differences in deep infection, delayed union, malunion, arthritis symptoms, or chronic osteomyelitis were found between the two groups. Conclusion Two-stage ORIF was associated with a lower risk of postoperative complications with respect to superficial infection, nonunion, and bone healing problems than LIFEF for tibial Pilon fractures. Level of evidence 2.
NASA Astrophysics Data System (ADS)
Lestari, T. A.; Saefudin; Priyandoko, D.
2018-05-01
This research aims to analyze the correlation between concept mastery and the moral reasoning stages of students. A correlational design with a stratified random sampling technique was used. The population comprised all eleventh-grade students in senior high schools in Bandung. Data were collected from 297 eleventh-grade students at three senior high schools in Bandung using an examination and a moral reasoning questionnaire. The stages of moral reasoning in this research comprise two categories of student moral reasoning, based on a 16-item questionnaire with indicators from Jones et al. (2007). The results show that the average moral reasoning stage of the eleventh-grade students is the advanced stage, and that concept mastery and stage of moral reasoning show a positive correlation of 0.370. This research provides an overview of eleventh-grade students' concept mastery and stage of moral reasoning using socio-scientific issues.
Wu, Mixia; Shu, Yu; Li, Zhaohai; Liu, Aiyi
2016-01-01
A sequential design is proposed to test whether the accuracy of a binary diagnostic biomarker meets the minimal level of acceptance. The accuracy of a binary diagnostic biomarker is a linear combination of the marker’s sensitivity and specificity. The objective of the sequential method is to minimize the maximum expected sample size under the null hypothesis that the marker’s accuracy is below the minimal level of acceptance. The exact results of two-stage designs based on Youden’s index and efficiency indicate that the maximum expected sample sizes are smaller than the sample sizes of the fixed designs. Exact methods are also developed for estimation, confidence interval and p-value concerning the proposed accuracy index upon termination of the sequential testing. PMID:26947768
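The accuracy indices referred to above are linear combinations of sensitivity and specificity; Youden's index in particular is J = sensitivity + specificity − 1. A minimal sketch computing it from 2×2 diagnostic counts (the counts are illustrative, not from the paper):

```python
def youden_index(tp, fn, tn, fp):
    """Youden's J = sensitivity + specificity - 1, a common accuracy
    index for a binary diagnostic marker."""
    sens = tp / (tp + fn)   # true positive rate
    spec = tn / (tn + fp)   # true negative rate
    return sens + spec - 1.0

# Illustrative 2x2 counts from a diagnostic study.
J = youden_index(tp=45, fn=5, tn=40, fp=10)
```

In a two-stage sequential design of the kind described, the stage 1 estimate of such an index is compared against the minimal acceptable level to decide whether to stop early or continue to stage 2.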
ERIC Educational Resources Information Center
Jenkins, Peter; Palmer, Joanne
2012-01-01
The primary objective of this study was to explore perceptions of UK school counsellors of confidentiality and information sharing in therapeutic work with children and young people, using qualitative methods. The research design employed a two-stage process, using questionnaires and follow-up interviews, with a small, non-random sample of school…
HASA: Hypersonic Aerospace Sizing Analysis for the Preliminary Design of Aerospace Vehicles
NASA Technical Reports Server (NTRS)
Harloff, Gary J.; Berkowitz, Brian M.
1988-01-01
A review of the hypersonic literature indicated that a general weight and sizing analysis was not available for hypersonic orbital, transport, and fighter vehicles. The objective here is to develop such a method for the preliminary design of aerospace vehicles. This report describes the developed methodology and provides examples to illustrate the model, entitled the Hypersonic Aerospace Sizing Analysis (HASA). It can be used to predict the size and weight of hypersonic single-stage and two-stage-to-orbit vehicles and transports, and is also relevant for supersonic transports. HASA is a sizing analysis that determines vehicle length and volume, consistent with body, fuel, structural, and payload weights. The vehicle component weights are obtained from statistical equations for the body, wing, tail, thermal protection system, landing gear, thrust structure, engine, fuel tank, hydraulic system, avionics, electrical system, equipment, payload, and propellant. Sample size and weight predictions are given for the Space Shuttle orbiter and other proposed vehicles, including four hypersonic transports, a Mach 6 fighter, a supersonic transport (SST), a single-stage-to-orbit (SSTO) vehicle, a two-stage Space Shuttle with a booster and an orbiter, and two methane-fueled vehicles.
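Sizing analyses of this kind close on a gross weight that is self-consistent with component weights that themselves depend on gross weight. A generic sketch of that fixed-point iteration; the empty-weight relation and all coefficients below are hypothetical placeholders, not HASA's actual statistical equations.

```python
def size_vehicle(payload, prop_frac, a=0.25, b=0.95, tol=1e-6):
    """Generic sizing closure: gross weight = payload + empty weight +
    propellant, where empty weight follows a statistical power law of
    gross weight (coefficients a, b are illustrative) and propellant is
    a fixed fraction prop_frac of gross weight."""
    w_gross = 10.0 * payload            # initial guess
    for _ in range(200):
        w_empty = a * w_gross**b        # statistical empty-weight relation
        w_new = (payload + w_empty) / (1.0 - prop_frac)
        if abs(w_new - w_gross) < tol:  # converged: weights are consistent
            break
        w_gross = w_new
    return w_gross

w = size_vehicle(payload=10000.0, prop_frac=0.6)
```

HASA's real procedure iterates in the same spirit but over many component equations at once, also closing vehicle length and volume against the fuel and structure they must contain.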
Prevalence of infraocclusion of primary molars determined using a new 2D image analysis methodology.
Odeh, R; Mihailidis, S; Townsend, G; Lähdesmäki, R; Hughes, T; Brook, A
2016-06-01
The reported prevalence of infraocclusion varies widely, reflecting differences in definitions and measurement/scoring approaches. This study aimed to quantify the prevalence and extent of infraocclusion in singletons and twins during the late mixed dentition stage of dental development using a new diagnostic imaging method and objective criteria. The study also aimed to determine any associations between infraocclusion and gender, arch type, arch side and tooth type. Two samples were analysed, 1454 panoramic radiographs of singletons and 270 dental models of twins. Both samples ranged in age from 8 to 11 years. Adobe Photoshop CS5 was used to measure the extent of infraocclusion. Repeatability tests showed systematic and random errors were small. The prevalence in the maxilla was low (<1%), whereas the prevalence in the mandible was 22% in the singleton sample and 32% in the twin sample. The primary mandibular first molar was affected more often than the second molar. There was no significant difference in the expression between genders or sides. A new technique for measuring infraocclusion has been developed with high intra- and interoperator reproducibility. This method should enhance early diagnosis of tooth developmental abnormalities and treatment planning during the late mixed dentition stage of development. © 2016 Australian Dental Association.
Sayers, A; Heron, J; Smith, Adac; Macdonald-Wallis, C; Gilthorpe, M S; Steele, F; Tilling, K
2017-02-01
There is a growing debate with regard to the appropriate methods of analysis of growth trajectories and their association with prospective dependent outcomes. Using the example of childhood growth and adult blood pressure (BP), we conducted an extensive simulation study to explore four two-stage and two joint modelling methods, and compared their bias and coverage in estimation of the (unconditional) association between birth length and later BP, and the association between growth rate and later BP (conditional on birth length). We show that the two-stage method of using multilevel models to estimate growth parameters and relating these to outcome gives unbiased estimates of the conditional associations between growth and outcome. Using simulations, we demonstrate that the simple methods resulted in bias in the presence of measurement error, as did the two-stage multilevel method when looking at the total (unconditional) association of birth length with outcome. The two joint modelling methods gave unbiased results, but using the re-inflated residuals led to undercoverage of the confidence intervals. We conclude that either joint modelling or the simpler two-stage multilevel approach can be used to estimate conditional associations between growth and later outcomes, but that only joint modelling is unbiased with nominal coverage for unconditional associations.
System and method for laser assisted sample transfer to solution for chemical analysis
Van Berkel, Gary J; Kertesz, Vilmos
2014-01-28
A system and method for laser desorption of an analyte from a specimen and capturing of the analyte in a suspended solvent to form a testing solution are described. The method can include providing a specimen supported by a desorption region of a specimen stage and desorbing an analyte from a target site of the specimen with a laser beam centered at a radiation wavelength (.lamda.). The desorption region is transparent to the radiation wavelength (.lamda.) and the sampling probe and a laser source emitting the laser beam are on opposite sides of a primary surface of the specimen stage. The system can also be arranged where the laser source and the sampling probe are on the same side of a primary surface of the specimen stage. The testing solution can then be analyzed using an analytical instrument or undergo further processing.
NASA Astrophysics Data System (ADS)
Lu, Xin-Ming
Shallow junction formation by low-energy ion implantation and rapid thermal annealing faces a major challenge for ULSI (ultra-large-scale integration) as line widths decrease into the sub-micrometer region. The issues include low beam current, the channeling effect in low-energy ion implantation, and TED (transient enhanced diffusion) during post-implantation annealing. In this work, boron-containing small cluster ions, such as GeB, SiB and SiB2, were generated using a SNICS (source of negative ions by cesium sputtering) ion source and implanted into Si substrates to form shallow junctions. The use of boron-containing cluster ions effectively reduces the boron energy while keeping the energy of the cluster ion beam at a high level. At the same time, it reduces the channeling effect through amorphization by the co-implanted heavy atoms such as Ge and Si. Cluster ions have been used to produce 0.65–2 keV boron for low-energy ion implantation. Two-stage annealing, a combination of low-temperature (550°C) preannealing and high-temperature (1000°C) annealing, was carried out on the Si samples implanted with GeB and SiBn clusters. The key concept of two-step annealing, namely the separation of crystal regrowth, point-defect removal and dopant activation from dopant diffusion, is discussed in detail. The advantages of two-stage annealing include better lattice structure, better dopant activation and retarded boron diffusion. The junction depth of the two-stage annealed GeB sample was only half that of the one-step annealed sample, indicating that TED was suppressed by two-stage annealing. Junction depths as small as 30 nm have been achieved by two-stage annealing of a sample implanted with 5 × 10^14/cm2 of 5 keV GeB at 1000°C for 1 second. The samples were evaluated by SIMS (secondary ion mass spectrometry) profiling, TEM (transmission electron microscopy) and RBS (Rutherford backscattering spectrometry)/channeling.
Cluster ion implantation in combination with two-step annealing is effective in fabricating ultra-shallow junctions.
A longitudinal study of Campylobacter distribution in a turkey production chain
Perko-Mäkelä, Päivikki; Isohanni, Pauliina; Katzav, Marianne; Lund, Marianne; Hänninen, Marja-Liisa; Lyhs, Ulrike
2009-01-01
Background Campylobacter is the most common cause of bacterial enteritis worldwide. Handling and eating contaminated poultry meat has been considered one of the risk factors for human campylobacteriosis. Campylobacter contamination can occur at all stages of a poultry production cycle. The objective of this study was to determine the occurrence of Campylobacter during a complete turkey production cycle, which lasts 1.5 years. For detection of Campylobacter, a conventional culture method was compared with a PCR method, and Campylobacter isolates from different types of samples were identified to the species level by a multiplex PCR assay. Methods Samples (N = 456) were regularly collected from one turkey parent flock, the hatchery, six different commercial turkey farms and from 11 different stages at the slaughterhouse. For the detection of Campylobacter, a conventional culture and a PCR method were used. Campylobacter isolates (n = 143) were identified to species level by a multiplex PCR assay. Results No Campylobacter were detected in either the samples from the turkey parent flock or the hatchery samples using the culture method. PCR detected Campylobacter DNA in five faecal samples and one fluff-and-eggshell sample. Six of the 12 commercial turkey flocks were found negative at the farm level, but only two were negative at the slaughterhouse. Conclusion During the brooding period Campylobacter might have contact with the birds without the contamination spreading within the flock. Contamination of working surfaces and equipment during slaughter of a Campylobacter-positive turkey flock can persist and lead to possible contamination of negative flocks even after the end-of-day cleaning and disinfection. Reducing contamination on the farm through a high level of biosecurity control and hygiene may be one of the most efficient ways to reduce the amount of contaminated poultry meat in Finland.
Due to the low numbers of Campylobacter in the Finnish turkey production chain, enrichment PCR seems to be the optimal detection method here. PMID:19348687
2012-01-01
Background Intimate partner violence (IPV) is a major public health problem with serious consequences for women’s physical, mental, sexual and reproductive health. Reproductive health outcomes such as unwanted and terminated pregnancies, fetal loss or child loss during infancy, non-use of family planning methods, and high fertility are increasingly recognized. However, little is known about the role of community influences on women's experience of IPV and its effect on terminated pregnancy, given the increased awareness of IPV being a product of social context. This study sought to examine the role of community-level norms and characteristics in the association between IPV and terminated pregnancy in Nigeria. Methods Multilevel logistic regression analyses were performed on nationally-representative cross-sectional data including 19,226 women aged 15–49 years in Nigeria. Data were collected by a stratified two-stage sampling technique, with 888 primary sampling units (PSUs) selected in the first sampling stage, and 7,864 households selected through probability sampling in the second sampling stage. Results Women who had experienced physical IPV, sexual IPV, and any IPV were more likely to have terminated a pregnancy compared to women who had not experienced these IPV types. IPV types were significantly associated with factors reflecting relationship control, relationship inequalities, and socio-demographic characteristics. Characteristics of the women aggregated at the community level (mean education, justifying wife beating, mean age at first marriage, and contraceptive use) were significantly associated with IPV types and terminated pregnancy. Conclusion Findings indicate the role of community influence in the association between IPV-exposure and terminated pregnancy, and stress the need for screening women seeking abortions for a history of abuse. PMID:23150987
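The stratified two-stage sampling described above (PSUs selected first, then households within selected PSUs) can be sketched in Python. This is a hedged illustration only: the frame, PSU sizes, and sample sizes below are invented, and the actual survey used stratification and probability sampling rather than the simple random selection shown at both stages here.

```python
import random

rng = random.Random(42)

# Hypothetical frame: 888 primary sampling units (PSUs), each holding a
# varying number of household identifiers.
frame = {psu: [f"psu{psu}-hh{i}" for i in range(rng.randint(20, 60))]
         for psu in range(888)}

def two_stage_sample(frame, n_psu, n_households, rng):
    """Stage 1: simple random sample of PSUs.
    Stage 2: simple random sample of households within each selected PSU."""
    selected_psus = rng.sample(sorted(frame), n_psu)
    return {psu: rng.sample(frame[psu], min(n_households, len(frame[psu])))
            for psu in selected_psus}

sample = two_stage_sample(frame, n_psu=50, n_households=9, rng=rng)
```

In a real design each household's inclusion probability is the product of its PSU's first-stage probability and its second-stage probability within the PSU, which is what design-based weights are built from.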
Guerreiro, Rita; Ross, Owen A; Kun-Rodrigues, Celia; Hernandez, Dena G; Orme, Tatiana; Eicher, John D; Shepherd, Claire E; Parkkinen, Laura; Darwent, Lee; Heckman, Michael G; Scholz, Sonja W; Troncoso, Juan C; Pletnikova, Olga; Ansorge, Olaf; Clarimon, Jordi; Lleo, Alberto; Morenas-Rodriguez, Estrella; Clark, Lorraine; Honig, Lawrence S; Marder, Karen; Lemstra, Afina; Rogaeva, Ekaterina; St George-Hyslop, Peter; Londos, Elisabet; Zetterberg, Henrik; Barber, Imelda; Braae, Anne; Brown, Kristelle; Morgan, Kevin; Troakes, Claire; Al-Sarraj, Safa; Lashley, Tammaryn; Holton, Janice; Compta, Yaroslau; Van Deerlin, Vivianna; Serrano, Geidy E; Beach, Thomas G; Lesage, Suzanne; Galasko, Douglas; Masliah, Eliezer; Santana, Isabel; Pastor, Pau; Diez-Fairen, Monica; Aguilar, Miquel; Tienari, Pentti J; Myllykangas, Liisa; Oinas, Minna; Revesz, Tamas; Lees, Andrew; Boeve, Brad F; Petersen, Ronald C; Ferman, Tanis J; Escott-Price, Valentina; Graff-Radford, Neill; Cairns, Nigel J; Morris, John C; Pickering-Brown, Stuart; Mann, David; Halliday, Glenda M; Hardy, John; Trojanowski, John Q; Dickson, Dennis W; Singleton, Andrew; Stone, David J; Bras, Jose
2018-01-01
Summary Background Dementia with Lewy bodies is the second most common form of dementia in elderly people but has been overshadowed in the research field, partly because of similarities between dementia with Lewy bodies, Parkinson’s disease, and Alzheimer’s disease. So far, to our knowledge, no large-scale genetic study of dementia with Lewy bodies has been done. To better understand the genetic basis of dementia with Lewy bodies, we have done a genome-wide association study with the aim of identifying genetic risk factors for this disorder. Methods In this two-stage genome-wide association study, we collected samples from white participants of European ancestry who had been diagnosed with dementia with Lewy bodies according to established clinical or pathological criteria. In the discovery stage (with the case cohort recruited from 22 centres in ten countries and the controls derived from two publicly available Database of Genotypes and Phenotypes (dbGaP) studies [phs000404.v1.p1 and phs000982.v1.p1] in the USA), we performed genotyping and exploited the recently established Haplotype Reference Consortium panel as the basis for imputation. Pathological samples were ascertained following autopsy in each individual brain bank, whereas clinical samples were collected after participant examination. There was no specific timeframe for collection of samples. We did association analyses in all participants with dementia with Lewy bodies, and also only in participants with pathological diagnosis. In the replication stage, we performed genotyping of significant and suggestive results from the discovery stage. Lastly, we did a meta-analysis of both stages under a fixed-effects model and used logistic regression to test for association in each stage.
Findings This study included 1743 patients with dementia with Lewy bodies (1324 with pathological diagnosis) and 4454 controls (1216 patients with dementia with Lewy bodies vs 3791 controls in the discovery stage; 527 vs 663 in the replication stage). Results confirm previously reported associations: APOE (rs429358; odds ratio [OR] 2·40, 95% CI 2·14–2·70; p=1·05 × 10−48), SNCA (rs7681440; OR 0·73, 0·66–0·81; p=6·39 × 10−10), and GBA (rs35749011; OR 2·55, 1·88–3·46; p=1·78 × 10−9). They also provide some evidence for a novel candidate locus, namely CNTN1 (rs7314908; OR 1·51, 1·27–1·79; p=2·32 × 10−6); further replication will be important. Additionally, we estimate the heritable component of dementia with Lewy bodies to be about 36%. Interpretation Despite the small sample size for a genome-wide association study, and acknowledging the potential biases from ascertaining samples from multiple locations, we present the most comprehensive and well powered genetic study in dementia with Lewy bodies so far. These data show that common genetic variability has a role in the disease. PMID:29263008
Raman spectroscopy for the assessment of acute myeloid leukemia: a proof of concept study
NASA Astrophysics Data System (ADS)
Vanna, R.; Tresoldi, C.; Ronchi, P.; Lenferink, A. T. M.; Morasso, C.; Mehn, D.; Bedoni, M.; Terstappen, L. W. M. M.; Ciceri, F.; Otto, C.; Gramatica, F.
2014-03-01
Acute myeloid leukemia (AML) is a proliferative neoplasm that, if not properly treated, can rapidly be fatal. The diagnosis of AML is challenging, and the first diagnostic step is counting the percentage of blasts (immature cells) in bone marrow and blood samples and characterizing their morphology. This evaluation is still performed manually with a bright-field light microscope. Here we report results of a study applying Raman spectroscopy to samples from two patients affected by two AML subtypes characterized by different maturation stages in the neutrophilic lineage. Ten representative cells per sample were selected and analyzed with high-resolution confocal Raman microscopy by scanning 64x64 (4096) points in a confocal layer through the volume of the whole cell. The average spectrum of each cell was then used to obtain a highly reproducible mean fingerprint of the two AML subtypes. We demonstrate that Raman spectroscopy efficiently distinguishes these AML subtypes. The molecular interpretation of the substantial differences between the subtypes relates to granulocytic enzymes (e.g., myeloperoxidase and cytochrome b558), in agreement with the different maturation stages of the two AML subtypes. These results are promising for the development of a new, objective, automated, and label-free Raman-based method for the diagnosis and first assessment of AML.
Tensor Rank Preserving Discriminant Analysis for Facial Recognition.
Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo
2017-10-12
Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints of each pixel. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank-order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.
NASA Technical Reports Server (NTRS)
Tomberlin, T. J.
1985-01-01
Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed here provide a basis for such sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
Arıkan, İnci; Gülcan, Aynur; Dıbeklıoğlu, Saime Ergen
2016-09-01
The aim of the study was to determine the incidence of intestinal parasitic diseases (IPD) and associated factors in primary school students and to assess the knowledge and practices of mothers about these diseases. This is a cross-sectional study carried out in January-March 2014 in 471 students aged 5-11 years, attending 3 schools randomly selected from city-centre regions with different socioeconomic levels. A stratified sampling method was used, and the data were collected in two stages. In the first stage, parents were informed about the study and pre-prepared questionnaire forms were used to collect data about the students and parents. In the second stage, laboratory analyses of collected stool samples were performed. The total prevalence of IPD was 18.3%; it was higher in the primary school located in a region with a lower socioeconomic level than in the other two schools (27.6% vs. 14.4% and 10%, respectively). The most commonly detected parasite was E. vermicularis (12.1%). The prevalence of IPD was not associated with classroom, gender, number of siblings, or the use of purified drinking water at home, while it was found to decrease with increasing maternal education level. The maternal knowledge level score was 12.01±4.29 vs. 13.41±3.94 in students with and without IPD, respectively. With regard to the methods used to treat IPD, 23% of the mothers reported using conventional methods. Health education programmes about the associated risk factors are of great importance for early detection and treatment of childhood parasitic infections. Copyright© by the National Institute of Public Health, Prague 2016
Apparatus and methods for controlling electron microscope stages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duden, Thomas
Methods and apparatus for generating an image of a specimen with a microscope (e.g., TEM) are disclosed. In one aspect, the microscope may generally include a beam generator, a stage, a detector, and an image generator. A plurality of crystal parameters, which describe a plurality of properties of a crystal sample, are received. In a display associated with the microscope, an interactive control sphere based at least in part on the received crystal parameters and that is rotatable by a user to different sphere orientations is presented. The sphere includes a plurality of stage coordinates that correspond to a plurality of positions of the stage and a plurality of crystallographic pole coordinates that correspond to a plurality of polar orientations of the crystal sample. Movement of the sphere causes movement of the stage, wherein the stage coordinates move in conjunction with the crystallographic coordinates represented by pole positions so as to show a relationship between stage positions and the pole positions.
Tai, Dean C.S.; Wang, Shi; Cheng, Chee Leong; Peng, Qiwen; Yan, Jie; Chen, Yongpeng; Sun, Jian; Liang, Xieer; Zhu, Youfu; Rajapakse, Jagath C.; Welsch, Roy E.; So, Peter T.C.; Wee, Aileen; Hou, Jinlin; Yu, Hanry
2014-01-01
Background & Aims There is increasing need for accurate assessment of liver fibrosis/cirrhosis. We aimed to develop qFibrosis, a fully-automated assessment method combining quantification of histopathological architectural features, to address unmet needs in core biopsy evaluation of fibrosis in chronic hepatitis B (CHB) patients. Methods qFibrosis was established as a combined index based on 87 parameters of architectural features. Images acquired from 25 thioacetamide-treated rat samples and 162 CHB core biopsies were used to train and test qFibrosis and to demonstrate its reproducibility. qFibrosis scoring was analyzed employing Metavir and Ishak fibrosis staging as standard references, and collagen proportionate area (CPA) measurement for comparison. Results qFibrosis faithfully and reliably recapitulates Metavir fibrosis scores, as it can identify differences between all stages in both animal samples (p <0.001) and human biopsies (p <0.05). It is robust to sample size, allowing for discrimination of different stages in samples of different sizes (area under the curve (AUC): 0.93–0.99 for animal samples: 1–16 mm2; AUC: 0.84–0.97 for biopsies: 10–44 mm in length). qFibrosis can significantly predict staging underestimation in suboptimal biopsies (<15 mm) and under- and over-scoring by different pathologists (p <0.001). qFibrosis can also differentiate between Ishak stages 5 and 6 (AUC: 0.73, p = 0.008), suggesting the possibility of monitoring intra-stage cirrhosis changes. Best of all, qFibrosis demonstrates superior performance to CPA on all counts. Conclusions qFibrosis can improve fibrosis scoring accuracy and throughput, thus allowing for reproducible and reliable analysis of efficacies of anti-fibrotic therapies in clinical research and practice. PMID:24583249
NASA Astrophysics Data System (ADS)
Peterson, Karl
Since the discovery in the late 1930s that air entrainment can improve the durability of concrete, it has been important for people to know the quantity, spatial distribution, and size distribution of the air-voids in their concrete mixes in order to ensure a durable final product. The task of air-void system characterization has fallen on the microscopist, who, according to a standard test method laid forth by the American Society for Testing and Materials, must meticulously count or measure about a thousand air-voids per sample as exposed on a cut and polished cross-section of concrete. The equipment used to perform this task has traditionally included a stereomicroscope, a mechanical stage, and a tally counter. Over the past 30 years, with the availability of computers and digital imaging, automated methods have been introduced to perform the same task, but using the same basic equipment. The method described here replaces the microscope and mechanical stage with an ordinary flatbed desktop scanner, and replaces the microscopist and tally counter with a personal computer; two pieces of equipment much more readily available than a microscope with a mechanical stage, and certainly easier to find than a person willing to sit for extended periods of time counting air-voids. Most laboratories that perform air-void system characterization typically have cabinets full of prepared samples with corresponding results from manual operators. Proponents of automated methods often take advantage of this fact by analyzing the same samples and comparing the results. A similar iterative approach is described here, where scanned images collected from a significant number of samples are analyzed, the results compared to those of the manual operator, and the settings optimized to best approximate the results of the manual operator.
The results of this calibration procedure are compared to an alternative calibration procedure based on the more rigorous digital image accuracy assessment methods employed primarily by the remote sensing/satellite imaging community.
Smith, D.R.; Rogala, J.T.; Gray, B.R.; Zigler, S.J.; Newton, T.J.
2011-01-01
Reliable estimates of abundance are needed to assess consequences of proposed habitat restoration and enhancement projects on freshwater mussels in the Upper Mississippi River (UMR). Although there is general guidance on sampling techniques for population assessment of freshwater mussels, the actual performance of sampling designs can depend critically on the population density and spatial distribution at the project site. To evaluate various sampling designs, we simulated sampling of populations that varied in density and degree of spatial clustering. Because of the logistics and costs of large-river sampling and the spatial clustering of freshwater mussels, we focused on adaptive and non-adaptive versions of single- and two-stage sampling. The candidate designs performed similarly in terms of precision (CV) and probability of species detection for a fixed sample size. Both CV and species detection were determined largely by density, spatial distribution and sample size. However, the designs did differ in the rate at which occupied quadrats were encountered. Occupied units had a higher probability of selection under adaptive designs than under conventional designs. We used two measures of cost: sample size (i.e. number of quadrats) and distance travelled between quadrats. Adaptive and two-stage designs tended to reduce the distance between sampling units, and thus performed better when distance travelled was considered. Based on these comparisons, we provide general recommendations on sampling designs for freshwater mussels in the UMR, and presumably other large rivers.
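The simulation approach summarized above — generate a clustered population, then compare the precision of candidate designs over repeated draws — can be sketched as follows. This is an illustration under invented parameters, not the authors' code; for brevity only simple random sampling of quadrats is evaluated, whereas the study also compared adaptive and two-stage designs.

```python
import random
import statistics

def clustered_population(n_quadrats=400, n_clusters=8, per_cluster=40, rng=random):
    """Mussel counts per quadrat: mostly empty, with a few tight clusters."""
    counts = [0] * n_quadrats
    for _ in range(n_clusters):
        centre = rng.randrange(n_quadrats)
        for _ in range(per_cluster):
            # Scatter each individual near its cluster centre.
            idx = min(n_quadrats - 1, max(0, centre + rng.randint(-5, 5)))
            counts[idx] += 1
    return counts

def empirical_cv(counts, sample_size, reps=500, rng=random):
    """CV of the expansion estimator of total abundance under repeated SRS."""
    estimates = [statistics.mean(rng.sample(counts, sample_size)) * len(counts)
                 for _ in range(reps)]
    return statistics.stdev(estimates) / statistics.mean(estimates)

rng = random.Random(1)
population = clustered_population(rng=rng)
cv = empirical_cv(population, sample_size=50, rng=rng)
```

Repeating `empirical_cv` with samplers for other designs (two-stage, adaptive allocation) at matched cost is the comparison the abstract describes.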
OpenCL based machine learning labeling of biomedical datasets
NASA Astrophysics Data System (ADS)
Amoros, Oscar; Escalera, Sergio; Puig, Anna
2011-03-01
In this paper, we propose a two-stage labeling method of large biomedical datasets through a parallel approach in a single GPU. Diagnostic methods, structure volume measurements, and visualization systems are of major importance for surgery planning, intra-operative imaging and image-guided surgery. In all cases, providing an automatic and interactive method to label or tag different structures contained in the input data becomes imperative. Several approaches to labeling or segmenting biomedical datasets have been proposed to discriminate different anatomical structures in an output tagged dataset. Among existing methods, supervised learning methods for segmentation have been devised to let a non-expert user easily analyze biomedical datasets. However, they still have some problems concerning practical application, such as slow learning and testing speeds. In addition, recent technological developments have led to widespread availability of multi-core CPUs and GPUs, as well as new software languages, such as NVIDIA's CUDA and OpenCL, making it possible to apply parallel programming paradigms on conventional personal computers. The Adaboost classifier is one of the most widely applied methods for labeling in the Machine Learning community. In a first stage, Adaboost trains a binary classifier from a set of pre-labeled samples described by a set of features. This binary classifier is defined as a weighted combination of weak classifiers. Each weak classifier is a simple decision function estimated on a single feature value. Then, at the testing stage, each weak classifier is independently applied on the features of a set of unlabeled samples. In this work, we propose an alternative representation of the Adaboost binary classifier. We use this proposed representation to define a new GPU-based parallelized Adaboost testing stage using OpenCL.
We provide numerical experiments based on large available data sets and we compare our results to CPU-based strategies in terms of time and labeling speeds.
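The Adaboost testing stage described above — each weak classifier is a thresholded decision on a single feature, applied independently and combined by a weighted vote — is what makes the per-sample work parallelizable. A minimal sequential sketch (stump parameters and weights are invented for illustration; the paper's contribution is an OpenCL representation of this same computation):

```python
def stump(feature_idx, threshold, polarity=1):
    """Weak classifier: a decision on a single feature value, output in {-1, +1}."""
    return lambda x: polarity if x[feature_idx] > threshold else -polarity

def adaboost_predict(x, weak_classifiers, alphas):
    """Strong classifier: sign of the weighted combination of weak outputs.
    Each weak classifier is evaluated independently of the others, so at
    test time the inner sum can be distributed across GPU threads."""
    score = sum(alpha * h(x) for h, alpha in zip(weak_classifiers, alphas))
    return 1 if score >= 0 else -1

# Three invented stumps with training-derived weights alpha.
stumps = [stump(0, 0.5), stump(1, 0.3), stump(0, 0.9, polarity=-1)]
alphas = [1.2, 0.8, 0.5]
label = adaboost_predict([0.7, 0.1], stumps, alphas)
```

At the testing stage this loop runs once per unlabeled sample, which is the embarrassingly parallel structure the GPU implementation exploits.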
2017-01-01
Objective To determine whether less invasive endometrial (EM) aspiration biopsy is adequately accurate for evaluating treatment outcomes compared to the dilatation and curettage (D&C) biopsy in early-stage endometrial cancer (EC) patients treated with high dose oral progestin and the levonorgestrel intrauterine system (LNG-IUS). Methods We conducted a prospective observational study with patients younger than 40 years who were diagnosed with clinical stage IA, The International Federation of Gynecology and Obstetrics grade 1 or 2 endometrioid adenocarcinoma and sought to maintain their fertility. The patients were treated with medroxyprogesterone acetate 500 mg/day and LNG-IUS. Treatment responses were evaluated every 3 months. EM aspiration biopsy was conducted after LNG-IUS removal, followed by D&C. The tissue samples were histologically compared. The diagnostic concordance rate of the two tests was examined with κ statistics. Results Twenty-eight pairs of EM samples were obtained from five patients. The diagnostic concordance rate of D&C and EM aspiration biopsy was 39.3% (κ value=0.26). Of the seven samples diagnosed as normal with D&C, three (42.8%) were diagnosed as normal by using EM aspiration biopsy. Of the eight samples diagnosed with endometrioid adenocarcinoma by using D&C, three (37.5%) were diagnosed with endometrioid adenocarcinoma by using EM aspiration biopsy. Of the 13 complex EM hyperplasia samples diagnosed with the D&C, five (38.5%) were diagnosed with EM hyperplasia by using EM aspiration biopsy. Of the samples obtained through EM aspiration, 46.4% were insufficient for histological evaluation. Conclusion To evaluate the treatment responses of patients with early-stage EC treated with high dose oral progestin and LNG-IUS, D&C should be conducted after LNG-IUS removal. PMID:27670255
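The diagnostic concordance in this abstract is reported as a κ statistic (observed agreement corrected for chance agreement). A minimal Cohen's κ sketch in Python; the function name and the example ratings are illustrative, not the study's data:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    marg_a, marg_b = Counter(ratings_a), Counter(ratings_b)
    # Chance agreement: sum over categories of the product of the two
    # raters' marginal proportions.
    expected = sum(marg_a[c] * marg_b.get(c, 0) for c in marg_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

A κ of 0.26, as reported above, indicates only fair agreement between the two biopsy methods even though raw concordance is 39.3%.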
D, Hekmatpou; L, Moeini; S, Haji-Nadali
2013-01-01
Objective: Wet cupping is a traditional bloodletting method recommended for controlling respiratory disease complications. This study aimed to compare the efficacy of wet cupping vs. venesection on the arterial O2 saturation level of smokers. Methods: This is a randomized controlled clinical trial that started with simple sampling of smokers. After administering spirometry, participants (N = 110 male smokers) with a positive pulmonary function test (PFT), who manifested chronic obstructive pulmonary disease (COPD), were randomly assigned to intervention and control groups. The two groups were assessed in terms of demographic data, hemoglobin (Hb) level, hematocrit (Hct), and arterial O2 saturation. Then, the intervention participants underwent wet cupping whereas venesection was performed on the control participants. Pulse oximetry was performed at four stages after the two treatments. Data were analyzed using SPSS (Version 17). Results: Results show that the mean arterial O2 saturation level increased across the measurement stages (before, immediately after, and 6 and 12 hrs after the two treatments) (p ≤ 0.001). This indicates that wet cupping and venesection alike were effective on the O2 saturation level in the two groups, but the increasing pattern was maintained 12 hrs afterward only in participants who had received wet cupping (p ≤ 0.001). Moreover, the results of repeated-measures ANOVA between the two groups at the four stages showed significant differences between the mean O2 saturation levels at the 6- and 12-hr stages (F = 66.92, p ≤ 0.001). Conclusion: Wet cupping produced a sustained O2 saturation increase in the intervention group up to 12 hrs afterward. Participants expressed liveliness and improved respiration after wet cupping. Therefore, wet cupping is recommended for promoting the health of cigarette smokers. PMID:24550951
DeepSleepNet: A Model for Automatic Sleep Stage Scoring Based on Raw Single-Channel EEG.
Supratak, Akara; Dong, Hao; Wu, Chao; Guo, Yike
2017-11-01
This paper proposes a deep learning model, named DeepSleepNet, for automatic sleep stage scoring based on raw single-channel EEG. Most of the existing methods rely on hand-engineered features, which require prior knowledge of sleep analysis. Only a few of them encode the temporal information, such as transition rules, which is important for identifying the next sleep stages, into the extracted features. In the proposed model, we utilize convolutional neural networks to extract time-invariant features, and bidirectional long short-term memory to learn transition rules among sleep stages automatically from EEG epochs. We implement a two-step training algorithm to train our model efficiently. We evaluated our model using different single-channel EEGs (F4-EOG (left), Fpz-Cz, and Pz-Oz) from two public sleep data sets that have different properties (e.g., sampling rate) and scoring standards (AASM and R&K). The results showed that our model achieved similar overall accuracy and macro F1-score (MASS: 86.2%-81.7, Sleep-EDF: 82.0%-76.9) compared with the state-of-the-art methods (MASS: 85.9%-80.5, Sleep-EDF: 78.9%-73.7) on both data sets. This demonstrated that, without changing the model architecture and the training algorithm, our model could automatically learn features for sleep stage scoring from different raw single-channel EEGs from different data sets without utilizing any hand-engineered features.
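The two reported evaluation metrics, overall accuracy and macro F1-score (MF1), can be computed from per-epoch stage labels. A minimal sketch in pure Python; the five-stage label sequences below are invented for illustration:

```python
# Hedged sketch of the evaluation metrics used above, computed from
# per-epoch sleep-stage labels (W, N1, N2, N3, REM). Labels are invented.

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["W", "N1", "N2", "N2", "N3", "REM", "W", "N2"]
y_pred = ["W", "N2", "N2", "N2", "N3", "REM", "W", "N1"]
print(accuracy(y_true, y_pred), round(macro_f1(y_true, y_pred), 3))
```

Macro averaging weights each stage equally, which is why MF1 is the metric of choice for imbalanced hypnograms (N2 epochs typically dominate).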
Method of and apparatus for testing the integrity of filters
Herman, R.L.
1985-05-07
A method of and apparatus are disclosed for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage. 5 figs.
Method of and apparatus for testing the integrity of filters
Herman, Raymond L [Richland, WA
1985-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
Methods of and apparatus for testing the integrity of filters
Herman, R.L.
1984-01-01
A method of and apparatus for testing the integrity of individual filters or filter stages of a multistage filtering system including a diffuser permanently mounted upstream and/or downstream of the filter stage to be tested for generating pressure differentials to create sufficient turbulence for uniformly dispersing trace agent particles within the airstream upstream and downstream of such filter stage. Samples of the particle concentration are taken upstream and downstream of the filter stage for comparison to determine the extent of particle leakage past the filter stage.
Tan, Ling; Hu, Yerong; Tao, Yongguang; Wang, Bin; Xiao, Jun; Tang, Zhenjie; Lu, Ting
2018-01-01
Background To identify whether RET is a potential target for NSCLC treatment, we examined the status of the RET gene in 631 early and mid stage NSCLC cases from south central China. Methods RET expression was identified by Western blot. RET‐positive expression samples were verified by immunohistochemistry. RET gene mutation, copy number variation, and rearrangement were analyzed by DNA Sanger sequencing, TaqMan copy number assays, and reverse transcription‐PCR. ALK and ROS1 expression levels were tested by Western blot and EGFR mutation using Sanger sequencing. Results The RET‐positive rate was 2.5% (16/631). RET‐positive expression was related to poorer tumor differentiation (P < 0.05). In the 16 RET‐positive samples, only two samples of moderately and poorly differentiated lung adenocarcinomas displayed RET rearrangement, both in RET‐KIF5B fusion partners. Neither ALK nor ROS1 translocation was found. The EGFR mutation rate in RET‐positive samples was significantly lower than in RET‐negative samples (P < 0.05). Conclusion RET‐positive expression in early and mid stage NSCLC cases from south central China is relatively low and is related to poorer tumor differentiation. RET gene alterations (copy number gain and rearrangement) exist in all RET‐positive samples. RET‐positive expression is a relatively independent factor in NSCLC patients, which indicates that the RET gene may be a novel target site for personalized treatment of NSCLC. PMID:29473341
Liu, Jin-Na; Xie, Xiao-Liang; Yang, Tai-Xin; Zhang, Cun-Li; Jia, Dong-Sheng; Liu, Ming; Wen, Chun-Xiu
2014-04-01
To study the effects of different mature stages and processing methods on the quality of Trichosanthes kirilowii seeds. The content of 3,29-dibenzoyl rarounitriol in Trichosanthes kirilowii seeds was determined by HPLC. Samples at different mature stages (immature, near mature, and fully mature) and processed by different methods were studied. Fully mature Trichosanthes kirilowii seeds were better than immature ones, and the best processing method was drying at 60 °C, at which the content of 3,29-dibenzoyl rarounitriol reached up to 131.63 µg/mL. Different processing methods and different mature stages had a significant influence on the quality of Trichosanthes kirilowii seeds.
Vanamail, P; Subramanian, S; Srividya, A; Ravi, R; Krishnamoorthy, K; Das, P K
2006-08-01
Lot quality assurance sampling (LQAS) with two-stage sampling plan was applied for rapid monitoring of coverage after every round of mass drug administration (MDA). A Primary Health Centre (PHC) consisting of 29 villages in Thiruvannamalai district, Tamil Nadu was selected as the study area. Two threshold levels of coverage were used: threshold A (maximum: 60%; minimum: 40%) and threshold B (maximum: 80%; minimum: 60%). Based on these thresholds, one sampling plan each for A and B was derived with the necessary sample size and the number of allowable defectives (i.e. defectives mean those who have not received the drug). Using data generated through simple random sampling (SRSI) of 1,750 individuals in the study area, LQAS was validated with the above two sampling plans for its diagnostic and field applicability. Simultaneously, a household survey (SRSH) was conducted for validation and cost-effectiveness analysis. Based on SRSH survey, the estimated coverage was 93.5% (CI: 91.7-95.3%). LQAS with threshold A revealed that by sampling a maximum of 14 individuals and by allowing four defectives, the coverage was >or=60% in >90% of villages at the first stage. Similarly, with threshold B by sampling a maximum of nine individuals and by allowing four defectives, the coverage was >or=80% in >90% of villages at the first stage. These analyses suggest that the sampling plan (14,4,52,25) of threshold A may be adopted in MDA to assess if a minimum coverage of 60% has been achieved. However, to achieve the goal of elimination, the sampling plan (9, 4, 42, 29) of threshold B can identify villages in which the coverage is <80% so that remedial measures can be taken. Cost-effectiveness analysis showed that both options of LQAS are more cost-effective than SRSH to detect a village with a given level of coverage. The cost per village was US dollars 76.18 under SRSH. The cost of LQAS was US dollars 65.81 and 55.63 per village for thresholds A and B respectively. 
The total financial cost of classifying a village correctly with the given threshold level of LQAS could be reduced by 14% and 26% of the cost of the conventional SRSH method.
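The first-stage LQAS decision rule quoted above, with (n, d) plans such as (14, 4) for threshold A and (9, 4) for threshold B, amounts to sampling up to n individuals and failing the village once more than d non-recipients ("defectives") are found. A hedged sketch, with an invented village:

```python
# Hedged sketch of the LQAS first-stage decision rule described above.
# The (n, d) pairs mirror the plans quoted in the abstract; the village
# data are invented.

def lqas_classify(received, n, d):
    """received: booleans (True = received the drug) in sampling order.
    Returns True if the village passes, i.e. at most d defectives are
    found among the first n sampled individuals."""
    defectives = 0
    for got_drug in received[:n]:
        if not got_drug:
            defectives += 1
        if defectives > d:      # early stop: the lot must fail
            return False
    return True

village = [True] * 10 + [False] * 4   # 4 non-recipients in 14 sampled
print(lqas_classify(village, n=14, d=4))   # passes the threshold A plan
```

The early stop is what makes LQAS cheap in the field: a village can be failed before the full sample of n is drawn.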
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belinsky, Steven A; Palmisano, William A
A molecular marker-based method for monitoring and detecting cancer in humans. Aberrant methylation of gene promoters is a marker for cancer risk in humans. A two-stage, or "nested" polymerase chain reaction method is disclosed for detecting methylated DNA sequences at sufficiently high levels of sensitivity to permit cancer screening in biological fluid samples, such as sputum, obtained non-invasively. The method is for detecting the aberrant methylation of the p16 gene, O6-methylguanine-DNA methyltransferase gene, Death-associated protein kinase gene, RAS-associated family 1 gene, or other gene promoters. The method offers a potentially powerful approach to population-based screening for the detection of lung and other cancers.
Stage Structure of Moral Development: A Comparison of Alternative Models.
ERIC Educational Resources Information Center
Hau, Kit-Tai
This study evaluated the stage structure of several quasi-simplex and non-simplex models of moral development in two domains of moral development in a British and a Chinese sample. Analyses were based on data reported by Sachs (1992): the Chinese sample consisted of 1,005 students from grade 9 to post-college, and the British sample consisted of…
Method for measuring the size distribution of airborne rhinovirus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Russell, M.L.; Goth-Goldstein, R.; Apte, M.G.
About 50% of viral-induced respiratory illnesses are caused by the human rhinovirus (HRV). Measurements of the concentrations and sizes of bioaerosols are critical for research on building characteristics, aerosol transport, and mitigation measures. We developed a quantitative reverse transcription-coupled polymerase chain reaction (RT-PCR) assay for HRV and verified that this assay detects HRV in nasal lavage samples. A quantitation standard was used to determine a detection limit of 5 fg of HRV RNA with a linear range over 1000-fold. To measure the size distribution of HRV aerosols, volunteers with a head cold spent two hours in a ventilated research chamber. Airborne particles from the chamber were collected using an Andersen Six-Stage Cascade Impactor. Each stage of the impactor was analyzed by quantitative RT-PCR for HRV. For the first two volunteers with confirmed HRV infection, but with mild symptoms, we were unable to detect HRV on any stage of the impactor.
Baiyewu, Olusegun; Smith-Gamble, Valerie; Lane, Kathleen A.; Gureje, Oye; Gao, Sujuan; Ogunniyi, Adesola; Unverzagt, Frederick W.; Hall, Kathleen S.; Hendrie, Hugh C.
2010-01-01
Background This is a community-based longitudinal epidemiological comparative study of elderly African Americans in Indianapolis and elderly Yoruba in Ibadan, Nigeria. Method A two-stage study was designed in which community-based individuals were first screened using the Community Screening Interview for Dementia. The second stage was a full clinical assessment, which included use of the Geriatric Depression Scale, of a smaller sub-sample of individuals selected on the basis of their performance in the screening interview. Prevalence of depression was estimated using sampling weights according to the sampling stratification scheme for clinical assessment. Results Some 2627 individuals were evaluated at the first stage in Indianapolis and 2806 in Ibadan. All were aged 69 years and over. Of these, 451 (17.2%) underwent clinical assessment in Indianapolis, while 605 (21.6%) were assessed in Ibadan. The prevalence estimates of both mild and severe depression were similar for the two sites (p = 0.1273 and p = 0.7093): 12.3% (mild depression) and 2.2% (severe depression) in Indianapolis and 19.8% and 1.6% respectively in Ibadan. Some differences were identified in association with demographic characteristics; for example, Ibadan men had a significantly higher prevalence of mild depression than Indianapolis men (p < 0.0001). Poor cognitive performance was associated with significantly higher rates of depression in Yoruba (p = 0.0039). Conclusion Prevalence of depression was similar for elderly African Americans and Yoruba despite considerable socioeconomic and cultural differences between these populations. PMID:17506912
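The weighted prevalence estimation described in the Methods above can be sketched as inverse-probability weighting over the stage-two clinical subsample. The strata, weights, and case indicators below are invented:

```python
# Hedged sketch: prevalence estimated with sampling weights, as in a
# two-stage design where clinically assessed subjects are weighted by
# the inverse of their stage-two selection probability. Numbers invented.

def weighted_prevalence(cases, weights):
    """cases: 1 if depressed, 0 otherwise; weights: inverse selection
    probabilities from the screening stratification."""
    return sum(c * w for c, w in zip(cases, weights)) / sum(weights)

# two hypothetical strata: poor screening performers oversampled
# (weight 2), good performers undersampled (weight 10)
cases   = [1, 0, 1, 1, 0, 0, 1, 0]
weights = [2, 2, 2, 2, 10, 10, 10, 10]
print(round(weighted_prevalence(cases, weights), 3))
```

Without the weights, the oversampled poor performers (who are more often depressed) would inflate the crude prevalence; the weighting undoes the stratified selection.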
Evaluation of bias and logistics in a survey of adults at increased risk for oral health decrements.
Gilbert, G H; Duncan, R P; Kulley, A M; Coward, R T; Heft, M W
1997-01-01
Designing research to include sufficient respondents in groups at highest risk for oral health decrements can present unique challenges. Our purpose was to evaluate bias and logistics in this survey of adults at increased risk for oral health decrements. We used a telephone survey methodology that employed both listed numbers and random digit dialing to identify dentate persons 45 years old or older and to oversample blacks, poor persons, and residents of nonmetropolitan counties. At a second stage, a subsample of the respondents to the initial telephone screening was selected for further study, which consisted of a baseline in-person interview and a clinical examination. We assessed bias due to: (1) limiting the sample to households with telephones, (2) using predominantly listed numbers instead of random digit dialing, and (3) nonresponse at two stages of data collection. While this approach apparently created some biases in the sample, they were small in magnitude. Specifically, limiting the sample to households with telephones biased the sample overall toward more females, larger households, and fewer functionally impaired persons. Using predominantly listed numbers led to a modest bias toward selection of persons more likely to be younger, healthier, female, have had a recent dental visit, and reside in smaller households. Blacks who were selected randomly at a second stage were more likely to participate in baseline data gathering than their white counterparts. Comparisons of the data obtained in this survey with those from recent national surveys suggest that this methodology for sampling high-risk groups did not substantively bias the sample with respect to two important dental parameters, prevalence of edentulousness and dental care use, nor were conclusions about multivariate associations with dental care recency substantively affected. 
This method of sampling persons at high risk for oral health decrements resulted in only modest bias with respect to the population of interest.
Two-stage microfluidic chip for selective isolation of circulating tumor cells (CTCs).
Hyun, Kyung-A; Lee, Tae Yoon; Lee, Su Hyun; Jung, Hyo-Il
2015-05-15
Over the past few decades, circulating tumor cells (CTCs) have been studied as a means of overcoming cancer. However, the rarity and heterogeneity of CTCs have been the most significant hurdles in CTC research. Many techniques for CTC isolation have been developed and can be classified into positive enrichment (i.e., specifically isolating target cells using cell size, surface protein expression, and so on) and negative enrichment (i.e., specifically eluting non-target cells). Positive enrichment methods lead to high purity, but could be biased by their selection criteria, while the negative enrichment methods have relatively low purity, but can isolate heterogeneous CTCs. To compensate for the known disadvantages of the positive and negative enrichments, in this study we introduced a two-stage microfluidic chip. The first stage involves a microfluidic magnetic activated cell sorting (μ-MACS) chip to elute white blood cells (WBCs). The second stage involves a geometrically activated surface interaction (GASI) chip for the selective isolation of CTCs. We observed up to 763-fold enrichment in cancer cells spiked into 5 mL of blood sample using the μ-MACS chip at a 400 μL/min flow rate. Cancer cells were successfully separated with separation efficiencies ranging from 10.19% to 22.91% based on their EpCAM or HER2 surface protein expression using the GASI chip at a 100 μL/min flow rate. Our two-stage microfluidic chips not only isolated CTCs from blood cells, but also classified heterogeneous CTCs based on their characteristics. Therefore, our chips can contribute to research on the heterogeneity of CTCs and, by extension, personalized cancer treatment. Copyright © 2014 Elsevier B.V. All rights reserved.
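The fold-enrichment figure quoted above is a ratio of target-to-background cell ratios before and after processing. A sketch of the arithmetic, with invented counts chosen only to land near the reported order of magnitude:

```python
# Hedged sketch of the fold-enrichment arithmetic implied above: compare
# the target-to-background ratio after processing with the ratio in the
# input sample. All counts are invented for illustration.

def fold_enrichment(target_in, background_in, target_out, background_out):
    before = target_in / background_in
    after = target_out / background_out
    return after / before

# e.g. 100 spiked cancer cells among 5e7 WBCs in,
# 80 recovered cells among 52,400 residual WBCs out
print(round(fold_enrichment(100, 5e7, 80, 52_400), 1))   # ~763-fold
```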
Cotruta, Bogdan; Gheorghe, Cristian; Iacob, Razvan; Dumbrava, Mona; Radu, Cristina; Bancila, Ion; Becheanu, Gabriel
2017-12-01
Evaluation of severity and extension of gastric atrophy and intestinal metaplasia is recommended to identify subjects with a high risk for gastric cancer. The inter-observer agreement for the assessment of gastric atrophy is reported to be low. The aim of the study was to evaluate the inter-observer agreement for the assessment of severity and extension of gastric atrophy using oriented and unoriented gastric biopsy samples. Furthermore, the quality of biopsy specimens in oriented and unoriented samples was analyzed. A total of 35 subjects with dyspeptic symptoms addressed for gastrointestinal endoscopy who agreed to enter the study were prospectively enrolled. The OLGA/OLGIM gastric biopsies protocol was used. From each subject two sets of biopsies were obtained (four from the antrum, two oriented and two unoriented; two from the gastric incisure, one oriented and one unoriented; four from the gastric body, two oriented and two unoriented). The orientation of the biopsy samples was performed using nitrocellulose filters (Endokit®, BioOptica, Milan, Italy). The samples were blindly examined by two experienced pathologists. Inter-observer agreement was evaluated using the kappa statistic. The quality of histopathology specimens, judged by whether the lamina propria could be identified, was analyzed in oriented vs. unoriented samples. The samples with detectable lamina propria mucosae were defined as good quality specimens. Categorical data were analyzed using the chi-square test and a two-sided p value <0.05 was considered statistically significant. A total of 350 biopsy samples were analyzed (175 oriented / 175 unoriented). The kappa index values for oriented/unoriented OLGA 0/I/II/III and IV stages were 0.62/0.13, 0.70/0.20, 0.61/0.06, 0.62/0.46, and 0.77/0.50, respectively. For OLGIM 0/I/II/III stages the kappa index values for oriented/unoriented samples were 0.83/0.83, 0.88/0.89, 0.70/0.88 and 0.83/1, respectively.
No case of OLGIM IV stage was found in the present case series. Good quality histopathology specimens were described in 95.43% of the oriented biopsy samples, and in 89.14% of the unoriented biopsy samples, respectively (p=0.0275). The orientation of gastric biopsies specimens improves the inter-observer agreement for the assessment of gastric atrophy.
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to the solutions of the design problem that are least sensitive to variations in the input random variables.
Nagai, Yuichiro; Yokoyama, Tetsuya
2014-05-20
A new two-stage chemical separation method was established using an anion exchange resin, Eichrom 1 × 8, to separate Mo and W from four natural rock samples. First, the distribution coefficients of nine elements (Ti, Fe, Zn, Zr, Nb, Mo, Hf, Ta, and W) under various chemical conditions were determined using HCl, HNO3, and HF. On the basis of the obtained distribution coefficients, a new technique for the two-stage chemical separation of Mo and W, along with the group separation of Ti-Zr-Hf, was developed as follows: 0.4 M HCl-0.5 M HF (major elements), 9 M HCl-0.05 M HF (Ti-Zr-Hf), 9 M HCl-1 M HF (W), and 6 M HNO3-3 M HF (Mo). After the chemical procedure, Nb remaining in the W fraction was separated using 9 M HCl-3 M HF. On the other hand, Nb and Zn remaining in the Mo fraction were removed using 2 M HF and 6 M HCl-0.1 M HF. The performance of this technique was evaluated by separating these elements from two terrestrial and two extraterrestrial samples. The recovery yields for Mo, W, Zr, and Hf were nearly 100% for all of the examined samples. The total contents of the Zr, Hf, W, and Mo in the blanks used for the chemical separation procedure were 582, 9, 29, and 396 pg, respectively. Therefore, our new separation technique can be widely used in various fields of geochemistry, cosmochemistry, and environmental sciences and particularly for multi-isotope analysis of these elements from a single sample with significant internal isotope heterogeneities.
A sampling design framework for monitoring secretive marshbirds
Johnson, D.H.; Gibbs, J.P.; Herzog, M.; Lor, S.; Niemuth, N.D.; Ribic, C.A.; Seamans, M.; Shaffer, T.L.; Shriver, W.G.; Stehman, S.V.; Thompson, W.L.
2009-01-01
A framework for a sampling plan for monitoring marshbird populations in the contiguous 48 states is proposed here. The sampling universe is the breeding habitat (i.e. wetlands) potentially used by marshbirds. Selection protocols would be implemented within large geographical strata, such as Bird Conservation Regions. Site selection will be done using a two-stage cluster sample. Primary sampling units (PSUs) would be land areas, such as legal townships, and would be selected by a procedure such as systematic sampling. Secondary sampling units (SSUs) will be wetlands or portions of wetlands in the PSUs. SSUs will be selected by a randomized spatially balanced procedure. For analysis, the use of a variety of methods as a means of increasing confidence in conclusions that may be reached is encouraged. Additional effort will be required to work out details and implement the plan.
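The two-stage selection protocol (systematic sampling of PSUs, then randomized selection of SSUs within them) can be sketched as follows. A minimal illustration: the township and wetland names and counts are invented, and simple random sampling is used at stage two in place of the spatially balanced procedure the plan actually calls for:

```python
# Hedged sketch of two-stage cluster sampling: systematic sampling of
# primary sampling units (townships), then simple random sampling of
# secondary units (wetlands) within each selected PSU. Names invented.

import random

def systematic_sample(units, n):
    """Systematic sample of n units with a random start."""
    step = len(units) / n
    start = random.random() * step          # start in [0, step)
    return [units[int(start + i * step)] for i in range(n)]

def two_stage_sample(psus, ssus_by_psu, n_psu, n_ssu, seed=0):
    random.seed(seed)                       # reproducible for the demo
    sample = {}
    for psu in systematic_sample(psus, n_psu):
        ssus = ssus_by_psu[psu]
        sample[psu] = random.sample(ssus, min(n_ssu, len(ssus)))
    return sample

psus = [f"township_{i}" for i in range(20)]
wetlands = {p: [f"{p}_wetland_{j}" for j in range(6)] for p in psus}
plan = two_stage_sample(psus, wetlands, n_psu=5, n_ssu=2)
print({k: len(v) for k, v in plan.items()})
```

Systematic selection at stage one spreads the PSUs evenly across the ordered frame, a cheap approximation to spatial balance when the list is sorted geographically.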
Implicit Runge-Kutta Methods with Explicit Internal Stages
NASA Astrophysics Data System (ADS)
Skvortsov, L. M.
2018-03-01
The main computational costs of implicit Runge-Kutta methods are caused by solving a system of algebraic equations at every step. Introducing explicit stages makes it possible to increase the stage (or pseudo-stage) order of the method, which improves accuracy and avoids order reduction when solving stiff problems, without the additional cost of solving algebraic equations. The paper presents implicit methods with an explicit first stage and one or two explicit internal stages. The results of solving test problems are compared with similar methods having no explicit internal stages.
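A concrete instance of an implicit method with an explicit first stage is the implicit trapezoidal rule written as a two-stage ESDIRK scheme, A = [[0, 0], [1/2, 1/2]], b = [1/2, 1/2], c = [0, 1]: the first stage reuses f(t_n, y_n) at no algebraic-solve cost, and only the second stage requires a solve. A hedged sketch for scalar ODEs, on an invented stiff test problem:

```python
# Hedged sketch: one step of the trapezoidal rule in ESDIRK form for a
# scalar ODE y' = f(t, y). Stage 1 is explicit (free); stage 2 solves
# k2 = f(t+h, y + h/2*(k1+k2)) by a scalar Newton iteration with a
# finite-difference Jacobian. Test problem and tolerances are invented.

import math

def esdirk_trapezoidal_step(f, t, y, h):
    k1 = f(t, y)                    # explicit first stage
    k2 = k1                         # predictor for the implicit stage
    for _ in range(50):
        yh = y + h / 2 * (k1 + k2)
        g = k2 - f(t + h, yh)       # stage-equation residual
        delta = 1e-8 * (1 + abs(yh))
        fy = (f(t + h, yh + delta) - f(t + h, yh)) / delta  # approx df/dy
        step = g / (1 - h / 2 * fy)                         # Newton step
        k2 -= step
        if abs(step) < 1e-12:
            break
    return y + h / 2 * (k1 + k2)

# stiff test problem: y' = -1000*(y - cos t), y(0) = 1
f = lambda t, y: -1000.0 * (y - math.cos(t))
t, y, h = 0.0, 1.0, 0.05
for _ in range(20):
    y = esdirk_trapezoidal_step(f, t, y, h)
    t += h
print(round(y, 4))   # tracks cos(t) despite h*|lambda| = 50
```

An explicit Euler step at this step size would blow up (h·|λ| = 50); the implicit second stage is what buys the stability.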
Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard
2007-01-01
Background Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. 
Conclusion This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy. PMID:17543100
Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, İrem Ersöz
2013-01-01
Objective: The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to examine whether the method is affected by the number of independent variables, the relationship between variables, and the sample size. Study Design: Simulation study. Material and Methods: The SIMCA model is performed in two stages. To determine whether the method is influenced by the number of independent variables, the relationship between variables, and the sample size, simulations were done for conditions in which sample sizes in both groups are equal at 30, 100, and 1000 samples; the number of variables is 2, 3, 5, 10, 50, or 100; and the relationship between variables is quite high, medium, or quite low. Results: Average classification accuracy of the simulations, which were carried out 1000 times for each condition of the trial plan, is given as tables. Conclusion: Diagnostic accuracy increased as the number of independent variables increased. SIMCA can be used when the relationship between variables is quite high, the number of independent variables is large, and the data contain outlier values. PMID:25207065
Rafieenia, Razieh; Girotto, Francesca; Peng, Wei; Cossu, Raffaello; Pivato, Alberto; Raga, Roberto; Lavagnolo, Maria Cristina
2017-01-01
Aerobic pre-treatment was applied prior to a two-stage anaerobic digestion process. Three different food waste samples, namely carbohydrate rich, protein rich and lipid rich, were prepared as substrates. The effect of aerobic pre-treatment on hydrogen and methane production was studied. Pre-aeration of substrates showed no positive impact on hydrogen production in the first stage. All three categories of pre-aerated food wastes produced less hydrogen compared to samples without pre-aeration. In the second stage, methane production increased for aerated protein rich and carbohydrate rich samples. In addition, the lag phase for the carbohydrate rich substrate was shorter for aerated samples. The aerated protein rich substrate yielded the best results among substrates for methane production, with a cumulative production of approximately 351 ml/gVS. With regard to non-aerated substrates, lipid rich was the best substrate for CH4 production (263 ml/gVS). The pre-aerated protein rich substrate was the best in terms of total energy generation, which amounted to 9.64 kJ/gVS. This study revealed aerobic pre-treatment to be a promising option for use in achieving enhanced substrate conversion efficiencies and CH4 production in a two-stage AD process, particularly when the substrate contains high amounts of proteins. Copyright © 2016 Elsevier Ltd. All rights reserved.
Development of a Two-Stage Mars Ascent Vehicle Using In-Situ Propellant Production
NASA Technical Reports Server (NTRS)
Paxton, Laurel; Vaughan, David
2014-01-01
Mars Sample Return (MSR) and Mars In-Situ Resource Utilization (ISRU) present two main challenges for the advancement of Mars science. MSR would demonstrate Mars lift-off capability, while ISRU would test the ability to produce fuel and oxidizer using Martian resources, a crucial step for future human missions. A two-stage Mars Ascent Vehicle (MAV) concept was developed to support sample return as well as in-situ propellant production. The MAV would be powered by a solid rocket first stage and a LOX-propane second stage. A liquid second-stage provides higher orbit insertion reliability than a solid second stage as well as a degree of complexity eventually required for manned missions. Propane in particular offers comparable performance to methane without requiring cryogenic storage. The total MAV mass would be 119.9 kg to carry an 11 kg payload to orbit. The feasibility of in-situ fuel and oxidizer production was also examined. Two potential schemes were evaluated for production capability, size and power requirements. The schemes examined utilize CO2 and water as starting blocks to produce LOX and a propane blend. The infrastructure required to fuel and launch the MAV was also explored.
Grigor'eva, L A; Markov, A V
2011-01-01
PCR identification of host DNA in unfed females and males of the taiga tick Ixodes persulcatus was performed. Each sample was amplified using species-specific primers for the mitochondrial 12S rDNA gene. Four species of small mammals (Apodemus uralensis, Clethrionomys glareolus, Microtus arvalis, and Sorex araneus) and two passeriform bird species (Fringilla coelebs and Parus major) were analysed. Using this method, the hosts of previous life stages were established for one third of the tick samples. In five cases, feeding on more than one host species was detected.
Note: A wide temperature range MOKE system with annealing capability.
Chahil, Narpinder Singh; Mankey, G J
2017-07-01
A novel sample stage integrated with a longitudinal MOKE system has been developed for wide temperature range measurements and annealing capabilities in the temperature range 65 K < T < 760 K. The sample stage incorporates a removable platen and copper block with inserted cartridge heater and two thermocouple sensors. It is supported and thermally coupled to a cold finger with two sapphire bars. The sapphire based thermal coupling enables the system to perform at higher temperatures without adversely affecting the cryostat and minimizes thermal drift in position. In this system the hysteresis loops of magnetic samples can be measured simultaneously while annealing the sample in a magnetic field.
Perinetti, G; Perillo, L; Franchi, L; Di Lenarda, R; Contardo, L
2014-11-01
Diagnostic agreement on an individual basis between the third middle phalanx maturation (MPM) method and the cervical vertebral maturation (CVM) method has previously been inferred mainly from overall correlation analyses. Herein, the true agreement between methods according to stage and sex has been evaluated through a comprehensive diagnostic performance analysis. Four hundred and fifty-one Caucasian subjects were included in the study, 231 females and 220 males (mean age, 12.2 ± 2.5 years; range, 7.0-17.9 years). The X-rays of the middle phalanx of the third finger and the lateral cephalograms were examined for staging by operators blinded to the MPM stages and to subjects' age. The MPM and CVM methods based on six stages, two pre-pubertal (1 and 2), two pubertal (3 and 4), and two post-pubertal (5 and 6), were considered. Specifically, for each MPM stage, the diagnostic performance in the identification of the corresponding CVM stage was described by Bayesian statistics. For both sexes, overall agreement was 77.6%. Most of the disagreement was due to stages one apart. Slight disagreement was seen for stages 5 and 6, where the third middle phalanx shows an earlier maturation. The two maturational methods show an overall satisfactory diagnostic agreement. However, at post-pubertal stages, the middle phalanx of the third finger appears to mature earlier than the cervical vertebrae. The post-pubertal growth phase should thus be identified based on the presence of stage 6 in MPM. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
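The per-stage diagnostic performance "described by Bayesian statistics" can be sketched as sensitivity, specificity, and predictive values from a 2×2 table of MPM stage vs. the corresponding CVM stage. The counts below are illustrative, not the study's data:

```python
# Hedged sketch: diagnostic performance of an MPM stage for identifying
# the corresponding CVM stage, from an invented 2x2 table.

def diagnostic_performance(tp, fp, fn, tn):
    sens = tp / (tp + fn)      # P(MPM stage | CVM stage)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)       # P(CVM stage | MPM stage), via Bayes' rule
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

# e.g. MPM stage 3 vs. CVM stage 3 in a hypothetical subsample of 100
sens, spec, ppv, npv = diagnostic_performance(tp=28, fp=6, fn=8, tn=58)
print(round(sens, 2), round(spec, 2), round(ppv, 2), round(npv, 2))
```

The predictive values are the clinically relevant quantities here, since the orthodontist observes the MPM stage and wants the probability of the corresponding CVM stage.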
Whitehead, John; Valdés-Márquez, Elsa; Lissmats, Agneta
2009-01-01
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail. Copyright 2008 John Wiley & Sons, Ltd.
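The interim sample-size review described above can be sketched with the standard normal-approximation formula for a two-arm comparison of means; the standard deviation and target difference below are illustrative, not the values from the diabetic neuropathic pain trial:

```python
from math import ceil

# Standard normal quantiles for two-sided alpha = 0.05 and 90% power
Z_ALPHA = 1.959964  # Phi^{-1}(0.975)
Z_BETA = 1.281552   # Phi^{-1}(0.90)

def per_group_n(sd, delta, z_alpha=Z_ALPHA, z_beta=Z_BETA):
    """Normal-approximation per-group sample size for a two-arm
    comparison of means with common SD `sd` and target difference `delta`."""
    return ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Design stage: assumed SD = 10, clinically relevant difference = 5
n_planned = per_group_n(sd=10, delta=5)   # 85 per group
# Interim review: observed SD = 13 -> the total sample size is revised upward
n_revised = per_group_n(sd=13, delta=5)   # 143 per group
```

This is the mechanism motivating the review: doubt about the standard deviation translates directly into doubt about the required sample size.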
An adaptive two-stage dose-response design method for establishing proof of concept.
Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R
2013-01-01
We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
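Combining stage-wise test results while protecting the overall type I error is the core of such designs. The paper uses a conditional error function; the closely related inverse-normal combination shown here illustrates the idea (weights and p-values are hypothetical, and the normal quantile is computed by bisection to keep the sketch dependency-free):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def phi_inv(p, lo=-10.0, hi=10.0):
    """Standard normal quantile via bisection (adequate for a sketch)."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def inverse_normal_combination(p1, p2, w1=0.5):
    """Combine two one-sided stage-wise p-values with weights w1, 1 - w1."""
    z = sqrt(w1) * phi_inv(1.0 - p1) + sqrt(1.0 - w1) * phi_inv(1.0 - p2)
    return 1.0 - phi(z)  # combined p-value

p_combined = inverse_normal_combination(0.04, 0.03)
```

Because the stage weights are fixed in advance, adaptations between stages (adding or dropping arms) do not inflate the "global" type I error of the combined test.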
Lee, Minjung; Dignam, James J.; Han, Junhee
2014-01-01
We propose a nonparametric approach for cumulative incidence estimation when causes of failure are unknown or missing for some subjects. Under the missing at random assumption, we estimate the cumulative incidence function using multiple imputation methods. We develop asymptotic theory for the cumulative incidence estimators obtained from multiple imputation methods. We also discuss how to construct confidence intervals for the cumulative incidence function and perform a test for comparing the cumulative incidence functions in two samples with missing cause of failure. Through simulation studies, we show that the proposed methods perform well. The methods are illustrated with data from a randomized clinical trial in early stage breast cancer. PMID:25043107
Nested methylation-specific polymerase chain reaction cancer detection method
Belinsky, Steven A [Albuquerque, NM; Palmisano, William A [Edgewood, NM
2007-05-08
A molecular marker-based method for monitoring and detecting cancer in humans. Aberrant methylation of gene promoters is a marker for cancer risk in humans. A two-stage, or "nested" polymerase chain reaction method is disclosed for detecting methylated DNA sequences at sufficiently high levels of sensitivity to permit cancer screening in biological fluid samples, such as sputum, obtained non-invasively. The method is for detecting the aberrant methylation of the p16 gene, O6-methylguanine-DNA methyltransferase gene, Death-associated protein kinase gene, RAS-associated family 1 gene, or other gene promoters. The method offers a potentially powerful approach to population-based screening for the detection of lung and other cancers.
Spring frost vulnerability of sweet cherries under controlled conditions
NASA Astrophysics Data System (ADS)
Matzneller, Philipp; Götz, Klaus-P.; Chmielewski, Frank-M.
2016-01-01
Spring frost is a significant production hazard in nearly all temperate fruit-growing regions. Sweet cherries are among the first fruit varieties to start their development in spring and are therefore highly susceptible to late frost. Temperatures at which injuries are likely to occur are widely published, but their origin and determination methods are not well documented. In this study, a standardized method was used to investigate critical frost temperatures for the sweet cherry cultivar 'Summit' under controlled conditions. Twigs were sampled at four development stages ("side green," "green tip," "open cluster," "full bloom") and subjected to three frost temperatures (-2.5, -5.0, -10.0 °C). The main advantage of this method, compared to other approaches, was that the exposure period and the time interval required to reach the target temperature were always constant (2 h). The twigs were then placed in a climate chamber until full bloom, before the flowers and the buds that had not developed further were examined. For the first two sampling stages (side green, green tip), the number of buds found in open cluster, "first white," and full bloom at the evaluation date decreased with the strength of the frost treatment. The flower organs showed different levels of cold hardiness and became more vulnerable at more advanced development stages. We developed four empirical functions that allow possible frost damage to sweet cherry buds or flowers to be calculated at the investigated development stages. These equations can help farmers estimate possible frost damage to cherry buds. However, it is necessary to validate the critical temperatures obtained in the laboratory against field observations.
NASA Astrophysics Data System (ADS)
Irfiana, D.; Utami, R.; Khasanah, L. U.; Manuhara, G. J.
2017-04-01
The purpose of this study was to determine the effect of two-stage cinnamon bark oleoresin microcapsules (0%, 0.5%, and 1%) on the TPC (Total Plate Count), TBA (thiobarbituric acid), pH, and RGB color (Red, Green, and Blue) of vacuum-packed ground beef during refrigerated storage (at 0, 4, 8, 12, and 16 days). This study showed that the addition of two-stage cinnamon bark oleoresin microcapsules affected the quality of vacuum-packed ground beef during 16 days of refrigerated storage. The results showed that the TPC values of the vacuum-packed ground beef samples with 0.5% and 1% microcapsules were lower than that of the control sample. The TPC values of the control sample and the samples with 0.5% and 1% microcapsules were 5.94, 5.46, and 5.16 log CFU/g, respectively. The TBA values of the vacuum-packed ground beef were 0.055, 0.041, and 0.044 mg malonaldehyde/kg, respectively, on the 16th day of storage. The addition of two-stage cinnamon bark oleoresin microcapsules could inhibit the growth of microbes and slow the oxidation process of vacuum-packed ground beef. Moreover, the changes in pH and RGB color of the vacuum-packed ground beef with 0.5% and 1% microcapsules were smaller than those of the control sample. The addition of 1% microcapsules showed the best effect in preserving the vacuum-packed ground beef.
NASA Astrophysics Data System (ADS)
Haq, Quazi M. I.; Mabood, Fazal; Naureen, Zakira; Al-Harrasi, Ahmed; Gilani, Sayed A.; Hussain, Javid; Jabeen, Farah; Khan, Ajmal; Al-Sabari, Ruqaya S. M.; Al-khanbashi, Fatema H. S.; Al-Fahdi, Amira A. M.; Al-Zaabi, Ahoud K. A.; Al-Shuraiqi, Fatma A. M.; Al-Bahaisi, Iman M.
2018-06-01
Nucleic acid- and serology-based methods have revolutionized plant disease detection; however, they are not very reliable at the asymptomatic stage, especially for pathogens with systemic infection, and they need at least 1-2 days for sample harvesting, processing, and analysis. In this study, two reflectance spectroscopies, near-infrared reflectance spectroscopy (NIR) and Fourier-transform infrared spectroscopy with attenuated total reflection (FT-IR, ATR), coupled with the multivariate exploratory methods principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA), were deployed to detect begomovirus infection in papaya leaves. The application of these techniques demonstrates that they are very useful for robust in vivo detection of plant begomovirus infection. These methods are simple, sensitive, reproducible, and precise, and do not require any lengthy sample preparation procedures.
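The multivariate step described above can be illustrated with a PCA projection of spectra via SVD of the mean-centred data matrix. The toy spectra below are synthetic (a simulated absorbance shift in "infected" samples), purely to show the mechanics behind PCA/PLS-DA-style discrimination:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project spectra (rows = samples, cols = wavenumbers) onto the
    leading principal components via SVD of the mean-centred matrix."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # sample scores

# Hypothetical toy spectra: 10 healthy and 10 "infected" leaves, 50 channels
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 0.05, size=(10, 50))
infected = rng.normal(0.0, 0.05, size=(10, 50))
infected[:, 20:30] += 0.5  # simulated absorbance shift in infected leaves
scores = pca_scores(np.vstack([healthy, infected]))
# The two groups separate along PC1, which a PLS-DA model would then
# turn into an explicit class-discrimination rule.
```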
Olekšáková, Tereza; Žurovcová, Martina; Klimešová, Vanda; Barták, Miroslav; Šuláková, Hana
2018-04-01
Several methods of DNA extraction, coupled with 'DNA barcoding' species identification, were compared using specimens from early developmental stages of forensically important flies from the Calliphoridae and Sarcophagidae families. DNA was extracted at three immature stages - eggs, the first instar larvae, and empty pupal cases (puparia) - using four different extraction methods, namely, one simple 'homemade' extraction buffer protocol and three commercial kits. The extraction conditions, including the amount of proteinase K and incubation times, were optimized. The simple extraction buffer method was successful for half of the egg and first instar larva samples. The DNA Lego Kit and DEP-25 DNA Extraction Kit were useful for DNA extraction from the first instar larva samples, and the DNA Lego Kit was also successful for extraction from eggs. The QIAamp DNA mini kit was the most effective; the extraction was successful for all sample types - eggs, larvae, and puparia.
Kim, Da-Hye; Oh, Jeong-Eun
2017-05-01
Human hair has many advantages as a non-invasive sample; however, analytical methods for detecting perfluoroalkyl substances (PFASs) in human hair are still in the development stage. Therefore, the aim of this study was to develop and validate a method for monitoring 11 PFASs in human hair. Solid-phase extraction (SPE), ion-pairing extraction (IPE), a combined method (SPE+IPE) and solvent extraction with ENVI-carb clean-up were compared to develop an optimal extraction method using two types of hair sample (powder and piece forms). Analysis of PFASs was performed using liquid chromatography and tandem mass spectrometry. Among the four different extraction procedures, the SPE method using powdered hair showed the best extraction efficiency and recoveries ranged from 85.8 to 102%. The method detection limits for the SPE method were 0.114-0.796 ng/g and good precision (below 10%) and accuracy (66.4-110%) were obtained. In light of these results, SPE is considered the optimal method for PFAS extraction from hair. It was also successfully used to detect PFASs in human hair samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Two-stage Improvement Method for Robot Based 3D Surface Scanning
NASA Astrophysics Data System (ADS)
He, F. B.; Liang, Y. D.; Wang, R. F.; Lin, Y. S.
2018-03-01
The surface of an unknown object is difficult to measure or recognize precisely, so 3D laser scanning technology has been introduced and used in surface reconstruction. Usually, slower surface scanning yields better quality, while faster scanning yields worse quality. This paper therefore presents a new two-stage scanning method that pursues surface-scanning quality at a faster speed. The first stage is a rough scan to obtain general point cloud data of the object's surface; the second stage is a specific scan to repair missing regions, which are determined by a chord-length discretization method. A system comprising a robotic manipulator and a handheld scanner was also developed to implement the two-stage scanning method, and the relevant paths were planned according to minimum-enclosing-ball and regional-coverage theories.
NASA Astrophysics Data System (ADS)
Romanelli, Maurizio; Di Benedetto, Francesco; Fornaciai, Gabriele; Innocenti, Massimo; Montegrossi, Giordano; Pardi, Luca A.; Zoleo, Alfonso; Capacci, Fabio
2015-05-01
A study is undertaken to ascertain whether changes in the speciation of inorganic radicals are occurring during the ceramic industrial production that involves abundant silica powders as raw material. Industrial dusts were sampled in two ceramic firms, immediately after the wet mixing stage, performed with the aid of a relevant pressure. The dusts were then characterised by means of X-ray diffraction, analysis of the trace elements through chemical methods, granulometry, continuous-wave electron paramagnetic resonance (EPR) and pulsed electron spin echo envelope modulation (ESEEM) spectroscopies. The results of the characterisation point to a relevant change in the speciation of the two samples; namely, a prevailing contribution due to an inorganic radical different from that pertaining to pure quartz is pointed out. The combined interpretation of EPR and ESEEM data suggests the attribution of the main paramagnetic contribution to the A-centre in kaolinite, a constituent that is added to pure quartz at the initial stage of the ceramic production. In one of the two samples, a second weak EPR signal is attributed to the quartz's hAl species. By taking into account the relative quantities of quartz and kaolinite mixed in the two samples, and the relative abundances of the two radical species, we propose that the partial or complete suppression of the hAl species in favour of the A-centre of kaolinite has occurred. Although this change is apparently fostered by the mixture between quartz and another radical-bearing raw material, kaolinite, the suppression of the hAl centre of quartz is ascribed to the role played by the pressure and the wet environment during the industrial mixing procedure. This suppression provides a net change of radical speciation associated with quartz, when this phase is in contact with workers' respiratory system.
Kumar, Abhishek; Christensen, Ryan; Guo, Min; Chandris, Panos; Duncan, William; Wu, Yicong; Santella, Anthony; Moyle, Mark; Winter, Peter W.; Colón-Ramos, Daniel; Bao, Zhirong; Shroff, Hari
2017-01-01
Dual-view inverted selective plane illumination microscopy (diSPIM) enables high-speed, long-term, four-dimensional (4D) imaging with isotropic spatial resolution. It is also compatible with conventional sample mounting on glass coverslips. However, broadening of the light sheet at distances far from the beam waist and sample-induced scattering degrade diSPIM contrast and optical sectioning. We describe two simple improvements that address both issues and entail no additional hardware modifications to the base diSPIM. First, we demonstrate improved diSPIM sectioning by keeping the light sheet and detection optics stationary and scanning the sample through the stationary light sheet (rather than scanning the broadening light sheet and detection plane through the stationary sample, as in conventional diSPIM). This stage-scanning approach allows a thinner sheet to be used when imaging laterally extended samples, such as fixed microtubules or motile mitochondria in cell monolayers, and produces finer contrast than does conventional diSPIM. We also used stage-scanning diSPIM to obtain high-quality, 4D nuclear datasets derived from an uncompressed nematode embryo, and performed lineaging analysis to track 97% of cells until twitching. Second, we describe the improvement of contrast in thick, scattering specimens by synchronizing light-sheet synthesis with the rolling electronic shutter of our scientific complementary metal-oxide-semiconductor (sCMOS) detector. This maneuver forms a virtual confocal slit in the detection path, partially removing out-of-focus light. We demonstrate the applicability of our combined stage- and slit-scanning methods by imaging pollen grains and nuclear and neuronal structures in live nematode embryos. All acquisition and analysis code is freely available online. PMID:27638693
Pubertal stage and the prevalence of violence and social relational aggression
Hemphill, Sheryl A.; Kotevski, Aneta; Herrenkohl, Todd I.; Toumbourou, John W.; Carlin, John B.; Catalano, Richard F.; Patton, George C.
2010-01-01
Objective Violence and social relational aggression are global problems that become prominent in early adolescence. This study examines associations between pubertal stage and adolescent violent behavior and social relational aggression. Methods This paper draws on cross-sectional data from the International Youth Development Study (IYDS), which comprised two state-wide representative samples of students in grades 5, 7 and 9 (N = 5,769) in Washington State in the United States and Victoria, Australia, drawn as a two-stage cluster sample in each state. The study used carefully matched methods to conduct a school-administered, self-report student survey measuring behavioral outcomes including past-year violent behavior (measured as attacking or beating up another person) and social relational aggression (excluding peers from the group, threatening to spread lies or rumors), as well as a comprehensive range of risk and protective factors and pubertal development. Results Compared with early puberty, the odds of violent behavior were approximately three-fold higher in mid-puberty (odds ratio [OR]: 2.87; 95% confidence interval [CI]: 1.81,4.55) and late puberty (OR: 3.79; 95% CI: 2.25,6.39), after adjustment for age, gender, state, and state by gender interaction. For social relational aggression, there were weaker overall associations after adjustment, but these included an interaction between pubertal stage and age, showing stronger associations with pubertal stage at younger ages (p = .003; mid-puberty OR: 1.78; 95% CI: 1.20,2.63; late puberty OR: 3.00; 95% CI: 1.95,4.63). Associations between pubertal stage and violent behavior and social relational aggression remained (although the magnitude of effects was reduced) after the inclusion of social contextual mediators in the analyses. Conclusions Pubertal stage was associated with higher rates of violent behavior and social relational aggression, with the latter association seen only at younger ages. Puberty may be an important phase for interventions aimed at preventing the adolescent rise in violent and antisocial behaviors. PMID:20624807
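The odds ratios with 95% confidence intervals quoted above come from logistic regression; for a single binary exposure the unadjusted version reduces to a 2×2-table calculation with a Wald interval on the log scale. A sketch with hypothetical counts, not the IYDS data:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: violence among mid-pubertal vs. early-pubertal students
or_, lo, hi = odds_ratio_ci(a=60, b=940, c=25, d=975)
```

Adjustment for age, gender, and state would require the full logistic model; this sketch shows only the crude association.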
Picking Deep Filter Responses for Fine-Grained Image Recognition (Open Access Author’s Manuscript)
2016-12-16
stages. Our method explores a unified framework based on two steps of deep filter response picking. The first picking step is to find distinctive filters which respond to specific patterns significantly and consistently, and learn a set of part detectors via iteratively alternating between new positive sample mining and part model retraining. The second picking step is to pool deep filter responses via spatially weighted combination of Fisher
Perinetti, G; Baccetti, T; Contardo, L; Di Lenarda, R
2011-02-01
To evaluate the gingival crevicular fluid (GCF) alkaline phosphatase (ALP) activity in growing subjects in relation to the stages of individual skeletal maturation. The Department of Biomedicine, University of Trieste. Seventy-two healthy growing subjects (45 women and 27 men; range, 7.8-17.7 years). Double-blind, prospective, cross-sectional design. Samples of GCF were collected from each subject at the mesial and distal sites of both of the central incisors, in the maxilla and mandible. Skeletal maturation phase was assessed through the cervical vertebral maturation (CVM) method. Enzymatic activity was determined spectrophotometrically. The relationship between GCF ALP activity and CVM stages was significant. In particular, a twofold peak in enzyme activity was seen at the CS3 and CS4 pubertal stages, compared to the pre-pubertal stages (CS1 and CS2) and post-pubertal stages (CS5 and CS6), at both the maxillary and mandibular sites. No differences were seen between the maxillary and mandibular sites, or between the sexes. As an adjunct to standard methods based upon radiographic parameters, the GCF ALP may be a candidate as a non-invasive clinical biomarker for the identification of the pubertal growth spurt in periodontally healthy subjects scheduled for orthodontic treatment. © 2010 John Wiley & Sons A/S.
Barni, María Florencia Silva; Gonzalez, Mariana; Miglioranza, Karina S B
2014-01-01
Persistent organic pollutants (POPs) in streamwater can sometimes exceed the guideline values reported for biota and human protection in watersheds with intensive agriculture. Oxidative stress and cytotoxicity are some of the markers of exposure to POPs in fish. Accumulation of organochlorine pesticides (OCPs), polychlorinated biphenyls (PCBs), and polybrominated diphenyl ethers (PBDEs), as well as lipid peroxidation (LPO), was assessed in wild silverside (Odontesthes bonariensis) from the maturation and pre-spawning stages sampled in a typical soybean-growing area. Pollutants were quantified by gas chromatography with electron capture detection and LPO by the thiobarbituric acid reactive substances method. Concentrations of POPs were in the following order: OCPs>PCBs>PBDEs in all organs and stages. Liver, gills, and gonads had the highest OCP concentrations in both sexes and stages, with a predominance of endosulfan in all samples. Mature individuals, sampled after the endosulfan application period, showed higher endosulfan concentrations than pre-spawning individuals. The predominance of endosulfan sulfate could be due to direct uptake from diet and the water column, as well as to the metabolism of the parent compounds in fish. The prevalence of p,p'-DDE in liver would also reflect both the direct uptake and the metabolic transformation of p,p'-DDT to p,p'-DDE by fish. The highest levels of PBDEs and PCBs were found in gills and brain at both stages of growth. The pattern BDE-47>BDE-100 in all samples corresponds to pentaBDE exposure. In the case of PCBs, penta- (#101 and 110) and hexa-CB congeners (#153 and 138) dominated in the maturation stage and tri- (#18) and tetra-CB congeners (#44 and 52) in the pre-spawning stage, suggesting biotransformation or preferential accumulation of heavier congeners during gonadal development. Differences in LPO levels in ovaries were associated with growth dilution and reproductive stage. Differences in LPO levels in gills were related to pesticide application periods. As a whole, endosulfan, a current-use pesticide, constituted the main pollutant found in wild silverside, reflecting the intense agricultural activity in the study area. Moreover, endosulfan was positively correlated with LPO. © 2013 Published by Elsevier Inc.
Soulakova, Julia N; Bright, Brianna C
2013-01-01
A large-sample problem of demonstrating noninferiority of an experimental treatment relative to a referent treatment for binary outcomes is considered. The methods involve constructing the lower two-sided confidence bound for the difference between the binomial proportions corresponding to the experimental and referent treatments and comparing it with the negative value of the noninferiority margin. The three considered methods, Anbar, Falk-Koch, and Reduced Falk-Koch, handle the comparison in an asymmetric way; that is, of the experimental and referent proportions, only the referent proportion is directly involved in the expression for the variance of the difference between the two sample proportions. Five continuity corrections (including zero) are considered with respect to each approach. The key properties of the corresponding methods are evaluated via simulations. First, the uncorrected two-sided confidence intervals can, potentially, have smaller coverage probability than the nominal level even for moderately large sample sizes, for example, 150 per group. Next, the 15 testing methods are discussed in terms of their Type I error rate and power. In settings with a relatively small referent proportion (about 0.4 or smaller), the Anbar approach with Yates' continuity correction is recommended for balanced designs and the Falk-Koch method with Yates' correction is recommended for unbalanced designs. For relatively moderate (about 0.6) and large (about 0.8 or greater) referent proportions, the uncorrected Reduced Falk-Koch method is recommended, although in this case all methods tend to be over-conservative. These results are expected to be used in the design stage of a noninferiority study when asymmetric comparisons are envisioned. Copyright © 2013 John Wiley & Sons, Ltd.
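The general shape of these noninferiority tests can be sketched with a plain Wald lower bound and a Yates-style continuity correction. Note this is a generic sketch: the Anbar and Falk-Koch variants differ precisely in how the variance is estimated (using only the referent proportion), which is not reproduced here, and the counts are hypothetical:

```python
from math import sqrt

def noninferior(x_exp, n_exp, x_ref, n_ref, margin, z=1.96, cc=True):
    """Declare noninferiority if the lower two-sided confidence bound for
    p_exp - p_ref exceeds -margin.  Wald variance with an optional
    Yates-style continuity correction; a sketch, not the exact
    Anbar / Falk-Koch variance expressions."""
    p1, p2 = x_exp / n_exp, x_ref / n_ref
    se = sqrt(p1 * (1 - p1) / n_exp + p2 * (1 - p2) / n_ref)
    corr = (1 / n_exp + 1 / n_ref) / 2 if cc else 0.0
    lower = (p1 - p2) - z * se - corr
    return lower > -margin, lower

# Experimental 130/150 vs. referent 132/150, margin 0.10
ok, lower = noninferior(x_exp=130, n_exp=150, x_ref=132, n_ref=150, margin=0.10)
```

The continuity correction widens the interval, making the test more conservative, which is the trade-off the simulations in the abstract quantify.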
Larson, Nicholas B; Fogarty, Zachary C; Larson, Melissa C; Kalli, Kimberly R; Lawrenson, Kate; Gayther, Simon; Fridley, Brooke L; Goode, Ellen L; Winham, Stacey J
2017-12-01
X-chromosome inactivation (XCI) epigenetically silences transcription of an X chromosome in females; patterns of XCI are thought to be aberrant in women's cancers, but are understudied due to statistical challenges. We develop a two-stage statistical framework to assess skewed XCI and evaluate gene-level patterns of XCI for an individual sample by integration of RNA sequence, copy number alteration, and genotype data. Our method relies on allele-specific expression (ASE) to directly measure XCI and does not rely on male samples or paired normal tissue for comparison. We model ASE using a two-component mixture of beta distributions, allowing estimation for a given sample of the degree of skewness (based on a composite likelihood ratio test) and the posterior probability that a given gene escapes XCI (using a Bayesian beta-binomial mixture model). To illustrate the utility of our approach, we applied these methods to data from tumors of ovarian cancer patients. Among 99 patients, 45 tumors were informative for analysis and showed evidence of XCI skewed toward a particular parental chromosome. For 397 X-linked genes, we observed tumor XCI patterns largely consistent with previously identified consensus states based on multiple normal tissue types. However, 37 genes differed in XCI state between ovarian tumors and the consensus state; 17 genes aberrantly escaped XCI in ovarian tumors (including many oncogenes), whereas 20 genes were unexpectedly inactivated in ovarian tumors (including many tumor suppressor genes). These results provide evidence of the importance of XCI in ovarian cancer and demonstrate the utility of our two-stage analysis. © 2017 WILEY PERIODICALS, INC.
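The two-component mixture idea behind the escape-probability estimate can be sketched as follows: given a gene's reference-allele read count, compare a balanced ("escape") beta-binomial component against a skewed ("inactivated") component. The component parameters and prior below are illustrative, not the paper's fitted values:

```python
from math import exp, lgamma

def log_beta_binom(x, n, a, b):
    """Log beta-binomial pmf: x successes out of n with Beta(a, b) mixing."""
    def lbeta(p, q):
        return lgamma(p) + lgamma(q) - lgamma(p + q)
    return (lgamma(n + 1) - lgamma(x + 1) - lgamma(n - x + 1)
            + lbeta(x + a, n - x + b) - lbeta(a, b))

def posterior_escape(x, n, prior_escape=0.25):
    """Posterior probability that a gene escapes XCI, comparing a balanced
    ASE component Beta(20, 20) against a skewed component Beta(1, 10).
    All parameters are illustrative."""
    w_esc = prior_escape * exp(log_beta_binom(x, n, 20, 20))
    w_ina = (1 - prior_escape) * exp(log_beta_binom(x, n, 1, 10))
    return w_esc / (w_esc + w_ina)

# Balanced allele-specific expression (48 of 100 reads) -> likely escape
p_bal = posterior_escape(48, 100)
# Heavily skewed expression (3 of 100 reads) -> likely inactivated
p_skew = posterior_escape(3, 100)
```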
Staging Liver Fibrosis with Statistical Observers
NASA Astrophysics Data System (ADS)
Brand, Jonathan Frieman
Chronic liver disease is a worldwide health problem, and hepatic fibrosis (HF) is one of the hallmarks of the disease. Pathology diagnosis of HF is based on textural change in the liver, as a lobular collagen network develops within portal triads. The scale of the collagen lobules is characteristically on the order of 1 mm, which is close to the resolution limit of in vivo Gd-enhanced MRI. In this work the methods to collect training and testing images for a Hotelling observer are covered. An observer based on local texture analysis is trained and tested using wet-tissue phantoms. The technique is used to optimize the MRI sequence based on task performance. The final method developed is a two-stage model observer to classify fibrotic and healthy tissue in both phantoms and in vivo MRI images. The first-stage observer tests for the presence of local texture. Test statistics from the first observer are used to train the second-stage observer to globally sample the local observer results. A decision on the disease class is made for an entire MRI image slice using test statistics collected from the second observer. The techniques are tested on wet-tissue phantoms and in vivo clinical patient data.
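For two classes with known statistics, the Hotelling observer reduces to a linear template w = S⁻¹(mean₁ − mean₂) applied as a scalar test statistic t = w·g. A minimal numpy sketch with synthetic "texture present/absent" image vectors; the dimensions and class statistics are illustrative, not the phantom data:

```python
import numpy as np

def hotelling_template(g1, g2):
    """Hotelling observer template w = S^{-1} (mean1 - mean2), where S is
    the average within-class covariance of the training image vectors."""
    m1, m2 = g1.mean(axis=0), g2.mean(axis=0)
    S = 0.5 * (np.cov(g1, rowvar=False) + np.cov(g2, rowvar=False))
    return np.linalg.solve(S, m1 - m2)

# Synthetic training data: the "fibrotic" class has a mean texture offset
rng = np.random.default_rng(1)
dim, n = 16, 500
fibrotic = rng.normal(0.5, 1.0, size=(n, dim))
healthy = rng.normal(0.0, 1.0, size=(n, dim))
w = hotelling_template(fibrotic, healthy)

# Test statistic t = w . g for each image vector; a threshold on t
# separates the two classes
t_fib = fibrotic @ w
t_hea = healthy @ w
```

The second stage described in the abstract would then pool such local test statistics across an image slice before making the final class decision.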
El-Zein, Mariam; Conus, Florence; Benedetti, Andrea; Parent, Marie-Elise; Rousseau, Marie-Claude
2016-01-01
When using administrative databases for epidemiologic research, a subsample of subjects can be interviewed, eliciting information on undocumented confounders. This article presents a thorough investigation of the validity of a two-stage sample, encompassing an assessment of nonparticipation and quantification of the extent of bias. Established through record linkage of administrative databases, the Québec Birth Cohort on Immunity and Health (n = 81,496) aims to study the association between Bacillus Calmette-Guérin vaccination and asthma. Among 76,623 subjects classified in four Bacillus Calmette-Guérin-asthma strata, a two-stage sampling strategy with a balanced design was used to randomly select individuals for interviews. We compared stratum-specific sociodemographic characteristics and healthcare utilization of stage 2 participants (n = 1,643) with those of eligible nonparticipants (n = 74,980) and nonrespondents (n = 3,157). We used logistic regression to determine whether participation varied across strata according to these characteristics. The effect of nonparticipation was described by the relative odds ratio (ROR = OR_participants / OR_source population) for the association between sociodemographic characteristics and asthma. Parental age at childbirth, area of residence, family income, and healthcare utilization were comparable between groups. Participants were slightly more likely to be women and to have a mother born in Québec. Participation did not vary across strata by sex, parental birthplace, or material and social deprivation. Estimates were not biased by nonparticipation; most RORs were below one and bias never exceeded 20%. Our analyses evaluate and provide a detailed demonstration of the validity of a two-stage sample for researchers assembling similar research infrastructures.
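The relative odds ratio used above to quantify nonparticipation bias is simply the ratio of the subsample and source-population odds ratios. A minimal sketch with hypothetical 2×2 counts:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table (exposed/unexposed x outcome yes/no)."""
    return (a * d) / (b * c)

def relative_odds_ratio(participants, source):
    """ROR = OR_participants / OR_source; values near 1 indicate the
    participant subsample reproduces the source-population association."""
    return odds_ratio(*participants) / odds_ratio(*source)

# Hypothetical counts (exposed cases, exposed non-cases,
#                      unexposed cases, unexposed non-cases)
ror = relative_odds_ratio(participants=(40, 360, 30, 370),
                          source=(2000, 18000, 1500, 18500))
# |ROR - 1| < 0.2 would correspond to the "bias never exceeded 20%" criterion
```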
Li, Yongsheng; Zhang, Jinwen; Huo, Caiqin; Ding, Na; Li, Junyi; Xiao, Jun; Lin, Xiaoyu; Cai, Benzhi; Zhang, Yunpeng; Xu, Juan
2017-10-01
Advances in developmental cardiology have increased our understanding of the early aspects of heart differentiation. However, understanding noncoding RNA (ncRNA) transcription and regulation during this process remains elusive. Here, we constructed transcriptomes for both long noncoding RNAs (lncRNAs) and circular RNAs (circRNAs) in four important developmental stages ranging from early embryonic to cardiomyocyte based on high-throughput sequencing datasets, which indicate the high stage-specific expression patterns of two ncRNA types. Additionally, higher similarities of samples within each stage were found, highlighting the divergence of samples collected from distinct cardiac developmental stages. Next, we developed a method to identify numerous lncRNA and circRNA regulators whose expression was significantly stage-specific and shifted gradually and continuously during heart differentiation. We inferred that these ncRNAs are important for the stages of cardiac differentiation. Moreover, transcriptional regulation analysis revealed that the expression of stage-specific lncRNAs is controlled by known key stage-specific transcription factors (TFs). In addition, circRNAs exhibited dynamic expression patterns independent from their host genes. Functional enrichment analysis revealed that lncRNAs and circRNAs play critical roles in pathways that are activated specifically during heart differentiation. We further identified candidate TF-ncRNA-gene network modules for each differentiation stage, suggesting the dynamic organization of lncRNAs and circRNAs collectively controlled cardiac differentiation, which may cause heart-related diseases when defective. Our study provides a foundation for understanding the dynamic regulation of ncRNA transcriptomes during heart differentiation and identifies the dynamic organization of novel key lncRNAs and circRNAs to collectively control cardiac differentiation. Copyright © 2017. Published by Elsevier B.V.
Photovoltaic Enhancement with Ferroelectric HfO2 Embedded in the Structure of Solar Cells
NASA Astrophysics Data System (ADS)
Eskandari, Rahmatollah; Malkinski, Leszek
Enhancing the total efficiency of solar cells focuses on improving one or all of the three main stages of the photovoltaic effect: absorption of the light, generation of the carriers, and finally separation of the carriers. Ferroelectric photovoltaic designs target the last stage with large electric forces from polarized ferroelectric films that can be larger than the band gap of the material and the built-in electric fields in semiconductor bipolar junctions. In this project we fabricated very thin ferroelectric HfO2 films (~10 nm) doped with silicon using the RF sputtering method. Doped HfO2 films were capped between two TiN layers (~20 nm) and annealed at temperatures of 800 °C and 1000 °C, and Si content was varied between 6 and 10 mol % using different sizes of Si chip mounted on the hafnium target. Piezoforce microscopy (PFM) revealed clear ferroelectric properties in samples with 6 mol % of Si that were annealed at 800 °C. Ferroelectric samples were poled in opposite directions and embedded in the structure of a cell, and an enhancement in photovoltaic properties was observed on the poled samples versus unpoled ones with KPFM and I-V measurements. The current work is funded by the NSF EPSCoR LA-SiGMA project under award #EPS-1003897.
Griffiths, Ronald E.; Topping, David J.; Anderson, Robert S.; Hancock, Gregory S.; Melis, Theodore S.
2014-01-01
Management of sediment in rivers downstream from dams requires knowledge of both the sediment supply and downstream sediment transport. In some dam-regulated rivers, the amount of sediment supplied by easily measured major tributaries may overwhelm the amount of sediment supplied by the more difficult to measure lesser tributaries. In this first class of rivers, managers need only know the amount of sediment supplied by these major tributaries. However, in other regulated rivers, the cumulative amount of sediment supplied by the lesser tributaries may approach the total supplied by the major tributaries. The Colorado River downstream from Glen Canyon has been hypothesized to be one such river. If this is correct, then management of sediment in the Colorado River in the part of Glen Canyon National Recreation Area downstream from the dam and in Grand Canyon National Park may require knowledge of the sediment supply from all tributaries. Although two major tributaries, the Paria and Little Colorado Rivers, are well documented as the largest two suppliers of sediment to the Colorado River downstream from Glen Canyon Dam, the contributions of sediment supplied by the ephemeral lesser tributaries of the Colorado River in the lowermost Glen Canyon, and Marble and Grand Canyons are much less constrained. Previous studies have estimated amounts of sediment supplied by these tributaries ranging from very little to almost as much as the amount supplied by the Paria River. Because none of these previous studies relied on direct measurement of sediment transport in any of the ephemeral tributaries in Glen, Marble, or Grand Canyons, there may be significant errors in the magnitudes of sediment supplies estimated during these studies. To reduce the uncertainty in the sediment supply by better constraining the sediment yield of the ephemeral lesser tributaries, the U.S. 
Geological Survey Grand Canyon Monitoring and Research Center established eight sediment-monitoring gaging stations beginning in 2000 on the larger of the previously ungaged tributaries of the Colorado River downstream from Glen Canyon Dam. The sediment-monitoring gaging stations consist of a downward-looking stage sensor and passive suspended-sediment samplers. Two stations are equipped with automatic pump samplers to collect suspended-sediment samples during flood events. Directly measuring discharge and collecting suspended-sediment samples in these remote ephemeral streams during significant sediment-transporting events is nearly impossible; most significant run-off events are short-duration events (lasting minutes to hours) associated with summer thunderstorms. As the remote locations and short duration of these floods make it prohibitively expensive, if not impossible, to directly measure the discharge of water or collect traditional depth-integrated suspended-sediment samples, a method of calculating sediment loads was developed that includes documentation of stream stages by field instrumentation, modeling of discharges associated with these stages, and automatic suspended-sediment measurements. The approach developed is as follows: (1) survey and model flood high-water marks using a two-dimensional hydrodynamic model, (2) create a stage-discharge relation for each site by combining the modeled flood flows with the measured stage record, (3) calculate the discharge record for each site using the stage-discharge relation and the measured stage record, and (4) calculate the instantaneous and cumulative sediment loads using the discharge record and suspended-sediment concentrations measured from samples collected with passive US U-59 samplers and ISCO pump samplers. This paper presents the design of the gaging network and briefly describes the methods used to calculate discharge and sediment loads.
The design and methods herein can easily be used at other remote locations where discharge and sediment loads are required.
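Steps (3) and (4) of this approach reduce to interpolating discharge from the rating curve and integrating instantaneous loads over time. A minimal sketch under assumed units (the function names, rating table, and all numbers are illustrative, not taken from the study):

```python
import numpy as np

def discharge_from_stage(stage, rating_stage, rating_q):
    # Step (3): convert the measured stage record to discharge by
    # interpolating on the site's stage-discharge (rating) relation.
    return np.interp(stage, rating_stage, rating_q)

def sediment_load(t_s, q_m3s, conc_mg_per_l):
    # Step (4): instantaneous load (kg/s) = Q (m^3/s) * C (mg/L) / 1000,
    # since 1 mg/L equals 1 g/m^3.
    inst = q_m3s * conc_mg_per_l / 1000.0
    # Cumulative load (kg): trapezoidal integration over the time record.
    cum = float(np.sum((inst[1:] + inst[:-1]) / 2.0 * np.diff(t_s)))
    return inst, cum
```

A hypothetical rating of 0/10/40 m³/s at stages 0/1/2 m, a stage record of 0.5 and 1.5 m an hour apart, and a constant 100 mg/L concentration would yield instantaneous loads of 0.5 and 2.5 kg/s.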
Shiu, A T
1998-08-01
The study aimed to investigate the significance of sense of coherence (SOC) for the perceptions of task characteristics and for stress perceptions during interruptions of public health nurses (PHNs) with children in Hong Kong. The research design employed the experience sampling method. Convenience sampling was used to recruit 20 subjects. During stage one of the study a watch was worn that gave a signal at six random times each day for seven days to complete an experience sampling diary. PHNs on average responded to 34 signals (80%) to complete the diaries which collected data on work and family juggling, task characteristics, and their effects on mood states. At stage two respondents completed the SOC scale which measured confidence in life as comprehensible, manageable, and meaningful. Two major findings provide the focus for this paper. First, results indicate that there was positive correlation between SOC and perceived task characteristics. Second, results reveal that when interruptions occurred, PHNs with high SOC had higher positive affect and lower negative affect than PHNs with low SOC. These results suggest that SOC as a salutogenic model helps PHNs to cope with the family and work juggling as well as the occupational stress. Implications for nursing management on strengthening SOC of PHNs are discussed.
Palmer, N. D.; Langefeld, C. D.; Ziegler, J. T.; Hsu, F.; Haffner, S. M.; Fingerlin, T.; Norris, J. M.; Chen, Y. I.; Rich, S. S.; Haritunians, T.; Taylor, K. D.; Bergman, R. N.; Rotter, J. I.; Bowden, D. W.
2009-01-01
Aims/Hypothesis —The majority of type 2 diabetes Genome Wide Association Studies (GWAS) to date have been performed in European-derived populations and have identified few variants that mediate their effect through insulin resistance. The aim of this study was to evaluate two quantitative, directly assessed measures of insulin resistance (SI and DI) in Hispanic Americans using an agnostic, high-density SNP scan and validate these findings in additional samples. Methods —A two-stage GWAS was performed in IRAS-FS Hispanic-American samples. In Stage 1, 317K single nucleotide polymorphisms (SNPs) were assessed in 229 DNA samples. SNPs with evidence of association with glucose homeostasis and adiposity traits were then genotyped on the entire set of Hispanic-American samples (n=1190). This report focuses on the glucose homeostasis traits: insulin sensitivity index (SI) and disposition index (DI). Results —Although evidence of association did not reach genome-wide significance (P = 5×10^-7), in the combined analysis SNPs had admixture-adjusted P_ADD = 0.00010–0.0020 with 8–41% differences in genotypic means for SI and DI. Conclusions/Interpretation —Several candidate loci have been identified which are nominally associated with SI and/or DI in Hispanic Americans. Replication of these findings in independent cohorts and additional focused analysis of these loci is warranted. PMID:19902172
Carinhena, Glauber; Siqueira, Danilo Furquim; Sannomiya, Eduardo Kazuo
2014-01-01
Introduction This study was conducted with the aim of adapting the methods developed by Martins and Sakima to assess skeletal maturation by cervical vertebrae on the pubertal growth spurt (PGS) curve. It also aimed to test the reliability of, and agreement between, those methods and the hand-wrist radiograph method when compared two by two and all together. Methods The sample comprised 72 radiographs, 36 lateral radiographs of the head and 36 hand-wrist radiographs, of 36 subjects with Down syndrome (DS), 13 female and 23 male, aged between 8 years and 6 months and 18 years and 7 months, with an average age of 13 years and 10 months. Results and Conclusions Results revealed that adapting the methods developed by Martins and Sakima to assess skeletal maturation by cervical vertebrae on the PGS curve is practical and useful in determining the stage of growth and development of individuals. The stages of maturation evaluated by cervical vertebrae and by ossification centers observed in hand-wrist radiographs were considered reliable, with an excellent level of agreement between the methods of Hassel and Farman, of Baccetti, Franchi and McNamara Jr, and of Martins and Sakima. Additionally, results revealed agreement ranging from reasonable to good for the three methods used to assess skeletal maturation, with statistical significance. PMID:25279522
Kanık, Emine Arzu; Temel, Gülhan Orekici; Erdoğan, Semra; Kaya, Irem Ersöz
2013-03-01
The aim of this study is to introduce the method of Soft Independent Modeling of Class Analogy (SIMCA) and to determine whether the method is affected by the number of independent variables, the relationship between variables, and sample size. Simulation study. The SIMCA model is performed in two stages. To determine whether the method is influenced by the number of independent variables, the relationship between variables, and sample size, simulations were carried out. Conditions were considered in which sample sizes in both groups are equal, with 30, 100, and 1000 samples; in which the number of variables is 2, 3, 5, 10, 50, and 100; and in which the relationship between variables is quite high, medium, or quite low. Average classification accuracy from simulations repeated 1000 times for each condition of the trial plan is given in tables. Diagnostic accuracy increases as the number of independent variables increases. SIMCA is suited to conditions in which the relationship between variables is quite high, the number of independent variables is large, and the data contain outlier values.
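The two stages of SIMCA can be illustrated concisely: fit a separate principal-component model for each class, then assign a new sample to the class whose model leaves the smallest unexplained residual. A minimal sketch, not the authors' simulation code (class labels and data are invented):

```python
import numpy as np

def fit_class_model(X, n_comp=1):
    # Stage 1 of SIMCA: build a separate principal-component model per
    # class (class mean plus the leading loading vectors from an SVD).
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_comp]

def residual(x, model):
    # Stage 2: distance from a sample to a class model = norm of the
    # part of (x - mean) not explained by the retained components.
    mu, V = model
    d = x - mu
    return np.linalg.norm(d - (d @ V.T) @ V)

def classify(x, models):
    # Assign the sample to the class with the smallest residual.
    return min(models, key=lambda k: residual(x, models[k]))
```

With class "A" spread along the first axis and class "B" along the second, a point near the first axis is assigned to "A".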
Fogt-Wyrwas, R; Jarosz, W; Mizgajska-Wiktor, H
2007-03-01
A polymerase chain reaction (PCR) technique has been used for the differentiation of T. canis and T. cati eggs isolated from soil and previously identified by microscopical observation. The method, using specific primers for the identification of the two Toxocara species, was assessed in both the field and the laboratory. Successful results were obtained whether a single egg or large numbers of eggs were recovered from 40 g soil samples. The method is sensitive, allows analysis of material independent of the stage of egg development, and can be adapted for the recovery of other species of parasites from soil.
Donath, Ernest E.
1976-01-01
A method and apparatus for removing oversized, unentrained char particles from a two-stage coal gasification process so as to prevent clogging or plugging of the communicating passage between the two gasification stages. In the first stage of the process, recycled process char passes upwardly while reacting with steam and oxygen to yield a first stage synthesis gas containing hydrogen and oxides of carbon. In the second stage, the synthesis gas passes upwardly with coal and steam which react to yield partially gasified char entrained in a second stage product gas containing methane, hydrogen, and oxides of carbon. Agglomerated char particles, which result from caking coal particles in the second stage and are too heavy to be entrained in the second stage product gas, are removed through an outlet in the bottom of the second stage, the particles being separated from smaller char particles by a counter-current of steam injected into the outlet.
Podhorniak, Lynda V; Kamel, Alaa; Rains, Diane M
2010-05-26
A rapid multiresidue method that captures residues of the insecticide formetanate hydrochloride (FHCl) in selected fruits is described. The method was used to provide residue data for dietary exposure determinations of FHCl. Using an acetonitrile extraction with a dispersive cleanup based on AOAC International method 2007.01, also known as QuEChERS, which was further modified and streamlined, thousands of samples were successfully analyzed for FHCl residues. FHCl levels were determined both by liquid chromatography-single-stage mass spectrometry (LC-MS) and ultraperformance liquid chromatography (UPLC)-tandem mass spectrometry (LC-MS/MS). The target limit of detection (LOD) and the limit of quantitation (LOQ) achieved for FHCl were 3.33 and 10 ng/g, respectively, with LC-MS and 0.1 and 0.3 ng/g, respectively, with LC-MS/MS. Recoveries at these previously unpublished levels ranged from 95 to 109%. A set of 20-40 samples can be prepared in one working day by two chemists.
Roseman, Edward F.; Kennedy, Gregory W.; Craig, Jaquelyn; Boase, James; Soper, Karen
2011-01-01
In this report we describe how we adapted two techniques for sampling lake sturgeon (Acipenser fulvescens) and other fish early life history stages to meet our research needs in the Detroit River, a deep, flowing Great Lakes connecting channel. First, we developed a buoy-less method for sampling fish eggs and spawning activity using egg mats deployed on the river bottom. The buoy-less method allowed us to fish gear in areas frequented by boaters and recreational anglers, thus eliminating surface obstructions that interfered with recreational and boating activities. The buoy-less method also reduced gear loss due to drift when masses of floating aquatic vegetation would accumulate on buoys and lines, increasing the drag on the gear and pulling it downstream. Second, we adapted a D-frame drift net system formerly employed in shallow streams to assess larval lake sturgeon dispersal for use in the deeper (>8 m) Detroit River using an anchor and buoy system.
Compact cold stage for micro-computerized tomography imaging of chilled or frozen samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hullar, Ted; Anastasio, Cort, E-mail: canastasio@ucdavis.edu; Paige, David F.
2014-04-15
High resolution X-ray microCT (computerized tomography) can be used to image a variety of objects, including temperature-sensitive materials. In cases where the sample must be chilled or frozen to maintain sample integrity, either the microCT machine itself must be placed in a refrigerated chamber, or a relatively expensive commercial cold stage must be purchased. We describe here the design and construction of a low-cost custom cold stage suitable for use in a microCT imaging system. Our device uses a boron nitride sample holder, two-stage Peltier cooler, fan-cooled heat sink, and electronic controller to maintain sample temperatures as low as −25 °C ± 0.2 °C for the duration of a tomography acquisition. The design does not require modification to the microCT machine, and is easily installed and removed. Our custom cold stage represents a cost-effective solution for refrigerating CT samples for imaging, and is especially useful for shared equipment or machines unsuitable for cold room use.
Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard
2007-06-01
Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. 
This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy.
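The deliberate first-stage selection described here amounts to a small combinatorial search: among candidate clusters, choose the subset whose pooled profile of the chosen determinants best matches the population's. A toy sketch under simplifying assumptions (cluster names, sizes, profiles, and the distance measure are invented for illustration; a real survey would use richer criteria):

```python
import itertools

def choose_clusters(clusters, pop_profile, k):
    """Pick the k clusters whose pooled determinant profile (e.g. shares
    of nationality/literacy categories) is closest to the population
    profile, measured by total absolute difference."""
    def dist(combo):
        total = sum(c["n"] for c in combo)
        pooled = [sum(c["n"] * c["profile"][i] for c in combo) / total
                  for i in range(len(pop_profile))]
        return sum(abs(a - b) for a, b in zip(pooled, pop_profile))
    # Exhaustive search is fine for the small numbers of candidate
    # clusters typical of a city-level survey frame.
    return min(itertools.combinations(clusters, k), key=dist)
```

Two equal-sized clusters with complementary profiles reproduce a 50/50 population split exactly, so they would be preferred over a pair that skews the pooled profile.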
Angus, Val C; Entwistle, Vikki A; Emslie, Margaret J; Walker, Kim A; Andrew, Jane E
2003-01-01
Background A survey was carried out in the Grampian region of Scotland with a random sample of 10,000 adults registered with a General Practitioner in Grampian. The study complied with new legislation requiring a two-stage approach to identify and recruit participants, and examined the implications of this for response rates, non-response bias and speed of response. Methods A two-stage survey was carried out consistent with new confidentiality guidelines. Individuals were contacted by post and asked by the Director of Public Health to consent to receive a postal or electronic questionnaire about communicating their views to the NHS. Those who consented were then sent questionnaires. Response rates at both stages were measured. Results 25% of people returned signed consent forms and were invited to complete questionnaires. Respondents at the consent stage were more likely to be female (odds ratio (OR) response rate of women compared to men = 1.5, 95% CI 1.4, 1.7), less likely to live in deprived postal areas (OR = 0.59, 95% CI 0.45, 0.78) and more likely to be older (OR for people born in 1930–39 compared to people born in 1970–79 = 2.82, 95% CI 2.36, 3.37). 80% of people who were invited to complete questionnaires returned them. Response rates were higher among older age groups. The overall response rate to the survey was 20%, relative to the original number approached for consent (1951/10000). Conclusion The requirement of a separate, prior consent stage may significantly reduce overall survey response rates and necessitate the use of substantially larger initial samples for population surveys. It may also exacerbate non-response bias with respect to demographic variables. PMID:14622444
Yang, Ting-zhong
2006-03-01
To explore the pattern of transmission of human immunodeficiency virus through risky sexual behaviors (RSB) in floating workers coming from the countryside to the cities. Data were collected anonymously through a structured questionnaire survey of 1595 men from Hangzhou and Guangzhou cities, using a multi-stage sampling method. Data from both preliminary analyses and multivariate regression analysis showed the cumulative adoption of RSB over time and identified factors associated with adoption in this population from the two areas. 57.9%-88.1% of the study samples with pre-stage RSB (receiving shampoo, massage or leisure-seeking activities from "sexual workers") and 79.9% of those with commercial RSB initiated these behaviors during the period when they were working away from their home towns. The highest adoption rate (15.2%-26.8%) occurred in the third month after moving to the urban areas for pre-stage RSB, while the highest rate (14.4%) for commercial RSB occurred in the sixth month. The transition interval between the two behaviors was around 3 months. The cumulative rate peaked at 57.3%-70.4% for pre-stage RSB and 48.9% for commercial RSB. The cumulative adoption curves showed that the increase was more pronounced for pre-stage than for commercial RSB. Most of the early adopters of commercial RSB were married and held stronger hedonistic beliefs. Communication of sex information and behavioral adoption of RSB were associated with perceived stress and hedonistic beliefs. RSB epidemics appear to be social and group phenomena, suggesting that related social strategies should be developed in order to control RSB in this population.
Sleep spindle detection using deep learning: A validation study based on crowdsourcing.
Dakun Tan; Rui Zhao; Jinbo Sun; Wei Qin
2015-08-01
Sleep spindles are significant transient oscillations observed on the electroencephalogram (EEG) in stage 2 of non-rapid eye movement sleep. The deep belief network (DBN), which has had great success with images and speech, is still a novel method for developing sleep spindle detection systems. In this paper, crowdsourcing, in place of a gold standard, was applied to generate three differently labeled sets of samples, and three classes of datasets were constructed from combinations of these samples. An F1-score measure was estimated to compare the performance of the DBN against three other classifiers on these samples, with the DBN obtaining a result of 92.78%. A comparison of two feature extraction methods based on power spectral density was then made on the same dataset using the DBN. In addition, the trained DBN was applied to detect sleep spindles from raw EEG recordings and performed comparably to expert group consensus.
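For reference, the F1-score used above to compare the classifiers is simply the harmonic mean of precision and recall; a minimal sketch (the counts are illustrative, not the paper's results):

```python
def f1_score(tp, fp, fn):
    # Precision: fraction of detected spindles that are true spindles.
    precision = tp / (tp + fp)
    # Recall: fraction of true spindles that were detected.
    recall = tp / (tp + fn)
    # F1: harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)
```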
Interim analyses in 2 x 2 crossover trials.
Cook, R J
1995-09-01
A method is presented for performing interim analyses in long term 2 x 2 crossover trials with serial patient entry. The analyses are based on a linear statistic that combines data from individuals observed for one treatment period with data from individuals observed for both periods. The coefficients in this linear combination can be chosen quite arbitrarily, but we focus on variance-based weights to maximize power for tests regarding direct treatment effects. The type I error rate of this procedure is controlled by utilizing the joint distribution of the linear statistics over analysis stages. Methods for performing power and sample size calculations are indicated. A two-stage sequential design involving simultaneous patient entry and a single between-period interim analysis is considered in detail. The power and average number of measurements required for this design are compared to those of the usual crossover trial. The results indicate that, while there is minimal loss in power relative to the usual crossover design in the absence of differential carry-over effects, the proposed design can have substantially greater power when differential carry-over effects are present. The two-stage crossover design can also lead to more economical studies in terms of the expected number of measurements required, due to the potential for early stopping. Attention is directed toward normally distributed responses.
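The variance-based weights mentioned here follow the standard inverse-variance rule for combining two independent estimates, which minimizes the variance of the combined statistic and hence maximizes power for a fixed effect size. A minimal sketch (the function name and numbers are illustrative, not the paper's notation):

```python
def combine_estimates(est1, var1, est2, var2):
    # Inverse-variance weighting: each estimate is weighted by the
    # reciprocal of its variance, so more precise data count for more.
    w1, w2 = 1.0 / var1, 1.0 / var2
    est = (w1 * est1 + w2 * est2) / (w1 + w2)
    var = 1.0 / (w1 + w2)  # variance of the combined estimate
    return est, var
```

With equal variances this reduces to a plain average; with unequal variances the combined estimate is pulled toward the more precise source.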
Hyde, Embriette R.; Haarmann, Daniel P.; Lynne, Aaron M.; Bucheli, Sibyl R.; Petrosino, Joseph F.
2013-01-01
Human decomposition is a mosaic system with an intimate association between biotic and abiotic factors. Despite the integral role of bacteria in the decomposition process, few studies have catalogued bacterial biodiversity for terrestrial scenarios. To explore the microbiome of decomposition, two cadavers were placed at the Southeast Texas Applied Forensic Science facility and allowed to decompose under natural conditions. The bloat stage of decomposition, a stage easily identified in taphonomy and readily attributed to microbial physiology, was targeted. Each cadaver was sampled at two time points, at the onset and end of the bloat stage, from various body sites including internal locations. Bacterial samples were analyzed by pyrosequencing of the 16S rRNA gene. Our data show a shift from aerobic bacteria to anaerobic bacteria in all body sites sampled and demonstrate variation in community structure between bodies, between sample sites within a body, and between initial and end points of the bloat stage within a sample site. These data are best not viewed as points of comparison but rather additive data sets. While some species recovered are the same as those observed in culture-based studies, many are novel. Our results are preliminary and add to a larger emerging data set; a more comprehensive study is needed to further dissect the role of bacteria in human decomposition. PMID:24204941
Rigamonti, Ivo E; Brambilla, Carla; Colleoni, Emanuele; Jermini, Mauro; Trivellone, Valeria; Baumgärtner, Johann
2016-04-01
The paper deals with the study of the spatial distribution and the design of sampling plans for estimating nymph densities of the grape leafhopper Scaphoideus titanus Ball in vine plant canopies. In a reference vineyard sampled for model parameterization, leaf samples were repeatedly taken according to a multistage, stratified, random sampling procedure, and the data were subjected to an ANOVA. There were no significant differences in density either among the strata within the vineyard or between the two strata with basal and apical leaves. The significant differences between densities on trunk and productive shoots led to the adoption of two-stage (leaves and plants) and three-stage (leaves, shoots, and plants) sampling plans for trunk shoot- and productive shoot-inhabiting individuals, respectively. The mean crowding to mean relationship used to analyze the nymphs' spatial distribution revealed aggregated distributions. In both the enumerative and the sequential enumerative sampling plans, the number of leaves of trunk shoots, and of leaves and shoots of productive shoots, was kept constant while the number of plants varied. In additional vineyards, data were collected and used to test the applicability of the distribution model and the sampling plans. The tests confirmed the applicability 1) of the mean crowding to mean regression model on the plant and leaf stages for representing trunk shoot-inhabiting distributions, and on the plant, shoot, and leaf stages for productive shoot-inhabiting nymphs, 2) of the enumerative sampling plan, and 3) of the sequential enumerative sampling plan. In general, sequential enumerative sampling was more cost efficient than enumerative sampling.
ELISA for sulfonamides and its application for screening in water contamination.
Shelver, Weilin L; Shappell, Nancy W; Franek, Milan; Rubio, Fernando R
2008-08-13
Two enzyme-linked immunosorbent assays (ELISAs) were tested for their suitability for detecting sulfonamides in wastewater from various stages in wastewater treatment plants (WWTPs), the river into which the wastewater is discharged, and two swine-rearing facilities. The sulfamethoxazole ELISA cross-reacts with several compounds, achieving detection limits of <0.04 microg/L for sulfamethoxazole (SMX), sulfamethoxypyridine, sulfachloropyridine, and sulfamethoxine, whereas the sulfamethazine (SMZ) ELISA is more compound specific, with a detection limit of <0.03 microg/L. Samples from various stages of wastewater purification gave 0.6-3.1 microg/L by SMX-ELISA, whereas river samples were approximately 10-fold lower, ranging from below detection to 0.09 microg/L. Swine wastewater samples analyzed by the SMX-ELISA were either at or near detection limits from one facility, whereas the other facility had concentrations of approximately 0.5 microg/L, although LC-MS/MS did not confirm the presence of SMX. Sulfamethazine ELISA detected no SMZ in either WWTP or river samples. In contrast, wastewater samples from swine facilities analyzed by SMZ-ELISA were found to contain approximately 30 microg/L [piglet (50-100 lb) wastewater] and approximately 7 microg/L (market-weight hog wastewater). Sulfamethazine ELISA analyses of wastewater from another swine facility found concentrations to be near or below detection limits. A solid phase extraction method was used to isolate and concentrate sulfonamides from water samples prior to LC-MS/MS multiresidue confirmatory analysis. The recoveries at 1 microg/L fortification ranged from 42 +/- 4% for SMZ to 88 +/- 4% for SMX (n = 6). The ELISA results in the WWTPs were confirmed by LC-MS/MS, as sulfonamide multiresidue confirmatory analysis identified SMX, sulfapyridine, and sulfasalazine to be present in the wastewater.
Sulfamethazine presence at one swine-rearing facility was also confirmed by LC-MS/MS, demonstrating the usefulness of the ELISA technique as a rapid and high-throughput screening method.
Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey
2009-01-01
Objective: The purpose of this study was to apply a two-stage screening method to large-scale intelligence screening of military conscripts. Methods: We recruited 99 conscripted soldiers whose educational level was senior high school or lower as participants. Every participant took both the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R). Results: Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum single cut-off point for CLR was 66; the two window cut-off points were 49 and 66. Compared with two-stage positive screening, two-stage window screening increased both the area under the curve and the positive predictive value. Moreover, the cost of the two-stage window screening was 59% lower. Conclusion: The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and suggest that the WCST could replace the WAIS-R in future large-scale screenings for ID. PMID:21197345
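The two-stage window logic described above (a deferral window between the two CLR cut-off points, resolved by the full WAIS-R at stage 2) can be sketched as follows. This is a minimal illustration: the direction of the rule (low CLR suggesting ID) and the return labels are assumptions for clarity, not taken from the study.

```python
def two_stage_window_screen(clr, wais_fiq=None, low=49, high=66):
    """Two-stage window screening sketch.

    Stage 1: the WCST conceptual-level-responses (CLR) score alone
    classifies clear cases; scores inside the (low, high] window are
    deferred to a stage-2 WAIS-R assessment (FIQ <= 84 -> ID).
    The direction (low CLR -> likely ID) is an illustrative assumption.
    """
    if clr <= low:
        return "ID"            # clearly below the window: screen positive
    if clr > high:
        return "not ID"        # clearly above the window: screen negative
    # inside the window: resolve with the full WAIS-R
    if wais_fiq is None:
        return "refer to stage 2"
    return "ID" if wais_fiq <= 84 else "not ID"
```

The single-cutoff (two-stage positive) design corresponds to collapsing the window to one point, which is why the window variant can trade a few stage-2 assessments for better predictive value.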
Smart, Adam S; Tingley, Reid; Weeks, Andrew R; van Rooyen, Anthony R; McCarthy, Michael A
2015-10-01
Effective management of alien species requires detecting populations in the early stages of invasion. Environmental DNA (eDNA) sampling can detect aquatic species at relatively low densities, but few studies have directly compared detection probabilities of eDNA sampling with those of traditional sampling methods. We compare the ability of a traditional sampling technique (bottle trapping) and eDNA to detect a recently established invader, the smooth newt Lissotriton vulgaris vulgaris, at seven field sites in Melbourne, Australia. Over a four-month period, per-trap detection probabilities ranged from 0.01 to 0.26 among sites where L. v. vulgaris was detected, whereas per-sample eDNA estimates were much higher (0.29-1.0). Detection probabilities of both methods varied temporally (across days and months), but temporal variation appeared to be uncorrelated between methods. Only estimates of spatial variation were strongly correlated across the two sampling techniques. Environmental variables (water depth, rainfall, ambient temperature) were not clearly correlated with detection probabilities estimated via trapping, whereas eDNA detection probabilities were negatively correlated with water depth, possibly reflecting higher eDNA concentrations at lower water levels. Our findings demonstrate that eDNA sampling can be an order of magnitude more sensitive than traditional methods, and illustrate that traditional- and eDNA-based surveys can provide independent information on species distributions when occupancy surveys are conducted over short timescales.
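The sensitivity gap between trapping and eDNA reported above can be made concrete with the standard independent-samples detection model. The survey effort below is hypothetical and independence across samples is an assumption; only the per-sample probabilities come from the study.

```python
def cumulative_detection(p_single, n):
    """Probability of at least one detection in n independent samples,
    each with per-sample detection probability p_single."""
    return 1.0 - (1.0 - p_single) ** n

# Hypothetical effort: 10 traps at the lowest reported per-trap
# probability (0.01) versus 10 eDNA samples at the lowest per-sample
# eDNA estimate (0.29).
p_traps = cumulative_detection(0.01, 10)
p_edna = cumulative_detection(0.29, 10)
```

Under these assumptions the eDNA survey is near-certain to detect the newt while the trap survey remains unlikely to, which is the practical meaning of an order-of-magnitude difference in per-sample detection probability.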
Fernández del Río, R.; O'Hara, M.E.; Holt, A.; Pemberton, P.; Shah, T.; Whitehouse, T.; Mayhew, C.A.
2015-01-01
Background: The burden of liver disease in the UK has risen dramatically, and there is a need for improved diagnostics. Aims: To determine which breath volatiles are associated with the cirrhotic liver and hence diagnostically useful. Methods: A two-stage biomarker discovery procedure was used. Alveolar breath samples of 31 patients with cirrhosis and 30 healthy controls were analysed mass spectrometrically and compared (stage 1). Twelve of these patients had their breath analysed after liver transplant (stage 2). Five patients were followed longitudinally as in-patients in the post-transplant period. Results: Seven volatiles were elevated in the breath of patients versus controls. Of these, five showed a statistically significant decrease post-transplant: limonene, methanol, 2-pentanone, 2-butanone and carbon disulfide. On an individual basis, limonene has the best diagnostic capability (area under the receiver operating characteristic curve, AUROC, of 0.91), which improves when methanol, 2-pentanone and limonene are combined (AUROC 0.95). Following transplant, limonene shows wash-out characteristics. Conclusions: Limonene, methanol and 2-pentanone are breath markers for a cirrhotic liver. This study raises the potential to investigate these volatiles as markers for early-stage liver disease. By monitoring the wash-out of limonene following transplant, graft liver function can be non-invasively assessed. PMID:26501124
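A marker's AUROC, as reported for limonene above, equals the normalized Mann-Whitney U statistic and can be computed directly from case and control measurements. A minimal sketch; the values in the usage test are invented for illustration, not the study's data.

```python
def auroc(cases, controls):
    """Rank-based AUROC: the probability that a randomly chosen case
    has a higher marker value than a randomly chosen control (ties
    count one half). Equivalent to the Mann-Whitney U statistic
    divided by len(cases) * len(controls)."""
    wins = 0.0
    for x in cases:
        for y in controls:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(cases) * len(controls))
```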
Radiation methods for demercaptanization and desulfurization of oil products
NASA Astrophysics Data System (ADS)
Zaykina, R. F.; Zaykin, Yu. A.; Mamonova, T. B.; Nadirov, N. K.
2002-03-01
A two-stage method for the desulfurization of oil products is presented. In the first stage, sulfur-containing material is strongly oxidized to eliminate its chemical aggressiveness and promote its removal. In the second stage, desulfurization of the overall product is achieved by conventional methods.
Health technology assessment of medical devices: a survey of non-European union agencies.
Ciani, Oriana; Wilcher, Britni; Blankart, Carl Rudolf; Hatz, Maximilian; Rupel, Valentina Prevolnik; Erker, Renata Slabe; Varabyova, Yauheniya; Taylor, Rod S
2015-01-01
The aim of this study was to review and compare current health technology assessment (HTA) activities for medical devices across non-European Union HTA agencies. HTA activities for medical devices were evaluated from three perspectives: organizational structure, processes, and methods. Agencies were primarily selected based on membership of existing HTA networks. The data collection was performed in two stages: stage 1, agency Web site assessment using a standardized questionnaire, followed by review and validation of the collected data by a representative of the agency; and stage 2, semi-structured telephone interviews with key informants from a sub-sample of agencies. In total, thirty-six HTA agencies across twenty non-EU countries assessing medical devices were included. Twenty-seven of thirty-six (75 percent) agencies were judged at stage 1 to have adopted HTA-specific approaches for medical devices (MD-specific agencies) that were largely organizational or procedural. There appeared to be few differences in organization, processes and methods between MD-specific and non-MD-specific agencies. Although the majority (69 percent) of both categories of agency had specific methods guidance or policy for evidence submission, only one MD-specific agency had developed methodological guidelines specific to medical devices. In stage 2, many MD-specific agencies cited insufficient resources (budget, skilled employees), lack of coordination (between regulator and reimbursement bodies), and the inability to generalize findings from evidence synthesis as key challenges in the HTA of medical devices. The lack of evidence for differentiation in scientific methods for HTA of devices raises the question of whether HTA needs to develop new methods for medical devices or can instead adapt existing methodological approaches. In contrast, organizational and/or procedural adaptation of existing HTA agency frameworks to accommodate medical devices appears relatively commonplace.
Eichelberger, Jennifer S.; Braaten, P. J.; Fuller, D. B.; Krampe, Matthew S.; Heist, Edward J.
2014-01-01
Spawning of the federally endangered Pallid Sturgeon Scaphirhynchus albus is known to occur in the upper Missouri River basin, but progeny from natural reproductive events have not been observed and recruitment to juvenile or adult life stages has not been documented in recent decades. Identification of Pallid Sturgeon progeny is confounded by the fact that Shovelnose Sturgeon S. platorynchus occurs throughout the entire range of Pallid Sturgeon and the two species are essentially indistinguishable (morphometrically and meristically) during early life stages. Moreover, free embryos of sympatric Paddlefish Polyodon spathula are very similar to the two sturgeon species. In this study, three single-nucleotide polymorphism (SNP) assays were employed to screen acipenseriform free embryos and larvae collected from the upper Missouri River basin in 2011, 2012, and 2013. A mitochondrial DNA SNP discriminates Paddlefish from sturgeon, and specific multilocus genotypes at two nuclear DNA SNPs occurred in 98.9% of wild adult Pallid Sturgeon but only in 3% of Shovelnose Sturgeon sampled in the upper Missouri River. Individuals identified as potential Pallid Sturgeon based on SNP genotypes were further analyzed at 19 microsatellite loci for species discrimination. Out of 1,423 free embryos collected over 3 years of sampling, 971 Paddlefish, 446 Shovelnose Sturgeon, and 6 Pallid Sturgeon were identified. Additionally, 249 Scaphirhynchus spp. benthic larvae were screened, but no Pallid Sturgeon were detected. These SNP markers provide an efficient method of screening acipenseriform early life stages for the presence of Pallid Sturgeon in the Missouri River basin. Detection of wild Pallid Sturgeon free embryos in the upper Missouri and Yellowstone rivers supports the hypothesis that the failure of wild Pallid Sturgeon to recruit to the juvenile life stage in the upper Missouri River basin is caused by early life stage mortality rather than by lack of successful spawning.
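The screening cascade described above (mitochondrial SNP first, then nuclear multilocus genotypes, then microsatellite confirmation for candidates) can be sketched as a small decision function. The genotype codes in PALLID_ASSOCIATED are hypothetical placeholders, since the abstract does not give the actual alleles.

```python
# Hypothetical two-locus genotype codes; the real pallid-associated
# multilocus genotypes are not specified in the abstract.
PALLID_ASSOCIATED = {("AA", "GG"), ("AA", "GA")}

def screen_embryo(mt_is_sturgeon, nuclear_genotype):
    """Sketch of the SNP screening cascade: the mtDNA SNP separates
    Paddlefish from sturgeon; nuclear SNP genotypes then flag potential
    Pallid Sturgeon for 19-locus microsatellite confirmation."""
    if not mt_is_sturgeon:
        return "Paddlefish"
    if nuclear_genotype in PALLID_ASSOCIATED:
        return "potential Pallid Sturgeon: confirm with microsatellites"
    return "Shovelnose Sturgeon"
```

The value of the cascade is that the cheap SNP assays eliminate the overwhelming majority of specimens (971 Paddlefish and 446 Shovelnose Sturgeon here) before the more expensive microsatellite panel is run.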
Morton, David A.; Pippitt, Karly; Lamb, Sara; Colbert-Getz, Jorie M.
2016-01-01
Problem: Effectively solving problems as a team under stressful conditions is central to medical practice; however, because summative examinations in medical education must test individual competence, they are typically solitary assessments. Approach: Using two-stage examinations, in which students first answer questions individually (Stage 1) and then discuss them in teams prior to resubmitting their answers (Stage 2), is one method for rectifying this discordance. On the basis of principles of social constructivism, the authors hypothesized that two-stage examinations would lead to better retention of, specifically, items answered incorrectly at Stage 1. In fall 2014, they divided 104 first-year medical students into two groups of 52. Groups alternated each week between taking one- and two-stage examinations, such that each student completed 6 one-stage and 6 two-stage examinations. The authors reassessed 61 concepts on a final examination and, using Wilcoxon signed rank tests, compared performance for all concepts, and for just those concepts students initially missed, between the one- and two-stage conditions. Outcomes: Final examination performance on all previously assessed concepts was not significantly different between the one- and two-stage conditions (P = .77); however, performance on only those concepts that students initially answered incorrectly on a prior examination improved by 12% for the two-stage condition relative to the one-stage condition (P = .02, r = 0.17). Next Steps: Team assessment may be most useful for assessing concepts students find difficult, as opposed to all content. More research is needed to determine whether these results apply to all medical school topics and student cohorts. PMID:27049544
Automatic detection method for mura defects on display film surface using modified Weber's law
NASA Astrophysics Data System (ADS)
Kim, Myung-Muk; Lee, Seung-Ho
2014-07-01
We propose a method that automatically detects mura defects on display film surfaces using a modified version of Weber's law. The proposed method detects mura defects regardless of their properties and shapes by identifying regions perceived by human vision as mura, using pixel brightness and the distribution ratio of mura in the image histogram. The proposed detection method comprises five stages. In the first stage, the display film surface image is acquired and a gray-level shift is performed. In the second and third stages, the image histogram is acquired and analyzed, respectively. In the fourth stage, the mura range is acquired, followed by postprocessing in the fifth stage. Evaluations of the proposed method conducted using 200 display film mura image samples indicate a maximum detection rate of ~95.5%. Further, application of the Semu index for luminance mura in flat panel display (FPD) image quality inspection indicates that the proposed method is more reliable than a popular conventional method.
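The perceptual criterion underlying such methods can be sketched as a Weber-contrast threshold on a candidate region against its background. The 2% visibility threshold below is a placeholder assumption; the paper's actual criterion is derived from its modified Weber's law and the image histogram.

```python
def weber_contrast(region_mean, background_mean):
    """Weber contrast of a candidate region against its background."""
    return (region_mean - background_mean) / background_mean

def is_mura(region_mean, background_mean, threshold=0.02):
    """Flag a region as a mura candidate when the magnitude of its
    Weber contrast exceeds a visibility threshold. The 2% default is
    an illustrative placeholder, not the paper's derived criterion."""
    return abs(weber_contrast(region_mean, background_mean)) >= threshold
```

Taking the absolute value means both bright and dark mura are flagged, matching the goal of detecting defects regardless of their properties.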
Impact of Life-Cycle Stage and Gender on the Ability to Balance Work and Family Responsibilities.
ERIC Educational Resources Information Center
Higgins, Christopher; And Others
1994-01-01
Examined impact of gender and life-cycle stage on three components of work-family conflict using sample of 3,616 respondents. For men, levels of work-family conflict were moderately lower in each successive life-cycle stage. For women, levels were similar in two early life-cycle stages but were significantly lower in later life-cycle stage.…
Seguin, Maureen; Dodds, Catherine; Mugweni, Esther; McDaid, Lisa; Flowers, Paul; Wayal, Sonali; Zomer, Ella; Weatherburn, Peter; Fakoya, Ibidun; Hartney, Thomas; McDonagh, Lorraine; Hunter, Rachael; Young, Ingrid; Khan, Shabana; Freemantle, Nick; Chwaula, Jabulani; Sachikonye, Memory; Anderson, Jane; Singh, Surinder; Nastouli, Eleni; Rait, Greta; Burns, Fiona
2018-04-01
Timely diagnosis of human immunodeficiency virus (HIV) enables access to antiretroviral treatment, which reduces mortality, morbidity and further transmission in people living with HIV. In the UK, late diagnosis among black African people persists, and novel methods to enhance HIV testing in this population are needed. Objectives: To develop a self-sampling kit (SSK) intervention to increase HIV testing among black Africans, using existing community and health-care settings (stage 1), and to assess the feasibility of a Phase III evaluation (stage 2). Design: A two-stage, mixed-methods design. Stage 1 involved a systematic literature review, focus groups and interviews with key stakeholders and black Africans. The data obtained provided the theoretical base for intervention development and operationalisation. Stage 2 was a prospective, non-randomised study of a provider-initiated, HIV SSK distribution intervention targeted at black Africans. The intervention was assessed for cost-effectiveness, and a process evaluation explored feasibility, acceptability and fidelity. Setting: Twelve general practices and three community settings in London. Main outcome measure: HIV SSK return rate. Results: Stage 1 - the systematic review revealed support for HIV SSKs, but with scant evidence on their use and clinical effectiveness among black Africans. Although the qualitative findings supported SSK distribution in settings already used by black Africans, concerns were raised about the complexity of the SSK and the acceptability of targeting. These findings were used to develop a theoretically informed intervention. Stage 2 - of the 349 eligible people approached, 125 (35.8%) agreed to participate. Data from 119 were included in the analysis; 54.5% (65/119) of those who took a kit returned a sample; 83.1% of tests returned were HIV negative; and 16.9% were not processed because of insufficient samples. The process evaluation showed the time pressures of the research process to be a significant barrier to feasibility.
Other major barriers were difficulties with the SSK itself and ethnic targeting in general practice settings. The convenience and privacy associated with the SSK were described as beneficial aspects, and those who used the kit mostly found the intervention to be acceptable. Research governance delays prevented implementation in Glasgow. Owing to the study failing to recruit adequate numbers (the intended sample was 1200 participants), we were unable to evaluate the clinical effectiveness of SSKs in increasing HIV testing in black African people. No samples were reactive, so we were unable to assess pathways to confirmatory testing and linkage to care. Our findings indicate that, although aspects of the intervention were acceptable, ethnic targeting and the SSK itself were problematic, and scale-up of the intervention to a Phase III trial was not feasible. The preliminary economic model suggests that, for the acceptance rate and test return seen in the trial, the SSK is potentially a cost-effective way to identify new infections of HIV. Sexual and public health services are increasingly utilising self-sampling technologies. However, alternative, user-friendly SSKs that meet user and provider preferences and UK regulatory requirements are needed, and additional research is required to understand clinical effectiveness and cost-effectiveness for black African communities. This study is registered as PROSPERO CRD42014010698 and Integrated Research Application System project identification 184223. The National Institute for Health Research Health Technology Assessment programme and the BHA for Equality in Health and Social Care.
D'Andrea, G; Capalbo, G; Volpe, M; Marchetti, M; Vicentini, F; Capelli, G; Cambieri, A; Cicchetti, A; Ricciardi, G; Catananti, C
2006-01-01
Our main purpose was to evaluate the organizational appropriateness of admissions to a university hospital by comparing two iso-gravity classification systems, APR-DRG and Disease Staging, with the Italian version of the AEP (PRUO). Our analysis focused on admissions made in 2001 related to specific Diagnosis Related Groups (DRGs) which, according to an Italian law, would be considered at high risk of inappropriateness if treated as ordinary admissions. The results obtained using the two classification systems did not show statistically significant differences with respect to the total number of admissions. On the other hand, some DRGs showed statistically significant differences due to the different algorithms used by the two systems to attribute severity levels. For almost all of the DRGs studied, the AEP-based analysis of a sample of medical records showed a higher number of inappropriate admissions than expected from the iso-gravity classification methods. The difference is possibly due to the tolerability percentage limits fixed by the law for each DRG. Therefore, the authors suggest an integrated use of the two methods to evaluate the organizational appropriateness of hospital admissions.
Time-resolved light emission of a, c, and r-cut sapphires shock-compressed to 65 GPa
NASA Astrophysics Data System (ADS)
Liu, Q. C.; Zhou, X. M.
2018-04-01
To investigate light emission and dynamic deformation behaviors, sapphire (single-crystal Al2O3) samples with three crystallographic orientations (a-, c-, and r-cut) were shock-compressed by the planar impact method, with final stresses ranging from 47 to 65 GPa. Emission radiance and velocity-time profiles were measured simultaneously with a fast pyrometer and a Doppler pin system in each experiment. Wave profile results show anisotropic elastic-plastic transitions, confirming observations in the literature. Under a final shock stress of about 52 GPa, lower emission intensity is observed in the r-cut sample, in agreement with a previous report in the literature. When the final shock stress increases to 57 GPa and 65 GPa, the spectral radiance histories of the r-cut sample show two stages with distinct features. In the first stage, the emission intensity of the r-cut sample is lower than those of the other two, again in agreement with the previous report. In the second stage, the spectral radiance of the r-cut sample increases with time at a much higher rate and finally peaks above those of the a- and c-cut samples. These observations (conversion to intensified emission in the r-cut) may indicate activation of a second slip system and formation of shear bands, which are discussed in terms of resolved shear stress calculations for the slip systems in each of the three cuts under shock compression.
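The resolved shear stress on each candidate slip system, used above to discuss which system activates in each cut, follows Schmid's law. The angles in the usage test are illustrative, not the actual sapphire slip-system geometry.

```python
import math

def resolved_shear_stress(sigma, phi_deg, lambda_deg):
    """Schmid's law: tau = sigma * cos(phi) * cos(lambda), where phi is
    the angle between the loading axis and the slip-plane normal, and
    lambda the angle between the loading axis and the slip direction."""
    return sigma * math.cos(math.radians(phi_deg)) * math.cos(math.radians(lambda_deg))
```

Because the angle pair differs for each crystallographic cut, the same final shock stress resolves to different shear stresses on each slip system, which is why the a-, c-, and r-cut samples can deform (and emit) differently.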
Carinhena, Glauber; Siqueira, Danilo Furquim; Sannomiya, Eduardo Kazuo
2014-01-01
This study was conducted with the aim of adapting the methods developed by Martins and Sakima for assessing skeletal maturation by cervical vertebrae on the pubertal growth spurt (PGS) curve. It also aimed to test the reliability of, and agreement between, those methods and the hand-wrist radiograph method, compared two by two and all together. The sample comprised 72 radiographs (36 lateral radiographs of the head and 36 hand-wrist radiographs) of 36 subjects with Down's syndrome (DS), 13 female and 23 male, aged between 8 years 6 months and 18 years 7 months, with an average age of 13 years 10 months. Results revealed that adapting the methods developed by Martins and Sakima to assess skeletal maturation by cervical vertebrae on the PGS curve is practical and useful in determining the stage of growth and development of individuals. The stages of maturation evaluated by cervical vertebrae and by the ossification centers observed in hand-wrist radiographs were considered reliable, with an excellent level of agreement between the methods of Hassel and Farman, of Baccetti, Franchi and McNamara Jr, and of Martins and Sakima. Additionally, agreement among the three methods used to assess skeletal maturation ranged from reasonable to good and was statistically significant.
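Agreement between two staging methods rated on the same subjects is commonly quantified with Cohen's kappa. A minimal pure-Python version for illustration; the abstract does not state which agreement statistic the study used, so this is an assumed example of the kind of between-method comparison described.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for agreement between two methods on the same
    subjects: observed agreement corrected for chance agreement."""
    n = len(labels_a)
    categories = set(labels_a) | set(labels_b)
    # observed proportion of subjects the two methods stage identically
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # agreement expected by chance from each method's marginal rates
    pe = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
             for c in categories)
    return (po - pe) / (1 - pe)
```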
The measurement of an aspherical mirror by three-dimensional nanoprofiler
NASA Astrophysics Data System (ADS)
Tokuta, Yusuke; Okita, Kenya; Okuda, Kohei; Kitayama, Takao; Nakano, Motohiro; Nakatani, Shun; Kudo, Ryota; Yamamura, Kazuya; Endo, Katsuyoshi
2015-09-01
Aspherical optical elements with high accuracy are important in several fields, such as third-generation synchrotron radiation and extreme-ultraviolet lithography. Consequently, demand is rising for methods capable of measuring aspherical or free-form surfaces with nanometer resolution. Our purpose is to develop a non-contact profiler that measures free-form surfaces directly with a figure-error repeatability of less than 1 nm PV. To this end, we have developed a three-dimensional nanoprofiler that traces the normal vectors of the sample surface. The measurement principle is based on the straightness of laser light and the accuracy of a rotational goniometer. The machine consists of four rotational stages, one translational stage, and an optical head in which a quadrant photodiode (QPD) and a laser head are located at optically equivalent positions. In this measurement method, we align the incident beam with the reflected beam by controlling the five stages, and determine the normal vectors and coordinates of the surface from the signals of the goniometers, the translational stage and the QPD. The three-dimensional figure is then obtained from the normal vectors and coordinates by a reconstruction algorithm. To evaluate the performance of the machine, we measured a concave aspherical mirror ten times, calculated the measurement repeatability from the ten results, and evaluated the measurement uncertainty by comparing the result with that measured by an interferometer. The repeatability of measurement was 2.90 nm (σ), and the difference between the two profiles was +/-20 nm. We conclude that the two profiles are consistent, considering the systematic errors of each machine.
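The final reconstruction step, recovering a figure from measured normal vectors, reduces in one dimension to integrating surface slopes. A minimal trapezoidal sketch under that simplification; the machine's actual algorithm reconstructs a full 3-D figure from normal vectors and coordinates.

```python
def profile_from_slopes(slopes, dx):
    """Reconstruct a 1-D height profile from surface slopes sampled at
    spacing dx, by cumulative trapezoidal integration. A 2-D normal-
    vector map is reconstructed analogously (e.g. by least-squares
    integration); this 1-D version only illustrates the principle."""
    heights = [0.0]
    for i in range(1, len(slopes)):
        heights.append(heights[-1] + 0.5 * (slopes[i - 1] + slopes[i]) * dx)
    return heights
```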
Zooplankton community analysis in the Changjiang River estuary by single-gene-targeted metagenomics
NASA Astrophysics Data System (ADS)
Cheng, Fangping; Wang, Minxiao; Li, Chaolun; Sun, Song
2014-07-01
DNA barcoding provides accurate identification of zooplankton species through all life stages, and single-gene-targeted metagenomic analysis based on DNA barcode databases can facilitate long-term monitoring of zooplankton communities. With the help of available zooplankton databases, the zooplankton community of the Changjiang (Yangtze) River estuary was studied using a single-gene-targeted metagenomic method to estimate the species richness of this community. A total of 856 mitochondrial cytochrome oxidase subunit 1 (cox1) gene sequences were determined. The environmental barcodes were clustered into 70 molecular operational taxonomic units (MOTUs). Forty-two MOTUs matched barcoded marine organisms with more than 90% similarity and were assigned to either the species level (similarity > 96%) or the genus level (similarity < 96%). Sibling species could also be distinguished. Many species that were overlooked by morphological methods were identified by molecular methods, especially gelatinous zooplankton and merozooplankton that were likely sampled at different life history phases. Zooplankton community structures differed significantly among the samples. The MOTU spatial distributions were influenced by the ecological habits of the corresponding species. In conclusion, single-gene-targeted metagenomic analysis is a useful tool for zooplankton studies, with which specimens from all life history stages can be identified quickly and effectively against a comprehensive database.
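The similarity thresholds quoted above define a simple assignment rule for each MOTU's best database match; the taxon names in the usage test are example placeholders.

```python
def assign_motu(similarity, species, genus):
    """Assignment rule from the study: matches above 90% similarity are
    assigned to species level when similarity > 96%, otherwise to genus
    level; below 90% the MOTU is left unassigned."""
    if similarity > 96:
        return ("species", species)
    if similarity > 90:
        return ("genus", genus)
    return ("unassigned", None)
```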
Sample Selection for Training Cascade Detectors.
Vállez, Noelia; Deniz, Oscar; Bueno, Gloria
2015-01-01
Automatic detection systems usually require large and representative training datasets in order to obtain good detection and false positive rates. In practice, the positive set has few samples, while the negative set must represent anything except the object of interest; the negative set therefore typically contains orders of magnitude more images than the positive set. However, imbalanced training databases lead to biased classifiers. In this paper, we focus our attention on a negative sample selection method to properly balance the training data for cascade detectors. The method is based on selecting the most informative false positive samples generated in one stage to feed the next stage. The results show that the proposed cascade detector with sample selection obtains, on average, a better partial AUC and a smaller standard deviation than the other cascade detectors compared.
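The selection step, keeping the false positives the current stage scores most confidently as positives so that the next stage trains on the most informative mistakes, can be sketched as follows (a generic hard-negative-mining sketch, not the paper's exact criterion):

```python
def select_hard_negatives(negatives, scores, k):
    """Keep the k negative samples the current cascade stage scores
    highest (i.e. its most confident false positives): these are the
    most informative negatives to feed into the next stage."""
    ranked = sorted(zip(scores, negatives), key=lambda sn: sn[0], reverse=True)
    return [sample for _, sample in ranked[:k]]
```

Capping the selection at k per stage is what keeps the negative set from dwarfing the positive set and biasing the classifier.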
SD-SEM: sparse-dense correspondence for 3D reconstruction of microscopic samples.
Baghaie, Ahmadreza; Tafti, Ahmad P; Owen, Heather A; D'Souza, Roshan M; Yu, Zeyun
2017-06-01
Scanning electron microscopy (SEM) imaging has been a principal component of many studies in biomedical, mechanical, and materials sciences since its emergence. Despite the high resolution of captured images, they remain two-dimensional (2D). In this work, a novel framework using sparse-dense correspondence is introduced and investigated for 3D reconstruction of stereo SEM images. SEM micrographs from microscopic samples are captured by tilting the specimen stage by a known angle. The pair of SEM micrographs is then rectified using sparse scale invariant feature transform (SIFT) features/descriptors and a contrario RANSAC for matching outlier removal to ensure a gross horizontal displacement between corresponding points. This is followed by dense correspondence estimation using dense SIFT descriptors and employing a factor graph representation of the energy minimization functional and loopy belief propagation (LBP) as means of optimization. Given the pixel-by-pixel correspondence and the tilt angle of the specimen stage during the acquisition of micrographs, depth can be recovered. Extensive tests reveal the strength of the proposed method for high-quality reconstruction of microscopic samples.
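Once pixel-wise correspondence is available, depth recovery from a tilt pair is often approximated with the classical eucentric-tilt stereo relation below. This is one standard photogrammetric approximation under a parallel-projection assumption, not necessarily the exact geometry used in the paper.

```python
import math

def depth_from_disparity(disparity, tilt_deg):
    """Eucentric-tilt SEM stereo approximation:
    z = d / (2 * sin(alpha / 2)), with d the horizontal disparity of a
    matched point pair and alpha the tilt angle between micrographs.
    Assumes parallel projection and symmetric tilt about the feature."""
    return disparity / (2.0 * math.sin(math.radians(tilt_deg) / 2.0))
```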
Three-beam interferogram analysis method for surface flatness testing of glass plates and wedges
NASA Astrophysics Data System (ADS)
Sunderland, Zofia; Patorski, Krzysztof
2015-09-01
When testing transparent plates that have high-quality flat surfaces and a small angle between them, the three-beam interference phenomenon is observed. Since the reference beam and the object beams reflected from both the front and back surfaces of a sample are detected, the recorded intensity distribution may be regarded as a sum of three fringe patterns. Images of this type cannot be successfully analyzed with standard interferogram analysis methods; they do, however, contain useful information on the flatness of the tested plate's surfaces and on its optical thickness variations. Several methods have been elaborated to decode the plate parameters. Our technique represents a competitive solution that allows retrieval of the phase components of the three-beam interferogram. It requires recording two images: a three-beam interferogram and a two-beam one with the reference beam blocked. Mutually subtracting these images leads to an intensity distribution which, under some assumptions, provides access to the two component fringe sets that encode surface flatness. At various stages of processing we take advantage of nonlinear operations as well as single-frame interferogram analysis methods. The two-dimensional continuous wavelet transform (2D CWT) is used to separate a particular fringe family from the overall interferogram intensity distribution, as well as to estimate the phase distribution of a pattern. We distinguish two processing paths depending on the relative density of the fringe sets, which is connected with the geometry of the sample and the optical setup. The proposed method is tested on simulated data.
Image parameters for maturity determination of a composted material containing sewage sludge
NASA Astrophysics Data System (ADS)
Kujawa, S.; Nowakowski, K.; Tomczak, R. J.; Boniecki, P.; Dach, J.
2013-07-01
Composting is one of the best methods for the management of sewage sludge. In a properly conducted composting process, it is important to identify early the moment at which the material reaches the young compost stage. The objective of this study was to determine which parameters of images of composted material samples can be used to evaluate the degree of compost maturity. The study focused on two types of compost: sewage sludge with corn straw, and sewage sludge with rapeseed straw. The samples were photographed on a stand prepared for image acquisition using VIS, UV-A and mixed (VIS + UV-A) light; in the case of UV-A light, three exposure times were used. The values of 46 parameters were estimated for each of the images extracted from the photographs of the composted material samples, and exemplary averaged values of selected parameters obtained on successive sampling days are presented. All of the parameters obtained from the images form the basis for preparing the training, validation and test data sets necessary for the development of neural models for classification of the young compost stage.
Use of polymerase chain reaction in human African trypanosomiasis stage determination and follow-up.
Truc, P.; Jamonneau, V.; Cuny, G.; Frézil, J. L.
1999-01-01
Stage determination of human African trypanosomiasis is based on the detection of parasites and measurements of biological changes in the cerebrospinal fluid (CSF) (concentration of white blood cells > 5 cells per mm3 and increased total protein levels). The patient is treated accordingly. Demonstration of the absence or presence of trypanosomes by the double centrifugation technique is still the only test available to clinicians for assessing treatment success. In this study, however, we evaluate the polymerase chain reaction (PCR) as a tool for assessing the disease stage of trypanosomiasis and for determining whether treatment has been successful. All 15 study patients considered to be in the advanced stage of the disease were PCR positive; however, trypanosomes were demonstrated by double centrifugation in only 11 patients. Of the five remaining patients, who were considered to be in the early stage, PCR and double centrifugation were negative. Following treatment, 13 of the 15 second-stage patients were found to be negative for the disease in at least two samples by PCR and double centrifugation. Two others were still positive by PCR immediately and one month after the treatment. Trypanosome DNA detection using PCR suggested that the two positive patients were not cured but that their possible relapse could not be identified by a search for parasites using the double centrifugation technique. Further evaluation of the PCR method is required, in particular to determine whether PCR assays could be used in studies on patients who fail to respond to melarsoprol, as observed in several foci. PMID:10534898
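The classical stage-determination rule in the first sentence above can be written as a small decision function; the boolean encoding is an illustrative simplification of the clinical criteria.

```python
def is_second_stage(csf_wbc_per_mm3, trypanosomes_seen, protein_elevated=False):
    """Sketch of the classical staging rule for human African
    trypanosomiasis: second (meningo-encephalitic) stage when
    trypanosomes are found in the CSF, the white-blood-cell count
    exceeds 5 cells per mm3, or total protein is increased."""
    return trypanosomes_seen or csf_wbc_per_mm3 > 5 or protein_elevated
```

The study's point is that PCR on CSF can flag second-stage or uncured patients whom this parasite-detection-based rule misses.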
Figueiredo, Viviane Rossi; Cardoso, Paulo Francisco Guerreiro; Jacomelli, Márcia; Demarzo, Sérgio Eduardo; Palomino, Addy Lidvina Mejia; Rodrigues, Ascédio José; Terra, Ricardo Mingarini; Pego-Fernandes, Paulo Manoel; Carvalho, Carlos Roberto Ribeiro
2015-01-01
Objective: Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a minimally invasive, safe and accurate method for collecting samples from mediastinal and hilar lymph nodes. This study focused on the initial results obtained with EBUS-TBNA for lung cancer and lymph node staging at three teaching hospitals in Brazil. Methods: This was a retrospective analysis of patients diagnosed with lung cancer and submitted to EBUS-TBNA for mediastinal lymph node staging. The EBUS-TBNA procedures, which involved the use of an EBUS scope, an ultrasound processor, and a compatible, disposable 22 G needle, were performed while the patients were under general anesthesia. Results: Between January of 2011 and January of 2014, 149 patients underwent EBUS-TBNA for lymph node staging. The mean age was 66 ± 12 years, and 58% were male. A total of 407 lymph nodes were sampled by EBUS-TBNA. The most common types of lung neoplasm were adenocarcinoma (in 67%) and squamous cell carcinoma (in 24%). For lung cancer staging, EBUS-TBNA was found to have a sensitivity of 96%, a specificity of 100%, and a negative predictive value of 85%. Conclusions: We found EBUS-TBNA to be a safe and accurate method for lymph node staging in lung cancer patients. PMID:25750671
NASA Astrophysics Data System (ADS)
Moler, Perry J.
The purpose of this study was to understand what perceptions junior and senior engineering & technology students have about change, change readiness, and selected attributes, skills, and abilities. The selected attributes, skills, and abilities for this study were lifelong learning, leadership, and self-efficacy. The business environment of today is dynamic, with any number of internal and external events requiring an organization to adapt through the process of organizational development. Organizational developments affect businesses as a whole, but they are most evident in fields related to engineering and technology, which require employees working through such developments to be flexible and adaptable to a new professional environment. This study was an explanatory sequential mixed-methods design: Stage One was an online survey that collected individuals' perceptions of change, change readiness, and the associated attributes, skills, and abilities; Stage Two was a face-to-face interview with a random sample of individuals who agreed in Stage One to be interviewed, conducted to understand why students' perceptions are what they are. By using a mixed-methods study, a more complete understanding of students' current perceptions was developed, giving external stakeholders such as human resource managers more insight into the individuals they seek to recruit. In Stage One, one-sample t-tests with a predicted mean of 3.000 indicated that engineering & technology students have positive perceptions of change (mean = 3.7024), change readiness (mean = 3.9313), lifelong learning (mean = 4.571), leadership (mean = 4.036), and self-efficacy (mean = 4.321). A one-way ANOVA was also conducted to examine differences between traditional and non-traditional students regarding change and change readiness; it indicated no significant differences between the two groups.
The results from Stage Two showed that students perceived change as both positive and negative, a perception that stems from their life experiences rather than from educational or professional experiences. The same can be said for the concepts of change readiness, lifelong learning, leadership, and self-efficacy. This indicates that engineering & technology programs should incorporate these concepts into their curricula to better prepare their students to enter professional careers.
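The Stage One analysis above rests on a one-sample t-test against a hypothesized neutral mean of 3.000. A minimal sketch of that statistic (using hypothetical Likert-scale responses, not the study's data, which are not reproduced in the abstract):

```python
import math
import statistics

def one_sample_t(sample, mu0):
    """t statistic for H0: population mean == mu0."""
    n = len(sample)
    mean = statistics.fmean(sample)
    sd = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
    t = (mean - mu0) / (sd / math.sqrt(n))
    return t, n - 1  # statistic and degrees of freedom

# Hypothetical 1-5 Likert responses; mean 3.9 sits above the neutral midpoint.
scores = [4, 3, 5, 4, 4, 3, 4, 5, 4, 3]
t, df = one_sample_t(scores, 3.0)
```

With sample means well above the neutral midpoint, t is large and positive, matching the study's finding of positive perceptions; the p-value would then be read from a t distribution with n - 1 degrees of freedom.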
Magnetic fingerprint of the sediment load in a meander bend section of the Seine River (France)
NASA Astrophysics Data System (ADS)
Kayvantash, D.; Cojan, I.; Kissel, C.; Franke, C.
2017-06-01
This study aims to evaluate the potential of magnetic methods to determine the composition of the sediment load in a cross section of an unmanaged meander in the upstream stretch of the Seine River (Marnay-sur-Seine). Suspended particulate matter (SPM) was collected based on a regular sampling scheme along a cross section of the river, at two different depth levels: during a low-water stage (May 2014) and a high-water stage (February 2015). Riverbed sediments (RBS) were collected during the low-water stage and supplementary samples were taken from the outer and inner banks. Magnetic properties of the dry bulk SPM and sieved RBS and bank sediments were analysed. After characterizing the main magnetic carrier as magnetite, hysteresis parameters were measured, giving access to the grain size and the concentration of these magnetite particles. The results combined with sedimentary grain size data were compared to the three-dimensional velocity profile of the river flow. In the RBS where the magnetic grain size is rather uniform, the concentration of magnetite is inversely proportional to the mean grain size of the total sediment indicating that magnetite is strongly associated with the fine sedimentary fraction. The same pattern is observed in the samples from the outer and inner banks. During the low-water stage, the uniformly fine SPM grain size distribution characterizes the wash load. The magnetic fraction is also relatively fine (within the pseudo single domain range) with concentration similar to that of the fine RBS fraction. During the high-water stage, SPM samples correspond to mixtures of wash load and resuspended sediment from the bedload and riverbanks. Here, the grain size distribution is heterogeneous across the section showing coarser particles compared to those in the low-water stage and more varying magnetite concentrations while the magnetic grain size is like that of the low-water stage. 
The magnetite concentration in the high-water SPM can be modelled as a mixture of the magnetite concentrations of the different grain size fractions, thus quantifying the impact of resuspension in the cross section.
Evaluation of portable air samplers for monitoring airborne culturable bacteria
NASA Technical Reports Server (NTRS)
Mehta, S. K.; Bell-Robinson, D. M.; Groves, T. O.; Stetzenbach, L. D.; Pierson, D. L.
2000-01-01
Airborne culturable bacteria were monitored at five locations (three in an office/laboratory building and two in a private residence) in a series of experiments designed to compare the efficiency of four air samplers: the Andersen two-stage, Burkard portable, RCS Plus, and SAS Super 90 samplers. A total of 280 samples was collected. The four samplers were operated simultaneously, each sampling 100 L of air with collection on trypticase soy agar. The data were corrected by applying positive hole conversion factors for the Burkard portable, Andersen two-stage, and SAS Super 90 air samplers, and were expressed as log10 values prior to statistical analysis by analysis of variance. The Burkard portable air sampler retrieved the highest number of airborne culturable bacteria at four of the five sampling sites, followed by the SAS Super 90 and the Andersen two-stage impactor. The number of bacteria retrieved by the RCS Plus was significantly less than those retrieved by the other samplers. Among the predominant bacterial genera retrieved by all samplers were Staphylococcus, Bacillus, Corynebacterium, Micrococcus, and Streptococcus.
Pérez Cid, B; Fernández Alborés, A; Fernández Gómez, E; Faliqé López, E
2001-08-01
The conventional three-stage BCR sequential extraction method was employed for the fractionation of heavy metals in sewage sludge samples from an urban wastewater treatment plant and from an olive oil factory. The results obtained for Cu, Cr, Ni, Pb and Zn in these samples were compared with those attained by a simplified extraction procedure based on microwave single extractions and using the same reagents as employed in each individual BCR fraction. The microwave operating conditions in the single extractions (heating time and power) were optimized for all the metals studied in order to achieve an extraction efficiency similar to that of the conventional BCR procedure. The measurement of metals in the extracts was carried out by flame atomic absorption spectrometry. The results obtained in the first and third fractions by the proposed procedure were, for all metals, in good agreement with those obtained using the BCR sequential method. Although in the reducible fraction the extraction efficiency of the accelerated procedure was inferior to that of the conventional method, the overall metals leached by both microwave single and sequential extractions were basically the same (recoveries between 90.09 and 103.7%), except for Zn in urban sewage sludges where an extraction efficiency of 87% was achieved. Chemometric analysis showed a good correlation between the results given by the two extraction methodologies compared. The application of the proposed approach to a certified reference material (CRM-601) also provided satisfactory results in the first and third fractions, as it was observed for the sludge samples analysed.
Detecting a Weak Association by Testing its Multiple Perturbations: a Data Mining Approach
NASA Astrophysics Data System (ADS)
Lo, Min-Tzu; Lee, Wen-Chung
2014-05-01
Many risk factors/interventions in epidemiologic/biomedical studies have minuscule effects. To detect such weak associations, one needs a study with a very large sample size (the number of subjects, n). The n of a study can be increased, but unfortunately only to an extent. Here, we propose a novel method which hinges on increasing sample size in a different direction: the total number of variables (p). We construct a p-based "multiple perturbation test", and conduct power calculations and computer simulations to show that it can achieve very high power to detect weak associations when p can be made very large. As a demonstration, we apply the method to a genome-wide association study on age-related macular degeneration and identify two novel genetic variants that are significantly associated with the disease. The p-based method may set the stage for a new paradigm of statistical tests.
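The core intuition, that many individually undetectable signals can be pooled into one decisive test, can be illustrated with Stouffer's combined z-score. This is a standard meta-analytic device used here only as a toy stand-in for the paper's multiple perturbation test, with made-up numbers:

```python
import math

def stouffer_z(z_scores):
    """Combine p independent z statistics: Z = sum(z_i) / sqrt(p)."""
    return sum(z_scores) / math.sqrt(len(z_scores))

# 400 perturbed variables, each carrying a z-shift of only 0.15: far below
# the usual 1.96 threshold individually, decisive once combined.
combined = stouffer_z([0.15] * 400)
```

Any single variable with z = 0.15 is hopeless on its own, but combined Z = 0.15 * sqrt(400) = 3.0, comfortably significant; this is the sense in which growing p can substitute for growing n.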
NASA Technical Reports Server (NTRS)
Naesset, Erik; Gobakken, Terje; Bollandsas, Ole Martin; Gregoire, Timothy G.; Nelson, Ross; Stahl, Goeran
2013-01-01
Airborne scanning LiDAR (Light Detection and Ranging) has emerged as a promising tool to provide auxiliary data for sample surveys aiming at estimation of above-ground tree biomass (AGB), with potential applications in REDD forest monitoring. For larger geographical regions such as counties, states or nations, it is not feasible to collect airborne LiDAR data continuously ("wall-to-wall") over the entire area of interest. Two-stage cluster survey designs have therefore been demonstrated in which LiDAR data are collected along selected individual flight-lines treated as clusters, with ground plots sampled along these LiDAR swaths. Recently, analytical AGB estimators and associated variance estimators that quantify the sampling variability have been proposed. Empirical studies employing these estimators have shown a seemingly equal or even larger uncertainty of the AGB estimates obtained with extensive use of LiDAR data to support the estimation, as compared to pure field-based estimates employing estimators appropriate under simple random sampling (SRS). However, comparison of uncertainty estimates under SRS and sophisticated two-stage designs is complicated by large differences in the designs and assumptions. In this study, probability-based principles for estimation and inference were followed. We assumed designs for a field sample and a LiDAR-assisted survey of Hedmark County (27,390 km2), Norway, that are more comparable than those assumed in previous studies. The field sample consisted of 659 systematically distributed National Forest Inventory (NFI) plots, and the airborne scanning LiDAR data were collected along 53 parallel flight-lines flown over the NFI plots. We compared AGB estimates based on the field survey only, assuming SRS, against corresponding estimates assuming two-phase (double) sampling with LiDAR and employing model-assisted estimators.
We also compared AGB estimates based on the field survey only assuming two-stage sampling (the NFI plots being grouped in clusters) against corresponding estimates assuming two-stage sampling with the LiDAR and employing model-assisted estimators. For each of the two comparisons, the standard errors of the AGB estimates were consistently lower for the LiDAR-assisted designs. The overall reduction of the standard errors in the LiDAR-assisted estimation was around 40-60% compared to the pure field survey. We conclude that the previously proposed two-stage model-assisted estimators are inappropriate for surveys with unequal lengths of the LiDAR flight-lines and new estimators are needed. Some options for design of LiDAR-assisted sample surveys under REDD are also discussed, which capitalize on the flexibility offered when the field survey is designed as an integrated part of the overall survey design as opposed to previous LiDAR-assisted sample surveys in the boreal and temperate zones which have been restricted by the current design of an existing NFI.
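The model-assisted logic referred to above can be sketched with the classical difference estimator: predict the response for every population unit from the auxiliary (LiDAR) model, then correct the population mean of the predictions by the sample mean of the observed residuals. A minimal illustration with hypothetical numbers, assuming simple random sampling (not the paper's exact estimator):

```python
def model_assisted_mean(pred_pop, y_sample, pred_sample):
    """Difference estimator: mean of model predictions over the population
    plus the sample mean of observed-minus-predicted residuals."""
    resid_mean = sum(y - p for y, p in zip(y_sample, pred_sample)) / len(y_sample)
    return sum(pred_pop) / len(pred_pop) + resid_mean

# Hypothetical AGB predictions (t/ha) for every population unit from a
# LiDAR model, plus field observations on a small sample of those units.
pred_pop = [10.0, 12.0, 14.0, 16.0, 13.0]
estimate = model_assisted_mean(pred_pop,
                               y_sample=[11.0, 15.0],
                               pred_sample=[10.0, 14.0])
```

The residual correction keeps the estimator approximately design-unbiased even when the model is off; a good model shrinks the residuals and hence the variance, which is the source of the 40-60% standard-error reductions reported below.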
Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.
Li, Xianwei; Gao, Huijun
2015-10-01
Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.
NASA Astrophysics Data System (ADS)
Pankhurst, M. J.; Fowler, R.; Courtois, L.; Nonni, S.; Zuddas, F.; Atwood, R. C.; Davis, G. R.; Lee, P. D.
2018-01-01
We present new software allowing significantly improved quantitative mapping of the three-dimensional density distribution of objects using laboratory-source polychromatic X-rays via a beam characterisation approach (cf. filtering or comparison to phantoms). One key advantage is that a precise representation of the specimen material is not required. The method exploits well-established, widely available, non-destructive and increasingly accessible laboratory-source X-ray tomography. Beam characterisation is performed in two stages: (1) projection data are collected through a range of known materials utilising a novel hardware design integrated into the rotation stage; and (2) a Python code optimises a spectral response model of the system. We provide hardware designs for use with a rotation stage able to be tilted, yet the concept is easily adaptable to virtually any laboratory system and sample, and implicitly corrects the image artefact known as beam hardening.
Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood
Bondell, Howard D.; Stefanski, Leonard A.
2013-01-01
Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove the maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805
Preconcentrator with high volume chiller for high vapor pressure particle detection
Linker, Kevin L
2013-10-22
Apparatus and method for collecting particles of both high and low vapor pressure target materials entrained in a large volume sample gas stream. Large volume active cooling provides a cold air supply which is mixed with the sample gas stream to reduce the vapor pressure of the particles. In embodiments, a chiller cools air from ambient conditions to 0-15 °C, with the volumetric flow rate of the cold air supply being at least equal to the volumetric flow rate of the sample gas stream. In further embodiments an adsorption media is heated in at least two stages, the first of which is below a threshold temperature at which decomposition products of the high vapor pressure particles are generated.
Raina, Sunil Kumar; Mengi, Vijay; Singh, Gurdeep
2012-07-01
Breast feeding is universally and traditionally practised in India, and experts advocate breast feeding as the best method of feeding young infants. The objective was to assess the role of various factors in determining colostrum feeding in block R. S. Pura of district Jammu. A stratified two-stage design was used, with villages as the primary sampling unit and lactating mothers as the secondary sampling unit. Villages were divided into different clusters on the basis of population, and sampling units were selected by a simple random technique. Breastfeeding is almost universal in R. S. Pura. Differentials in discarding the first milk were not found to be important among various socioeconomic groups, and the phenomenon appeared more general than specific.
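The two-stage selection described here (villages as primary units, lactating mothers as secondary units) can be sketched as follows; the sampling frame, unit counts and seed are hypothetical:

```python
import random

def two_stage_sample(frame, n_psu, n_ssu, seed=42):
    """Stage 1: simple random sample of primary units (villages).
    Stage 2: simple random sample of secondary units (mothers) within each."""
    rng = random.Random(seed)
    villages = rng.sample(sorted(frame), n_psu)
    return {v: rng.sample(frame[v], min(n_ssu, len(frame[v]))) for v in villages}

# Hypothetical frame: village -> list of lactating-mother IDs.
frame = {f"village_{i}": [f"m{i}_{j}" for j in range(30)] for i in range(12)}
sample = two_stage_sample(frame, n_psu=4, n_ssu=10)
```

Sampling in two stages keeps fieldwork concentrated in a few villages while every mother in the frame retains a known, nonzero selection probability, which is what makes design-based estimation possible.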
A Modified Sparse Representation Method for Facial Expression Recognition.
Wang, Wei; Xu, LiHong
2016-01-01
In this paper, we carry out research on a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of adopting the dictionary directly from samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise in a self-built database and in Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition performance and time efficiency. Simulation results show that the coefficients of the MSRR method contain classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result.
PMID:26880878
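The greedy pursuit at the heart of OMP and stOMP can be illustrated with plain matching pursuit, a simplified relative: OMP additionally re-fits all selected atoms by least squares at each step, and stOMP accepts several atoms per stage to speed convergence. A minimal sketch with a toy orthonormal dictionary (not the paper's implementation):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(D, y, n_iter):
    """Greedily decompose y over unit-norm atoms D, one atom per iteration."""
    residual = list(y)
    coeffs = {}
    for _ in range(n_iter):
        # pick the atom most correlated with the current residual
        best = max(range(len(D)), key=lambda j: abs(dot(D[j], residual)))
        c = dot(D[best], residual)  # atoms assumed unit-norm
        coeffs[best] = coeffs.get(best, 0.0) + c
        residual = [r - c * d for r, d in zip(residual, D[best])]
    return coeffs, residual

# Orthonormal toy dictionary: pursuit recovers the exact sparse code.
D = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
coeffs, residual = matching_pursuit(D, [3.0, 0.0, 4.0], n_iter=2)
```

For an orthonormal dictionary the greedy pass is exact; with the coherent, learned dictionaries used in practice the least-squares update of OMP matters, which is why the abstract's stOMP variant trades per-step optimality for speed.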
Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.
Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J
2015-06-15
Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four randomly located plots of 0.16 m2 with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444) L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha.
The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance component = 6.2), rather than due to pasture (variance component = 0.55) or season (variance component = 0.15). Using the observed distribution of L3, the required sample size (i.e. number of plots per pasture) for sampling a pasture through random plots with a particular precision was simulated. A higher relative precision was achieved when estimating pasture larval contamination on pastures with a high larval contamination and a low level of aggregation compared to pastures with a low larval contamination when the same sample size was applied. In the future, herbage sampling through random plots across pasture (method 2) seems a promising method to develop further, as no significant difference in counts between the methods was found and this method was less time consuming. Copyright © 2015 Elsevier B.V. All rights reserved.
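The sample-size behaviour reported above can be approximated with the textbook negative-binomial formula (a standard device for aggregated counts, not necessarily the authors' simulation): with mean count m per plot and aggregation parameter k, the number of plots n needed for a relative standard error D of the mean is n = (1/m + 1/k) / D^2. A sketch with hypothetical values:

```python
import math

def plots_needed(mean_l3, k, rel_se):
    """Plots required so that SE(mean)/mean <= rel_se, assuming counts follow
    a negative binomial with mean mean_l3 and aggregation parameter k
    (smaller k = stronger aggregation)."""
    return math.ceil((1.0 / mean_l3 + 1.0 / k) / rel_se ** 2)

high = plots_needed(mean_l3=300.0, k=1.0, rel_se=0.2)  # heavy, weakly aggregated
low = plots_needed(mean_l3=5.0, k=0.5, rel_se=0.2)     # sparse, more aggregated
```

As in the study, a heavily contaminated, weakly aggregated pasture needs far fewer plots than a sparse, strongly aggregated one for the same relative precision, because the 1/k aggregation term dominates once counts are clustered.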
Three-dimensional thermographic imaging using a virtual wave concept
NASA Astrophysics Data System (ADS)
Burgholzer, Peter; Thor, Michael; Gruber, Jürgen; Mayr, Günther
2017-03-01
In this work, it is shown that image reconstruction methods from ultrasonic imaging can be employed for thermographic signals. Before using these imaging methods, a virtual signal is calculated by applying a local transformation to the temperature evolution measured on a sample surface. The introduced transformation describes all the irreversibility of the heat diffusion process and can be used for every sample shape. To date, one-dimensional methods have been primarily used in thermographic imaging. The proposed two-stage algorithm enables reconstruction in two and three dimensions. The feasibility of this approach is demonstrated through simulations and experiments. For the latter, small steel beads embedded in an epoxy resin are imaged. The resolution limit is found to be proportional to the depth of the structures and inversely proportional to the logarithm of the signal-to-noise ratio. Limited-view artefacts can arise if the measurement is performed on a single planar detection surface. These artefacts can be reduced by measuring the thermographic signals from multiple planes, which is demonstrated by numerical simulations and by experiments performed on an epoxy cube.
Gukas, Isaac D.; Girling, Anne C.; Mandong, Barnabas. M.; Prime, Wendy; Jennings, Barbara A.; Leinster, Samuel J.
2008-01-01
Background: Some studies have suggested that breast cancer in black women is more aggressive than in white women. This study's aim was to look for evidence of differences in tumour biology between the two cohorts. Methods: This study compared the stage, grade and pathological expression of five immunohistochemical markers (oestrogen receptor [ER], progesterone receptor [PR], ERBB2, P53 and cyclin D1 [CCND1]) in tumour biopsies from age-matched cohorts of patients from Nigeria and England. Sixty-eight suitable samples from Nigerian (n = 34) and British (n = 34) breast cancer patients were retrieved from histology tissue banks. Results: There were significant differences between the two cohorts in the expression of ER and CCND1, and stark differences in the clinical stage at presentation, but no significant differences were observed for tumour grade. Conclusion: There was significantly lower ER expression in the Nigerian cases, which predicts a poor response to hormonal therapy as well as a poorer prognosis. Differences in clinical stage at presentation will most likely influence prognosis between Nigerian and British women with breast cancer. PMID:21892296
Konecsni, Kelly; Scheller, Cheryl; Scandrett, Brad; Buholzer, Patrik; Gajadhar, Alvin
2017-08-30
The artificial digestion magnetic stirrer method using pepsin protease and hydrochloric acid is the standard assay for the detection of Trichinella larvae in muscle of infected animals. Recently, an alternative enzyme, serine protease, was employed in the development of a commercially available digestion kit (PrioCHECK™ Trichinella AAD Kit). This assay requires a higher digestion temperature of 60°C which kills the larvae during the digestion process, mitigating the risk of environmental contamination from the parasite. The present study was conducted to determine the performance of the PrioCHECK™ Trichinella AAD Kit compared to the conventional pepsin/HCl digestion. Replicate paired 115g samples of Trichinella-negative pork diaphragm and masseter, and of horse tongue and masseter, were used to compare the two methods for tissue digestibility. Similarly, paired 100g samples of pork diaphragm and horse tongue were spiked with proficiency samples containing known numbers of Trichinella spiralis first stage larvae to compare larval recoveries for the two methods. Masseter samples from wild bears and wolves naturally infected with Trichinella nativa or T6 were also used to compare the performance of the methods. The results of the study showed that the PrioCHECK™ Trichinella AAD Kit, when used according to the manufacturer's instructions, was effective in detecting Trichinella infection in all samples that contained 0.05 or more larvae per gram of tissue. Although there was no significant difference between the Kit method and the standard pepsin/HCl digestion procedure in the average number of larvae recovered from spiked pork diaphragm, 38% fewer larvae were recovered from similarly spiked samples of horse tongue by digestion using serine protease (one way ANOVA, P value <0.001). Additional clarification was also more often required for both horse meat and pork when using the Kit compared to the pepsin/HCl method. 
The results of testing wildlife samples were similar for the two methods. Overall, the performance of the Kit method was suitable for the digestion of muscle samples and recovery of Trichinella larvae, according to international standards. It also provides advantages of faster digestion, safer reagents and recovered parasites that are non-hazardous for analysts and the environment. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.
Field Analysis of Microbial Contamination Using Three Molecular Methods in Parallel
NASA Technical Reports Server (NTRS)
Morris, H.; Stimpson, E.; Schenk, A.; Kish, A.; Damon, M.; Monaco, L.; Wainwright, N.; Steele, A.
2010-01-01
Advanced technologies with the capability of detecting microbial contamination remain an integral tool for the next stage of space agency proposed exploration missions. To maintain a clean, operational spacecraft environment with minimal potential for forward contamination, such technology is a necessity, particularly, the ability to analyze samples near the point of collection and in real-time both for conducting biological scientific experiments and for performing routine monitoring operations. Multiple molecular methods for detecting microbial contamination are available, but many are either too large or not validated for use on spacecraft. Two methods, the adenosine triphosphate (ATP) and Limulus Amebocyte Lysate (LAL) assays have been approved by the NASA Planetary Protection Office for the assessment of microbial contamination on spacecraft surfaces. We present the first parallel field analysis of microbial contamination pre- and post-cleaning using these two methods as well as universal primer-based polymerase chain reaction (PCR).
Alum, Absar; Rock, Channah; Abbaszadegan, Morteza
2014-01-01
For land application, biosolids are classified as Class A or Class B based on the levels of bacterial, viral, and helminth pathogens in residual biosolids. The current EPA methods for the detection of these groups of pathogens in biosolids involve discrete steps; a separate sample is therefore processed independently to quantify the number of each group of pathogens in biosolids. The aim of the study was to develop a unified method for simultaneous processing of a single biosolids sample to recover bacterial, viral, and helminth pathogens. In the first stage of developing a simultaneous method, nine eluents were compared for their efficiency in recovering viruses from a 100 g spiked biosolids sample. In the second stage, the three top-performing eluents were thoroughly evaluated for the recovery of bacteria, viruses, and helminths. For all three groups of pathogens, the glycine-based eluent provided higher recovery than the beef extract-based eluent. Additional experiments were performed to optimize the performance of the glycine-based eluent under various procedural factors such as solids-to-eluent ratio, stir time, and centrifugation conditions. Finally, the new method was directly compared with the EPA methods for the recovery of the three groups of pathogens spiked in duplicate samples of biosolids collected from different sources. For viruses, the new method yielded up to 10% higher recoveries than the EPA method. For bacteria and helminths, recoveries were 74% and 83% by the new method compared to 34% and 68% by the EPA method, respectively. The unified sample processing method significantly reduces the time required for processing biosolids samples for different groups of pathogens; it is less affected by the intrinsic variability of samples, while providing higher yields (P = 0.05) and greater consistency than the current EPA methods.
Chung, Kyung Hoon; Lo, Lun-Jou
2018-05-01
Both one- and two-stage approaches have been widely used for patients with asymmetric bilateral cleft lip. There are insufficient long-term outcome data for comparison of these two methods. The purpose of this retrospective study was to compare the clinical outcome over the past 20 years. The senior author's (L.J.L.) database was searched for patients with asymmetric bilateral cleft lip from 1995 to 2015. Qualified patients were divided into two groups: one-stage and two-stage. The postoperative photographs of patients were evaluated subjectively by surgical professionals and laypersons. Ratios of the nasolabial region were calculated for objective analysis. Finally, the revision procedures in the nasolabial area were reviewed. Statistical analyses were performed. A total of 95 consecutive patients were qualified for evaluation. Average follow-up was 13.1 years. A two-stage method was used in 35 percent of the patients, and a one-stage approach was used in 65 percent. All underwent primary nasal reconstruction. Among the satisfaction rating scores, the one-stage repair was rated significantly higher than two-stage reconstruction (p = 0.0001). Long-term outcomes of the two-stage patients and the unrepaired mini-microform deformities were unsatisfactory according to both professional and nonprofessional evaluators. The revision rate was higher in patients with a greater-side complete cleft lip and palate as compared with those without palatal involvement. The results suggested that one-stage repair provided better results with regard to achieving a more symmetric and smooth lip and nose after primary reconstruction. The revision rate was slightly higher in the two-stage patient group. Therapeutic, III.
Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui
2015-10-30
Electroencephalography (EEG)-based sleep staging is commonly used in clinical routine. Feature extraction and representation play a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method used as a classifier. Here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which the JCR and joint sparse representation (JSR) algorithms first fuse and learn feature representations from multi-channel EEG signals. Multi-view JCR and JSR features are then integrated, and sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering-based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning-based sleep staging framework outperforms all other classification methods compared in this work, and JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.
Assessing Reliability of Medical Record Reviews for the Detection of Hospital Adverse Events.
Ock, Minsu; Lee, Sang-il; Jo, Min-Woo; Lee, Jin Yong; Kim, Seon-Ha
2015-09-01
The purpose of this study was to assess the inter-rater and intra-rater reliability of medical record review for the detection of hospital adverse events. We conducted a two-stage retrospective medical record review of a random sample of 96 patients from one acute-care general hospital. The first stage was an explicit patient record review by two nurses to detect the presence of 41 screening criteria (SC). The second stage was an implicit structured review by two physicians to identify the occurrence of adverse events in the cases positive on the SC. The inter-rater reliability of the two nurses and that of the two physicians were assessed. The intra-rater reliability was also evaluated using the test-retest method approximately two weeks later. In 84.2% of the patient medical records, the nurses agreed as to the necessity for the second-stage review (kappa, 0.68; 95% confidence interval [CI], 0.54 to 0.83). In 93.0% of the patient medical records screened by nurses, the physicians agreed about the absence or presence of adverse events (kappa, 0.71; 95% CI, 0.44 to 0.97). When assessing intra-rater reliability, the kappa indices of the two nurses were 0.54 (95% CI, 0.31 to 0.77) and 0.67 (95% CI, 0.47 to 0.87), whereas those of the two physicians were 0.87 (95% CI, 0.62 to 1.00) and 0.37 (95% CI, -0.16 to 0.89). In this study, the medical record review for detecting adverse events showed an intermediate to good level of inter-rater and intra-rater reliability. A well-organized training program for reviewers and clearly defined SC are required to obtain more reliable results in hospital adverse event studies.
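The Cohen's kappa statistic used throughout this reliability assessment can be sketched in a few lines of Python; the paired ratings below are hypothetical, not the study's data:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for paired categorical ratings by two raters."""
    assert len(a) == len(b)
    n = len(a)
    categories = sorted(set(a) | set(b))
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n)        # chance agreement
             for c in categories)
    return (po - pe) / (1 - pe)

# hypothetical screening decisions (1 = positive, 0 = negative)
rater1 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
rater2 = [1, 0, 0, 0, 1, 0, 1, 0, 1, 1]
kappa = cohens_kappa(rater1, rater2)
```

Here the raters agree on 8 of 10 records (po = 0.8) while 0.5 agreement is expected by chance, giving kappa = (0.8 - 0.5) / (1 - 0.5) = 0.6, in the same band as several of the indices reported above.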
NASA Technical Reports Server (NTRS)
Kim, B. F.; Moorjani, K.; Phillips, T. E.; Adrian, F. J.; Bohandy, J.; Dolecek, Q. E.
1993-01-01
A method for characterization of granular superconducting thin films has been developed which encompasses both the morphological state of the sample and its fabrication process parameters. The broad scope of this technique is due to the synergism between experimental measurements and their interpretation using numerical simulation. Two novel technologies form the substance of this system: the magnetically modulated resistance method for characterizing superconductors, and a powerful new computer peripheral, the Parallel Information Processor card, which provides enhanced computing capability for PC computers. This enhancement allows PC computers to operate at speeds approaching those of supercomputers, making atomic-scale simulations possible on low-cost machines. The present development of this system involves the integration of these two technologies using mesoscale simulations of thin film growth. A future stage of development will incorporate atomic-scale modeling.
Methodological proposal for the remediation of a site affected by phosphogypsum deposits
NASA Astrophysics Data System (ADS)
Martínez-Sanchez, M. J.; Perez-Sirvent, C.; Bolivar, J. P.; Garcia-Tenorio, R.
2012-04-01
The accumulation of phosphogypsum (PG) produces well-known environmental problems. Proposals for the remediation of these sites require multidisciplinary and very specific studies. Since such sites cover large areas, a sampling design specifically outlined for each case is necessary so that the contaminants, transfer pathways, and particular processes can be correctly identified. In addition to suitable sampling of the soil, aquatic medium, and biota, appropriate studies of the spatio-temporal variations by means of control samples are required. Two different stages should be considered: 1. Diagnostic stage. This stage includes preliminary studies, identification of possible sources of radioisotopes, design of the appropriate sampling plan, a hydrogeological study, characterization and study of the spatio-temporal variability of radioisotopes and other contaminants, as well as risk assessment for health and ecosystems, which depends on the future use of the site. 2. Remediation proposal stage. This comprises the evaluation and comparison of the different procedures for decontamination/remediation, including model experiments in the laboratory. In this respect, the preparation and detailed study of a small-scale pilot project is a task of particular relevance; in this way the suitability of the remediating technology can be checked and its performance optimized. These two stages allow a technically well-founded proposal to be presented to the organisms or institutions in charge of the problem and facilitate decision-making. Both stages should be accompanied by a social communication campaign so that the final proposal is accepted by stakeholders.
Extending Vulnerability Assessment to Include Life Stages Considerations
Hodgson, Emma E.; Essington, Timothy E.; Kaplan, Isaac C.
2016-01-01
Species are experiencing a suite of novel stressors from anthropogenic activities that have impacts at multiple scales. Vulnerability assessment is one tool to evaluate the likely impacts that these stressors pose to species so that high-vulnerability cases can be identified and prioritized for monitoring, protection, or mitigation. Commonly used semi-quantitative methods lack a framework to explicitly account for differences in exposure to stressors and organism responses across life stages. Here we propose a modification to commonly used spatial vulnerability assessment methods that includes such an approach, using ocean acidification in the California Current as an illustrative case study. Life stage considerations were included by assessing vulnerability of each life stage to ocean acidification and were used to estimate population vulnerability in two ways. We set population vulnerability equal to: (1) the maximum stage vulnerability and (2) a weighted mean across all stages, with weights calculated using Lefkovitch matrix models. Vulnerability was found to vary across life stages for the six species explored in this case study: two krill–Euphausia pacifica and Thysanoessa spinifera, pteropod–Limacina helicina, pink shrimp–Pandalus jordani, Dungeness crab–Metacarcinus magister and Pacific hake–Merluccius productus. The maximum vulnerability estimates ranged from larval to subadult and adult stages with no consistent stage having maximum vulnerability across species. Similarly, integrated vulnerability metrics varied greatly across species. A comparison showed that some species had vulnerabilities that were similar between the two metrics, while other species’ vulnerabilities varied substantially between the two metrics. These differences primarily resulted from cases where the most vulnerable stage had a low relative weight. We compare these methods and explore circumstances where each method may be appropriate. PMID:27416031
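The two population-level metrics described, the maximum stage vulnerability and a weighted mean with matrix-model weights, can be sketched as follows. All stage scores and matrix entries below are hypothetical; the study derives its weights from Lefkovitch matrix models, and the stable stage distribution is used here as one plausible weight choice:

```python
import numpy as np

# hypothetical per-stage vulnerability scores: larva, juvenile, adult
v = np.array([0.8, 0.4, 0.6])

# Metric 1: population vulnerability as the maximum stage vulnerability
v_max = float(v.max())

# Metric 2: weighted mean across stages. A Lefkovitch (stage-structured)
# projection matrix supplies the weights; here the dominant eigenvector
# (stable stage distribution) is used as a stand-in weighting.
L = np.array([[0.0, 0.0, 5.0],    # adult fecundity
              [0.3, 0.4, 0.0],    # survival / transition probabilities
              [0.0, 0.2, 0.8]])
eigvals, eigvecs = np.linalg.eig(L)
w = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
w /= w.sum()                      # normalize to a weight vector
v_weighted = float(w @ v)
```

When the most vulnerable stage carries little weight in the matrix model, v_weighted sits well below v_max, which is exactly the divergence between the two metrics discussed above.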
A 9-Bit 50 MSPS Quadrature Parallel Pipeline ADC for Communication Receiver Application
NASA Astrophysics Data System (ADS)
Roy, Sounak; Banerjee, Swapna
2018-03-01
This paper presents the design and implementation of a pipeline Analog-to-Digital Converter (ADC) for superheterodyne receiver application. Several enhancement techniques have been applied in implementing the ADC in order to relax the target specifications of its building blocks. The concepts of time interleaving and double sampling have been used simultaneously to enhance the sampling speed and to reduce the number of amplifiers used in the ADC. Removal of a front-end sample-and-hold amplifier is possible by employing dynamic comparators with switched-capacitor-based comparison of input signal and reference voltage. Each module of the ADC comprises two 2.5-bit stages followed by two 1.5-bit stages and a 3-bit flash stage. Four such pipeline ADC modules are time interleaved using two pairs of non-overlapping clock signals. These two pairs of clock signals are in phase quadrature with each other, hence the term quadrature parallel pipeline ADC. These configurations ensure that the entire ADC contains only eight operational transconductance amplifiers. The ADC is implemented in a 0.18-μm CMOS process with a supply voltage of 1.8 V. The prototype is tested at sampling frequencies of 50 and 75 MSPS, producing an Effective Number of Bits (ENOB) of 6.86 and 6.11 bits, respectively. At peak sampling speed, the core ADC consumes only 65 mW of power.
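The ENOB figures quoted above relate to the measured signal-to-noise-and-distortion ratio (SINAD) through the standard ideal-ADC formula SINAD = 6.02·ENOB + 1.76 dB; a minimal sketch:

```python
def enob(sinad_db):
    """Effective number of bits implied by a measured SINAD (in dB)."""
    return (sinad_db - 1.76) / 6.02

def required_sinad(enob_bits):
    """Inverse: the SINAD an ADC must achieve for a target ENOB."""
    return 6.02 * enob_bits + 1.76
```

An ENOB of 6.86 bits, as reported at 50 MSPS, corresponds to a SINAD of about 43.1 dB, well below the 55.9 dB an ideal 9-bit converter would deliver.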
Determining skeletal maturation using insulin-like growth factor I (IGF-I) test.
Gupta, Shreya; Jain, Sandhya; Gupta, Puneet; Deoskar, Anuradha
2012-11-01
To investigate the validity of Insulin-like Growth Factor-1 (IGF-1) as a skeletal maturity indicator by comparing serum IGF-1 levels with the stages of cervical vertebral maturation (CVM) and of the middle phalanx of the third finger (MP3). The study population was selected using a simple random sampling technique and consisted of 30 female subjects in the age range of 8-23 years who had a blood sample, cephalometric, and MP3 radiographs taken on the same day. Serum IGF-1 estimation was carried out on the blood samples using a chemiluminescence immunoassay (CLIA) method. CVM was evaluated using the method of Baccetti et al., and MP3 staging was done using the Rajagopal and Kansal method. Mean IGF-1 levels between stages were compared by Kruskal-Wallis and Mann-Whitney tests. Serum IGF-1 levels in females correlate well with skeletal maturity determined by CVM and MP3 stages and increase sharply during early pubertal stages, followed by a decrease in late puberty. In addition, we hypothesize that serum IGF-1 testing can be undertaken as a preliminary screening test in patients in whom the orthodontist predicts the possibility of using a myofunctional appliance but in whom the chronologic age is not suggestive for growth modification therapy. The findings of the study highlight the fact that serum IGF-1 estimation can be a valuable tool in assessing skeletal maturation. Copyright © 2012 Società Italiana di Ortodonzia SIDO. Published by Elsevier Srl. All rights reserved.
Chen, Jing; Hu, Bin; Wang, Yue; Moore, Philip; Dai, Yongqiang; Feng, Lei; Ding, Zhijie
2017-12-20
Collaboration between humans and computers has become pervasive and ubiquitous; however, current computer systems are limited in that they fail to address the emotional component. An accurate understanding of human emotions is necessary for these computers to trigger proper feedback. Among multiple emotional channels, physiological signals are synchronous with emotional responses; therefore, analyzing physiological changes is a recognized way to estimate human emotions. In this paper, a three-stage decision method is proposed to recognize four emotions based on physiological signals in the multi-subject context. Emotion detection is achieved by using a stage-divided strategy in which each stage deals with a fine-grained goal. The decision method consists of three stages. During the training process, the initial stage transforms mixed training subjects into separate groups, thus eliminating the effect of individual differences. The second stage categorizes the four emotions into two emotion pools in order to reduce recognition complexity. The third stage trains a classifier based on the emotions in each emotion pool. During the testing process, a test trial is initially classified into a group, followed by classification into an emotion pool in the second stage. An emotion is assigned to the test trial in the final stage. In this paper we consider two different ways of allocating the four emotions into two emotion pools. A comparative analysis is also carried out between the proposal and other methods. An average recognition accuracy of 77.57% was achieved on the recognition of four emotions, with the best accuracy of 86.67% in recognizing the positive and excited emotion. Using differing ways of allocating the four emotions into two emotion pools, we found there is a difference in the effectiveness of a classifier in learning each emotion.
When compared to other methods, the proposed method demonstrates a significant improvement in recognizing four emotions in the multi-subject context. The proposed three-stage decision method addresses a crucial issue, individual differences, in multi-subject emotion recognition and overcomes the suboptimal performance associated with direct classification of multiple emotions. Our study supports the observation that the proposed method represents a promising methodology for recognizing multiple emotions in the multi-subject context.
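The stage-divided strategy can be illustrated with a minimal sketch in which nearest-centroid classifiers stand in for the trained models at each stage; all centroids, group names, and emotion labels below are hypothetical placeholders, not the paper's actual classifiers:

```python
import numpy as np

def nearest(x, centroids):
    """Pick the key of the closest centroid (stand-in classifier)."""
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

# Stage 1: subject groups (eliminates individual differences)
groups = {'groupA': np.array([0.0, 0.0]), 'groupB': np.array([5.0, 5.0])}
# Stage 2: per-group emotion pools (reduces recognition complexity)
pools = {'groupA': {'positive': np.array([0.5, 0.0]),
                    'negative': np.array([-0.5, 0.0])},
         'groupB': {'positive': np.array([5.5, 5.0]),
                    'negative': np.array([4.5, 5.0])}}
# Stage 3: per-pool emotions
emotions = {'positive': {'happy': np.array([0.6, 0.2]),
                         'excited': np.array([0.6, -0.2])},
            'negative': {'sad': np.array([-0.6, 0.2]),
                         'angry': np.array([-0.6, -0.2])}}

def classify(x):
    g = nearest(x, groups)           # stage 1: assign subject group
    p = nearest(x, pools[g])         # stage 2: assign emotion pool
    return nearest(x, emotions[p])   # stage 3: assign final emotion
```

Each stage narrows a coarse decision into a finer one, which is the point of the stage-divided strategy: no single classifier ever faces the full four-way, multi-subject problem at once.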
The in vitro effect of Ferula asafoetida and Allium sativum extracts on Strongylus spp.
Tavassoli, Mousa; Jalilzadeh-Amin, Ghader; Fard, Vahid R. Besharati; Esfandiarpour, Rahim
2018-01-01
The high incidence of equine gastrointestinal worms and their increased resistance against anthelmintics has encouraged research into the effectiveness of rational phytotherapy. This study investigates the in vitro anti-parasitic effects of extracts of Ferula asafoetida and Allium sativum, two native plants widespread in Iran, on Strongylus spp. larvae. Faecal samples were collected from horses and examined by routine parasitology methods, and positive samples were used for further examination. After incubation, the third-stage larvae were harvested by the Baermann technique. A hydroalcoholic extract from the plants was used for the antiparasitic study, while tap water was used for controls. Trials for each concentration and control group were performed in three replicates. The results showed that during the first day of exposure, the hydroalcoholic extract of F. asafoetida at concentrations of 10, 50 and 100 mg/ml killed over 90% of the larvae, and A. sativum extract at concentrations of 50 and 100 mg/ml killed over 95% of the larvae (p<0.05). The results obtained from the bioassay showed that the two plant extracts have a larvicidal effect on the Strongylus spp. larval stages compared with the control group.
Technology Development Risk Assessment for Space Transportation Systems
NASA Technical Reports Server (NTRS)
Mathias, Donovan L.; Godsell, Aga M.; Go, Susie
2006-01-01
A new approach for assessing development risk associated with technology development projects is presented. The method represents technology evolution in terms of sector-specific discrete development stages. A Monte Carlo simulation is used to generate development probability distributions based on statistical models of the discrete transitions. Development risk is derived from the resulting probability distributions and specific program requirements. Two sample cases are discussed to illustrate the approach, a single rocket engine development and a three-technology space transportation portfolio.
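The approach described, discrete stage transitions sampled by Monte Carlo and compared against a program requirement, can be sketched as follows; the transition probabilities, deadline, and geometric waiting-time model are illustrative assumptions, not the paper's calibrated statistics:

```python
import random

# hypothetical per-year probability of completing each remaining
# discrete development-stage transition
p_advance = [0.7, 0.5, 0.4, 0.6]
DEADLINE_YEARS = 10     # hypothetical program requirement

def years_to_mature(rng):
    """Years until the technology passes all stage transitions
    (each transition modeled as a geometric waiting time)."""
    years = 0
    for p in p_advance:
        while True:
            years += 1
            if rng.random() < p:
                break
    return years

rng = random.Random(42)
N = 20000
samples = [years_to_mature(rng) for _ in range(N)]
# development risk: probability the technology misses the deadline
risk = sum(y > DEADLINE_YEARS for y in samples) / N
```

For a portfolio, the same loop runs once per technology and the program-level risk combines the per-technology distributions.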
Comparison of water-quality samples collected by siphon samplers and automatic samplers in Wisconsin
Graczyk, David J.; Robertson, Dale M.; Rose, William J.; Steur, Jeffrey J.
2000-01-01
In small streams, flow and water-quality concentrations often change quickly in response to meteorological events. Hydrologists, field technicians, or locally hired stream observers involved in water-data collection are often unable to reach streams quickly enough to observe or measure these rapid changes. Therefore, in hydrologic studies designed to describe changes in water quality, a combination of manual and automated sampling methods has commonly been used: manual methods when flow is relatively stable and automated methods when flow is rapidly changing. Automated sampling, which makes use of equipment programmed to collect samples in response to changes in stage and flow of a stream, has been shown to be an effective method of sampling to describe rapid changes in water quality (Graczyk and others, 1993). Because of the high cost of automated sampling, however, especially for studies examining a large number of sites, alternative methods have been considered for collecting samples during rapidly changing stream conditions. One such method employs the siphon sampler (fig. 1), also referred to as the "single-stage sampler." Siphon samplers are inexpensive to build (about $25-$50 per sampler), operate, and maintain, so they are cost effective to use at a large number of sites. Their ability to collect samples representing the average quality of water passing through the entire cross section of a stream, however, has not been fully demonstrated for many types of stream sites.
Shirai, Hiroki; Ikeda, Kazuyoshi; Yamashita, Kazuo; Tsuchiya, Yuko; Sarmiento, Jamica; Liang, Shide; Morokata, Tatsuaki; Mizuguchi, Kenji; Higo, Junichi; Standley, Daron M; Nakamura, Haruki
2014-08-01
In the second antibody modeling assessment, we used a semiautomated template-based structure modeling approach for 11 blinded antibody variable region (Fv) targets. The structural modeling method involved several steps, including template selection for framework and canonical structures of complementarity-determining regions (CDRs), homology modeling, energy minimization, and expert inspection. The submitted models for Fv modeling in Stage 1 had the lowest average backbone root mean square deviation (RMSD) (1.06 Å). Comparison to crystal structures showed the most accurate Fv models were generated for 4 out of 11 targets. We found that the successful modeling in Stage 1 was mainly due to expert-guided template selection for CDRs, especially for CDR-H3, based on our previously proposed empirical method (H3-rules) and the use of position-specific scoring matrix-based scoring. Loop refinement using fragment assembly and multicanonical molecular dynamics (McMD) was applied to CDR-H3 loop modeling in Stage 2. Fragment assembly and McMD produced putative structural ensembles with low free energy values that were scored based on the OSCAR all-atom force field and conformation density in principal component analysis space, respectively, as well as the degree of consensus between the two sampling methods. The quality of 8 out of 10 targets improved as compared with Stage 1. For 4 out of 10 Stage-2 targets, our method generated top-scoring models with RMSD values of less than 1 Å. In this article, we discuss the strengths and weaknesses of our approach as well as possible directions for improvement to generate better predictions in the future. © 2014 Wiley Periodicals, Inc.
Lightdrum—Portable Light Stage for Accurate BTF Measurement on Site
Havran, Vlastimil; Hošek, Jan; Němcová, Šárka; Čáp, Jiří; Bittner, Jiří
2017-01-01
We propose a miniaturised light stage for measuring the bidirectional reflectance distribution function (BRDF) and the bidirectional texture function (BTF) of surfaces on site in real-world application scenarios. The main principle of our lightweight BTF acquisition gantry is a compact hemispherical skeleton with cameras along the meridian and with light emitting diode (LED) modules shining light onto a sample surface. The proposed device is portable and achieves a high speed of measurement while maintaining a high degree of accuracy. While the positions of the LEDs are fixed on the hemisphere, the cameras allow us to cover the range of the zenith angle from 0° to 75°, and by rotating the cameras along the axis of the hemisphere we can cover all possible camera directions. This allows us to take measurements with almost the same quality as existing stationary BTF gantries. Two degrees of freedom can be set arbitrarily for measurements and the other two degrees of freedom are fixed, which provides a tradeoff between accuracy of measurements and practical applicability. Assuming that a measured sample is locally flat and spatially accessible, we can set the correct perpendicular direction against the measured sample by means of an auto-collimator prior to measuring. Further, we have designed and used a marker sticker method to allow for the easy rectification and alignment of acquired images during data processing. We show the results of our approach by images rendered for 36 measured material samples. PMID:28241466
NASA Technical Reports Server (NTRS)
Johnson, J. R. (Principal Investigator)
1974-01-01
The author has identified the following significant results. A broad-scale vegetation classification was developed for a 3,200 sq mile area in southeastern Arizona. The 31 vegetation types were derived from association tables which contained information taken at about 500 ground sites. The classification provided an information base that was suitable for use with small-scale photography. A procedure was developed and tested for objectively comparing photo images. The procedure consisted of two parts: image groupability testing and image complexity testing. The Apollo and ERTS photos were compared for relative suitability as first-stage stratification bases in two-stage proportional probability sampling. High-altitude photography was used in common at the second stage.
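Two-stage proportional probability sampling of the kind used for stratification here can be sketched as follows; the unit sizes, sample counts, and site indices are hypothetical:

```python
import random

rng = random.Random(1)

# Stage 1: primary units (e.g., photo strata) with size measures;
# units are drawn with probability proportional to size, with
# replacement (ppswr), one simple PPS scheme.
sizes = [40, 25, 10, 15, 10]
units = list(range(len(sizes)))
n_primary = 2
stage1 = rng.choices(units, weights=sizes, k=n_primary)

# Stage 2: within each selected primary unit, a simple random
# subsample of ground sites (here indexed 0..99) is drawn.
n_secondary = 5
stage2 = {u: rng.sample(range(100), n_secondary) for u in set(stage1)}
```

Larger strata are proportionally more likely to enter the first-stage sample, which is what makes a first-stage stratification base (such as the satellite imagery) useful.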
Identifying High-Rate Flows Based on Sequential Sampling
NASA Astrophysics Data System (ADS)
Zhang, Yu; Fang, Binxing; Luo, Hao
We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement, and network security, such as the detection of distributed denial-of-service attacks. It is difficult to directly identify high-rate flows in backbone links because tracking the possible millions of flows requires correspondingly large high-speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique, which is also implemented in Cisco routers (NetFlow), is adopted. Ideally, a high-rate flow identification method should have short identification time, low memory cost, and low processing cost. Most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first method is based on a fixed sample size test (FSST) which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. Therefore a second novel method based on the truncated sequential probability ratio test (TSPRT) is proposed. Through sequential sampling, TSPRT is able to remove low-rate flows and identify high-rate flows at an early stage, which reduces the memory cost and identification time, respectively. According to the way the parameters in TSPRT are determined, two versions of TSPRT are proposed: TSPRT-M, which is suitable when low memory cost is preferred, and TSPRT-T, which is suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement as compared to previously proposed methods.
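The core of a sequential probability ratio test for flow classification can be sketched as below. This is a generic Wald SPRT on 1-out-of-k packet samples, not the paper's truncated variant (TSPRT additionally caps the number of samples); the rate thresholds and error targets are illustrative:

```python
import math

def sprt_step(samples_seen, hits, p0, p1, alpha=0.01, beta=0.01):
    """One decision step of a Wald SPRT on Bernoulli observations.
    H0: the flow's share of sampled packets is p0 (low-rate);
    H1: it is p1 (high-rate). `hits` of `samples_seen` sampled
    packets belonged to this flow. Returns 'high', 'low' or 'continue'.
    """
    # log-likelihood ratio of the observed sample sequence
    llr = (hits * math.log(p1 / p0)
           + (samples_seen - hits) * math.log((1 - p1) / (1 - p0)))
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    if llr >= upper:
        return 'high'
    if llr <= lower:
        return 'low'
    return 'continue'
```

With p0 = 0.01 and p1 = 0.1, a flow holding 30 of the first 100 samples is declared high-rate, while a flow never sampled is dismissed early; this early removal of low-rate flows is what lets sequential sampling cut memory cost and identification time.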
Batistatou, Evridiki; McNamee, Roseanne
2012-12-10
It is known that measurement error leads to bias in assessing exposure effects, which can, however, be corrected if independent replicates are available. For expensive replicates, two-stage (2S) studies that produce data 'missing by design' may be preferred over a single-stage (1S) study, because in the second stage measurement of replicates is restricted to a sample of first-stage subjects. Motivated by an occupational study on the acute effect of carbon black exposure on respiratory morbidity, we compare the performance of several bias-correction methods for both designs in a simulation study: an instrumental variable method (EVROS IV) based on grouping strategies, which had been recommended especially when measurement error is large, the regression calibration method, and the simulation extrapolation method. For the 2S design, either the problem of 'missing' data was ignored or the 'missing' data were imputed using multiple imputation. In both 1S and 2S designs, in the case of small or moderate measurement error, regression calibration was shown to be the preferred approach in terms of root mean square error. For 2S designs, regression calibration as implemented by Stata software is not recommended, in contrast to our implementation of this method; the 'problematic' implementation of regression calibration was, however, substantially improved with the use of multiple imputation. The EVROS IV method, under a good/fairly good grouping, outperforms the regression calibration approach in both design scenarios when exposure mismeasurement is severe. In both 1S and 2S designs with moderate or large measurement error, simulation extrapolation severely failed to correct for bias. Copyright © 2012 John Wiley & Sons, Ltd.
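The regression calibration idea, replace the error-prone measurement by an estimate of the true exposure built from replicate data, and the attenuation it corrects, can be sketched on simulated data (all variances, sample sizes, and the true slope of 2 are illustrative choices, not the study's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(0.0, 1.0, n)                  # true exposure (unobserved)
y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)  # outcome; true slope = 2
w1 = x + rng.normal(0.0, 0.8, n)             # replicate 1 (error-prone)
w2 = x + rng.normal(0.0, 0.8, n)             # replicate 2 (error-prone)

# naive analysis: regress y on one mismeasured exposure -> attenuated
naive_slope = np.polyfit(w1, y, 1)[0]

# regression calibration: estimate E[X | W] from the replicates
wbar = (w1 + w2) / 2
var_u = np.var(w1 - w2, ddof=1) / 2          # per-replicate error variance
var_x = np.var(wbar, ddof=1) - var_u / 2     # true-exposure variance
lam = var_x / (var_x + var_u / 2)            # reliability of the mean
xhat = wbar.mean() + lam * (wbar - wbar.mean())
corrected_slope = np.polyfit(xhat, y, 1)[0]
```

With these settings the naive slope attenuates toward roughly 2/(1 + 0.64) ≈ 1.22, while the calibrated slope recovers a value near the true 2.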
ERIC Educational Resources Information Center
Miyamoto, S.; Nakayama, K.
1983-01-01
A method of two-stage clustering of literature based on citation frequency is applied to 5,065 articles from 57 journals in environmental and civil engineering. Results of related methods of citation analysis (hierarchical graph, clustering of journals, multidimensional scaling) applied to same set of articles are compared. Ten references are…
Corrosion in Magnesium and a Magnesium Alloy
NASA Astrophysics Data System (ADS)
Akavipat, Sanay
Magnesium and a magnesium alloy (AZ91C) have been ion implanted over a range of ion energies (50 to 150 keV) and doses (1 × 10^16 to 2 × 10^17 ions/cm^2) to modify the corrosion properties of the metals. The corrosion tests were done by anodic polarization in chloride-free and chloride-containing aqueous solutions of a borate-boric acid with a pH of 9.3. Anodic polarization measurements showed that some implantations could greatly reduce the corrosion current densities at all impressed voltages and also slightly increased the pitting potential, which indicated the onset of chloride attack. These improvements in corrosion resistance were caused by boron implantation into both types of samples. However, iron implantation was found to improve only the magnesium alloy. To study the corrosion in more detail, Scanning Auger Microprobe Spectrometer (SAM), Scanning Electron Microscope (SEM) with an X-ray Energy Spectrometry (XES) attachment, and Transmission Electron Microscope (TEM) measurements were used to analyze samples before, after, and at various corrosion stages. In both the unimplanted pure magnesium and AZ91C samples, anodic polarization results revealed three active corrosion stages (Stages A, C, and E) and two passivating stages (Stages B and D). Examination of Stages A and B in both types of samples showed that only a mild, generalized corrosion had occurred. In Stage C of the pure magnesium samples, a pitting breakdown of the initial oxide film was observed. In Stage C of the AZ91C samples, galvanic and intergranular attack around the Mg17Al12 intermetallic islands and along the matrix grain boundaries was observed. Stage D of both samples showed the formation of a thick, passivating, oxygen-containing film, probably Mg(OH)2. In Stage E, this film was broken down by pits, which formed due to the presence of chloride ions in both types of samples.
Stages A through D of the unimplanted samples were not seen in the boron- or iron-implanted samples. Instead, one low-current-density passivating stage was formed, which was ultimately broken down by chloride attack. It is believed that the implantation of boron modified the initial surface film to inhibit corrosion, whereas the iron implantation modified the intermetallic (Mg17Al12) islands to act as sacrificial anodes.
Preparations of PbSe quantum dots in silicate glasses by a melt-annealing technique
NASA Astrophysics Data System (ADS)
Ma, D. W.; Cheng, C.; Zhang, Y. N.; Xu, Z. S.
2014-11-01
Silicate glass containing PbSe quantum dots (QDs) has important prospective applications in near-infrared optoelectronic devices. In this study, single-stage and double-stage heat-treatment methods were used to prepare PbSe QDs in silicate glasses. The results show that double-stage heat-treatment is a favorable method for synthesizing PbSe QDs with strong photoluminescence (PL) intensity and a narrow full width at half maximum (FWHM) of the PL peak. The preparation of PbSe QDs was therefore focused on the double-stage heat-treatment. Transmission electron microscopy measurements show that the standard deviations of the average QD sizes from the samples heat-treated at a development temperature of 550 °C fluctuate slightly in the range of 0.6-0.8 nm, while this deviation increases up to 1.2 nm for the sample with a development temperature of 600 °C. In addition, the linear relationship between QD size and holding time indicates that the crystallization behavior of PbSe QDs in silicate glasses is interface-controlled growth in the early stage of crystallization. The growth rates of PbSe QDs are determined to be 0.24 nm/h at 550 °C and 0.72 nm/h at 600 °C. In short, double-stage heat-treatment at 450 °C for 20 h followed by heat-treatment at 550 °C for 5 h is a preferred process for the crystallization of PbSe QDs in silicate glass. Through this treatment, PbSe QDs with a narrow size dispersion of 5.0 ± 0.6 nm can be obtained; the PL peak from this sample is the highest in intensity and narrowest in FWHM among all samples, and is centered at 1575 nm, very close to the most common wavelength of 1550 nm in fiber-optic communication systems.
Lippolis, Vincenzo; Ferrara, Massimo; Cervellieri, Salvatore; Damascelli, Anna; Epifani, Filomena; Pascale, Michelangelo; Perrone, Giancarlo
2016-02-02
The availability of rapid diagnostic methods for monitoring ochratoxigenic species during the seasoning processes for dry-cured meats is crucial and constitutes a key stage in preventing the risk of ochratoxin A (OTA) contamination. A rapid, easy-to-perform and non-invasive method using an electronic nose (e-nose) based on metal oxide semiconductors (MOS) was developed to discriminate dry-cured meat samples into two classes based on fungal contamination: class P (samples contaminated by OTA-producing Penicillium strains) and class NP (samples contaminated by OTA non-producing Penicillium strains). Two OTA-producing strains of Penicillium nordicum and two OTA non-producing strains of Penicillium nalgiovense and Penicillium salamii were tested. The feasibility of this approach was initially evaluated by e-nose analysis of 480 samples of both yeast extract sucrose (YES) and meat-based agar media inoculated with the tested Penicillium strains and incubated for up to 14 days. The high recognition percentages (higher than 82%) obtained by Discriminant Function Analysis (DFA), both in calibration and in cross-validation (leave-more-out approach), for both YES and meat-based samples demonstrated the validity of the approach. The e-nose method was subsequently developed and validated for the analysis of dry-cured meat samples. A total of 240 e-nose analyses were carried out using inoculated sausages, seasoned by a laboratory-scale process and sampled at 5, 7, 10 and 14 days. DFA provided calibration models that permitted discrimination of dry-cured meat samples after only 5 days of seasoning, with mean recognition percentages in calibration and cross-validation of 98 and 88%, respectively. A further validation of the developed e-nose method was performed using 60 dry-cured meat samples produced by an industrial-scale seasoning process, showing a total recognition percentage of 73%.
The pattern of volatile compounds of dry-cured meat samples was identified and characterized by a purpose-developed HS-SPME/GC-MS method. Seven volatile compounds (2-methyl-1-butanol, octane, 1R-α-pinene, d-limonene, undecane, tetradecanal, 9-(Z)-octadecenoic acid methyl ester) allowed discrimination between dry-cured meat samples of classes P and NP. These results demonstrate that a MOS-based electronic nose can be a useful tool for rapid screening to prevent OTA contamination in the cured-meat supply chain. Copyright © 2015 Elsevier B.V. All rights reserved.
Ivanova, Anastasia; Tamura, Roy N
2015-12-01
A new clinical trial design, designated the two-way enriched design (TED), is introduced, which augments the standard randomized placebo-controlled trial with second-stage enrichment designs in placebo non-responders and drug responders. The trial is run in two stages. In the first stage, patients are randomized between drug and placebo. In the second stage, placebo non-responders are re-randomized between drug and placebo and drug responders are re-randomized between drug and placebo. All first-stage data, and second-stage data from first-stage placebo non-responders and first-stage drug responders, are utilized in the efficacy analysis. The authors developed one, two and three degrees of freedom score tests for treatment effect in the TED and give formulae for asymptotic power and for sample size computations. The authors compute the optimal allocation ratio between drug and placebo in the first stage for the TED and compare the operating characteristics of the design to the standard parallel clinical trial, placebo lead-in and randomized withdrawal designs. Two motivating examples from different disease areas are presented to illustrate the possible design considerations. © The Author(s) 2011.
[Island flap in the surgical treatment of hypospadias].
Austoni, E; Mantovani, F; Colombo, F; Fenice, O; Mastromarino, G; Vecchio, D; Canclini, L
1994-06-01
Surgery of hypospadias represents an interesting field for innovative ideas. Many methods may be suitable and many modifications can be made; there is no single method for all kinds of hypospadias, and it is necessary to find the right method for each patient. The result often depends upon the experience of the surgeon with a particular method. The choice between straightening and urethroplasty in one or two stages depends on the cost-benefit ratio; the long-term evolution of the straightening must be taken into account, as well as the tissue consumption imposed by the urethroplasty, since one-stage straightening makes reintervention very difficult. In the latter case, a multi-stage operation will be necessary, with flaps for urethroplasty after the straightening or, in a more developed penis, a shortening operation according to Nesbit. With the two-stage method, a relapsed curvature can easily be treated if tissue is available. For a good result of urethroplasty, the skill of the surgeon, constant calibration of the canal, plenty of elastic tissue for the neo-urethra, and care not to suture on these planes are highly important. In our opinion, Duplay's method meets these requisites. Two-stage surgery allows easy correction of any relapsing curvature, with no problems for the subsequent urethroplasty. One-stage surgery allows the problem to be resolved in a single surgical step, but involves the risk of tissue consumption and proximal stricture.
August, Gerald J.; Piehler, Timothy F.; Bloomquist, Michael L.
2014-01-01
OBJECTIVE The development of adaptive treatment strategies (ATS) represents the next step in innovating conduct problems prevention programs within a juvenile diversion context. Towards this goal, we present the theoretical rationale, associated methods, and anticipated challenges for a feasibility pilot study in preparation for implementing a full-scale SMART (i.e., sequential, multiple assignment, randomized trial) for conduct problems prevention. The role of a SMART design in constructing ATS is presented. METHOD The SMART feasibility pilot study includes a sample of 100 youth (13–17 years of age) identified by law enforcement as early stage offenders and referred for pre-court juvenile diversion programming. Prior data on the sample population detail a high level of ethnic diversity and approximately equal representation of both genders. Within the SMART, youth and their families are first randomly assigned to one of two different brief, evidence-based prevention programs, featuring parent-focused behavioral management or youth-focused strengths-building components. Youth who do not respond sufficiently to brief first-stage programming will be randomly assigned a second time to either extended parent-focused or extended youth-focused second-stage programming. Measures of proximal intervention response and potential candidate tailoring variables for developing ATS within this sample are detailed. RESULTS Results of the described pilot study will include information regarding feasibility and acceptability of the SMART design. This information will be used to refine a subsequent full-scale SMART. CONCLUSIONS The use of a SMART to develop ATS for prevention will increase the efficiency and effectiveness of prevention programming for youth with developing conduct problems. PMID:25256135
A Comparison of IRT Proficiency Estimation Methods under Adaptive Multistage Testing
ERIC Educational Resources Information Center
Kim, Sooyeon; Moses, Tim; Yoo, Hanwook
2015-01-01
This inquiry is an investigation of item response theory (IRT) proficiency estimators' accuracy under multistage testing (MST). We chose a two-stage MST design that includes four modules (one at Stage 1, three at Stage 2) and three difficulty paths (low, middle, high). We assembled various two-stage MST panels (i.e., forms) by manipulating two…
USDA-ARS?s Scientific Manuscript database
Calves weaned using a two-stage method where nursing is prevented between cow-calf pairs prior to separation (Stage 1) experience less weaning stress after separation (Stage 2) based on behavior and growth measures. The aim of this study was to document changes in various physiological measures of s...
NASA Astrophysics Data System (ADS)
Sheikholeslami, R.; Hosseini, N.; Razavi, S.
2016-12-01
Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models many times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; by contrast, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. The method therefore preserves the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
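The core idea, adding a new slice of points so that the union of all slices remains a Latin hypercube, can be sketched with a simple doubling scheme: in every dimension, the new slice fills exactly the fine strata left empty by the existing points. This is an illustrative construction in the spirit of PLHS, not the authors' algorithm (which supports arbitrary slice counts and optimizes space-filling criteria); sizes and the random seed are arbitrary.

```python
import numpy as np

def lhs(n, d, rng):
    """Plain Latin hypercube sample: one point per 1/n stratum in each dimension."""
    perm = np.argsort(rng.random((n, d)), axis=0)   # random permutation per column
    return (perm + rng.random((n, d))) / n

def refine(pts, rng):
    """Add len(pts) new points so the union is a Latin hypercube on 2n strata.

    In each dimension the old points occupy one of the two fine strata inside
    every coarse 1/n stratum; the new slice takes the other one, so the new
    slice is itself a Latin hypercube on the n coarse strata.
    """
    n, d = pts.shape
    new = np.empty_like(pts)
    for j in range(d):
        occupied = np.floor(pts[:, j] * 2 * n).astype(int)
        free = np.setdiff1d(np.arange(2 * n), occupied)  # one per coarse stratum
        rng.shuffle(free)
        new[:, j] = (free + rng.random(n)) / (2 * n)
    return new

rng = np.random.default_rng(0)
a = lhs(8, 3, rng)     # first slice: LHS on 8 strata
b = refine(a, rng)     # second slice; np.vstack([a, b]) is LHS on 16 strata
```

The key property to check is the one PLHS guarantees: each slice is stratified at its own resolution, and the union is stratified at the combined resolution.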
Srivastava, Ashutosh K.; Rai, Satyajeet; Srivastava, M. K.; Lohani, M.; Mudiam, M. K. R.; Srivastava, L. P.
2014-01-01
A total of 162 samples of different varieties of mango (Deshehari, Langra and Safeda) in three growing stages (pre-mature, unripe and ripe) were collected from Lucknow, India, and analyzed for the presence of seventeen organophosphate pesticide residues. The QuEChERS (Quick, Easy, Cheap, Effective, Rugged and Safe) method of extraction coupled with gas chromatography was validated for the pesticides, with qualitative confirmation by gas chromatography-mass spectrometry. The method was validated with different concentrations of a mixture of seventeen organophosphate pesticides (0.05, 0.10, 0.50 mg kg−1) in mango. The average recovery varied from 70.20% to 95.25% with less than 10% relative standard deviation. The limit of quantification of the different pesticides ranged from 0.007 to 0.033 mg kg−1. Of the seventeen organophosphate pesticides, only malathion and chlorpyriphos were detected; approximately 20% of the mango samples showed the presence of these two pesticides. The malathion residues ranged from ND to 1.407 mg kg−1 and chlorpyriphos from ND to 0.313 mg kg−1, well below the maximum residue limit (PFA-1954). In the three varieties of mango at different stages, going from unpeeled to peeled samples, the reduction of malathion and chlorpyriphos ranged from 35.48%–100% and 46.66%–100%, respectively. The estimated daily intake of malathion ranged from 0.032 to 0.121 µg kg−1 body weight and that of chlorpyriphos from zero to 0.022 µg kg−1 body weight across the three stages of mango. The hazard indices ranged from 0.0015 to 0.0060 for malathion and from zero to 0.0022 for chlorpyriphos. Seasonal consumption of these three varieties of mango therefore may not pose any health hazard for the population of Lucknow city, India, because the hazard indices for malathion and chlorpyriphos residues were below one. PMID:24809911
Visualization and quantification of three-dimensional distribution of yeast in bread dough.
Maeda, Tatsuro; DO, Gab-Soo; Sugiyama, Junichi; Araki, Tetsuya; Tsuta, Mizuki; Shiraga, Seizaburo; Ueda, Mitsuyoshi; Yamada, Masaharu; Takeya, Koji; Sagara, Yasuyuki
2009-07-01
A three-dimensional (3-D) bio-imaging technique was developed for visualizing and quantifying the 3-D distribution of yeast in frozen bread dough samples over the course of the mixing process, applying cell-surface engineering to the surfaces of the yeast cells. The fluorescent yeast was recognized as bright spots at a wavelength of 520 nm. Frozen dough samples were sliced at intervals of 1 microm by a micro-slicer image processing system (MSIPS) equipped with a fluorescence microscope for acquiring cross-sectional images of the samples. A set of successive two-dimensional images was reconstructed to analyze the 3-D distribution of the yeast. The average shortest distance between centroids of enhanced green fluorescent protein (EGFP) yeasts was 10.7 microm at the pick-up stage, 9.7 microm at the clean-up stage, 9.0 microm at the final stage, and 10.2 microm at the over-mixing stage. The results indicated that the distribution of the yeast cells was most uniform in the dough of white bread at the final stage, while the heterogeneous distribution at the over-mixing stage was possibly due to the destruction of the gluten network structure within the samples.
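The "average shortest distance between centroids" reported above is a standard mean nearest-neighbour distance; a minimal sketch of computing it from a set of 3-D centroid coordinates (brute force, adequate for a few thousand cells; the example coordinates are made up for illustration):

```python
import numpy as np

def mean_nn_distance(pts):
    """Mean distance from each point to its nearest neighbour (pts: n x 3 array)."""
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)   # exclude each point's zero self-distance
    return dist.min(axis=1).mean()

# three centroids on a line: nearest-neighbour distances are 1, 1 and 2,
# so the mean is (1 + 1 + 2) / 3
pts = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 3.0]])
```

A lower mean nearest-neighbour distance for a fixed cell count indicates a more uniform spatial distribution, which is the quantity tracked across mixing stages.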
Gao, Liang; Chen, Xiangfei; Xiong, Jintian; Liu, Shengchun; Pu, Tao
2012-01-30
Based on the reconstruction-equivalent-chirp (REC) technique, a novel solution for fabricating low-cost long fiber Bragg gratings (FBGs) with desired properties is proposed and initially studied. A proof-of-concept experiment is successfully demonstrated with two conventional uniform phase masks and a submicron-precision translation stage. It is shown that the original phase shift (OPS) caused by the phase mismatch of the two phase masks can be compensated by the equivalent phase shift (EPS) at the ±1st channels of sampled FBGs, separately. Furthermore, as an example, a π phase-shifted FBG of about 90 mm is fabricated using these two 50 mm-long uniform phase masks based on the presented method.
NASA Astrophysics Data System (ADS)
Filippa, Gianluca; Cremonese, Edoardo; Galvagno, Marta; Migliavacca, Mirco; Morra di Cella, Umberto; Petey, Martina; Siniscalco, Consolata
2015-12-01
The increasingly important effect of climate change and extremes on alpine phenology highlights the need to establish accurate monitoring methods to track inter-annual variation (IAV) and long-term trends in plant phenology. We evaluated four different indices of phenological development (two for plant productivity, i.e., green biomass and leaf area index; two for plant greenness, i.e., greenness from visual inspection and from digital images) from a 5-year monitoring of ecosystem phenology, here defined as the seasonal development of the grassland canopy, in a subalpine grassland site (NW Alps). Our aim was to establish an effective observation strategy that enables the detection of shifts in grassland phenology in response to climate trends and meteorological extremes. The seasonal development of the vegetation at this site appears strongly controlled by snowmelt mostly in its first stages and to a lesser extent in the overall development trajectory. All indices were able to detect an anomalous beginning of the growing season in 2011 due to an exceptionally early snowmelt, whereas only some of them revealed a later beginning of the growing season in 2013 due to a late snowmelt. A method is developed to derive the number of samples that maximise the trade-off between sampling effort and accuracy in IAV detection in the context of long-term phenology monitoring programmes. Results show that spring phenology requires a smaller number of samples than autumn phenology to track a given target of IAV. Additionally, productivity indices (leaf area index and green biomass) have a higher sampling requirement than greenness derived from visual estimation and from the analysis of digital images. Of the latter two, the analysis of digital images stands out as the more effective, rapid and objective method to detect IAV in vegetation development.
Monitoring multiple components in vinegar fermentation using Raman spectroscopy.
Uysal, Reyhan Selin; Soykut, Esra Acar; Boyaci, Ismail Hakki; Topcu, Ali
2013-12-15
In this study, the utility of Raman spectroscopy (RS) with chemometric methods for quantification of multiple components in a fermentation process was investigated. Vinegar, the product of a two-stage fermentation, was used as a model, and glucose and fructose consumption, ethanol production and consumption, and acetic acid production were followed using RS and the partial least squares (PLS) method. Calibration of the PLS method was performed using model solutions. The prediction capability of the method was then investigated with both model and real samples, with HPLC used as a reference method. Comparison of RS-PLS and HPLC showed good correlations between predicted and actual sample values for glucose (R(2)=0.973), fructose (R(2)=0.988), ethanol (R(2)=0.996) and acetic acid (R(2)=0.983). In conclusion, a combination of RS with chemometric methods can be applied to monitor multiple components of a fermentation process from start to finish with a single measurement in a short time. Copyright © 2013 Elsevier Ltd. All rights reserved.
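PLS calibration of the kind described here can be sketched with a small NIPALS-style PLS1 implementation applied to synthetic "spectra" (a linear mixture of component signatures plus noise). The peak shapes, noise level and component count below are illustrative assumptions, not the paper's data, and this handles one response at a time rather than all four analytes jointly.

```python
import numpy as np

def pls1_fit(X, y, ncomp):
    """NIPALS-style PLS1: returns regression coefficients B and intercept b0."""
    xm, ym = X.mean(axis=0), y.mean()
    Xk, yk = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(ncomp):
        w = Xk.T @ yk                 # weight vector from X-y covariance
        w /= np.linalg.norm(w)
        t = Xk @ w                    # scores
        tt = t @ t
        p = Xk.T @ t / tt             # X loadings
        c = (yk @ t) / tt             # y loading
        Xk = Xk - np.outer(t, p)      # deflate X
        yk = yk - c * t               # deflate y
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, ym - xm @ B

def pls1_predict(X, B, b0):
    return X @ B + b0
```

With spectra simulated as a concentration matrix times component signatures plus noise, a three-component model recovers the concentration of a single analyte with high R², mirroring the RS-PLS calibration workflow.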
Véliz, Pedro L; Berra, Esperanza M; Jorna, Ana R
2015-07-01
INTRODUCTION Medical specialties' core curricula should take into account functions to be carried out, positions to be filled and populations to be served. The functions in the professional profile for specialty training of Cuban intensive care and emergency medicine specialists do not include all the activities that they actually perform in professional practice. OBJECTIVE Define the specific functions and procedural skills required of Cuban specialists in intensive care and emergency medicine. METHODS The study was conducted from April 2011 to September 2013. A three-stage methodological strategy was designed using qualitative techniques. By purposive maximum variation sampling, 82 professionals were selected. Documentary analysis and key informant criteria were used in the first stage. Two expert groups were formed in the second stage: one used various group techniques (focus group, oral and written brainstorming) and the second used a three-round Delphi method. In the final stage, a third group of experts was questioned in semistructured in-depth interviews, and a two-round Delphi method was employed to assess priorities. RESULTS Ultimately, 78 specific functions were defined: 47 (60.3%) patient care, 16 (20.5%) managerial, 6 (7.7%) teaching, and 9 (11.5%) research. Thirty-one procedural skills were identified. The specific functions and procedural skills defined relate to the profession's requirements in clinical care of the critically ill, management of patient services, teaching and research at the specialist's different occupational levels. CONCLUSIONS The specific functions and procedural skills required of intensive care and emergency medicine specialists were precisely identified by a scientific method. This product is key to improving the quality of teaching, research, administration and patient care in this specialty in Cuba. 
The specific functions and procedural skills identified are theoretical, practical, methodological and social contributions to inform future curricular reform and to help intensive care specialists enhance their performance in comprehensive patient care. KEYWORDS Intensive care, urgent care, emergency medicine, continuing medical education, curriculum, diagnostic techniques and procedures, medical residency, Cuba.
Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L
2015-02-01
Uncertainties exist in water resources systems, while traditional two-stage stochastic programming is risk-neutral and considers only the expected value of the random variables (e.g., total benefit) to identify the best decisions. To deal with risk issues, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid of interval-parameter programming, a conditional value-at-risk measure, and a general two-stage stochastic programming framework. The method extends the traditional two-stage stochastic programming method by enabling uncertainties presented as probability density functions and discrete intervals to be effectively incorporated within the optimization framework. It can not only provide information on the benefits of the allocation plan to decision makers but also measure the extreme expected loss on the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that the model could help managers generate feasible and balanced risk-aversion allocation plans, and analyze the trade-offs between system stability and economy.
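The risk-neutral versus risk-averse trade-off the model captures can be illustrated with a toy two-stage water allocation problem solved by enumeration: a first-stage allocation target earns a unit benefit, any shortage under each availability scenario incurs a larger second-stage penalty, and risk aversion adds a CVaR term on the loss (in the discrete Rockafellar-Uryasev form). All numbers are illustrative assumptions, and the interval-parameter part of the authors' model is omitted for brevity.

```python
import numpy as np

b, c = 5.0, 8.0                  # unit benefit; unit shortage penalty (c > b)
w = np.array([3.0, 5.0, 7.0])    # water availability scenarios
p = np.array([0.3, 0.5, 0.2])    # scenario probabilities
alpha = 0.9                      # CVaR confidence level

def cvar(loss, p, alpha):
    # discrete Rockafellar-Uryasev CVaR: minimum over candidate thresholds z,
    # which for a discrete distribution is attained at a loss atom (or zero)
    cands = np.append(loss, 0.0)
    return min(z + p @ np.maximum(loss - z, 0.0) / (1 - alpha) for z in cands)

def best_target(lam):
    """Maximize expected profit minus lam * CVaR(loss) over a grid of targets."""
    targets = np.linspace(0.0, 8.0, 801)
    def score(x):
        shortage = np.maximum(0.0, x - w)   # second-stage recourse per scenario
        profit = b * x - c * shortage
        return p @ profit - lam * cvar(c * shortage, p, alpha)
    return max(targets, key=score)
```

The risk-neutral plan (`lam = 0`) allocates up to the point where the marginal expected penalty outweighs the benefit; a risk-averse planner (`lam > 0`) retreats toward the safest scenario, trading expected benefit for a smaller extreme second-stage loss.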
Residential Two-Stage Gas Furnaces - Do They Save Energy?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lekov, Alex; Franco, Victor; Lutz, James
2006-05-12
Residential two-stage gas furnaces account for almost a quarter of the total number of models listed in the March 2005 GAMA directory of equipment certified for sale in the United States. Two-stage furnaces are expanding their presence in the market mostly because they meet consumer expectations for improved comfort. Currently, the U.S. Department of Energy (DOE) test procedure serves as the method for reporting furnace total fuel and electricity consumption under laboratory conditions. In 2006, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) proposed an update to its test procedure which corrects some of the discrepancies found in the DOE test procedure and provides an improved methodology for calculating the energy consumption of two-stage furnaces. The objectives of this paper are to explore the differences in the methods for calculating two-stage residential gas furnace energy consumption in the DOE test procedure and in the 2006 ASHRAE test procedure and to compare test results to research results from field tests. Overall, the DOE test procedure shows a reduction in the total site energy consumption of about 3 percent for two-stage compared to single-stage furnaces at the same efficiency level. In contrast, the 2006 ASHRAE test procedure shows almost no difference in the total site energy consumption. The 2006 ASHRAE test procedure appears to provide a better methodology for calculating the energy consumption of two-stage furnaces. The results indicate that, although two-stage technology by itself does not save site energy, the combination of two-stage furnaces with BPM motors provides electricity savings, which are confirmed by field studies.
Chang, Min; Li, Yongchao; Angeles, Reginald; Khan, Samina; Chen, Lian; Kaplan, Julia; Yang, Liyu
2011-08-01
Two approaches to monitoring matrix effects on ionization in study samples are described. One approach is the addition of multiple reaction monitoring transitions to the bioanalytical method to monitor the presence of known ionization-modifying components of the matrix; for example, m/z 184→125 (or m/z 184→184) and m/z 133→89 may be used for phospholipids and polyethylene oxide-containing surfactants, respectively. This approach requires no additional equipment and can be readily adapted for most methods. It detects only the intended interfering compounds, however, and provides little quantitative indication of whether the matrix effect is within the tolerable range (±15%). The other approach requires the addition of an infusion pump and the identification of an appropriate surrogate of the analyte to be infused for determining modification of the ionization of the analyte. The second approach detects interferences in the sample regardless of their source (i.e., dosing vehicle components, co-administered drugs and their metabolites, phospholipids, plasticizers and endogenous components introduced by the disease state).
Power system frequency estimation based on an orthogonal decomposition method
NASA Astrophysics Data System (ADS)
Lee, Chih-Hung; Tsai, Men-Shen
2018-06-01
In recent years, several techniques have been proposed to estimate frequency variations in power systems. In order to properly identify power quality issues in asynchronously sampled signals that are contaminated with noise, flicker, and harmonic and inter-harmonic components, a good frequency estimator is needed that can precisely estimate both the frequency and the rate of frequency change. However, accurately estimating the fundamental frequency becomes a very difficult task without a priori information about the sampling frequency. In this paper, a better frequency evaluation scheme for power systems is proposed. This method employs a reconstruction technique in combination with orthogonal filters, which maintains the required frequency characteristics of the orthogonal filters and improves the overall efficiency of power system monitoring through two-stage sliding discrete Fourier transforms. The results showed that this method can accurately estimate the power system frequency under different conditions, including asynchronously sampled signals contaminated by noise, flicker, and harmonic and inter-harmonic components. The proposed approach also provides high computational efficiency.
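The two-stage sliding DFT idea builds on the standard single-bin sliding-DFT recursion. Below is a minimal one-stage, one-bin sketch that estimates the frequency of a sinusoid from the per-sample phase advance of the bin; the window length, test frequency and bin-selection heuristic are illustrative assumptions, and the paper's method additionally combines this with orthogonal-filter reconstruction.

```python
import numpy as np

def sliding_dft_bin(x, N, k):
    """k-th DFT bin of every length-N window of x, via the sliding-DFT
    recursion S(n+1) = (S(n) - x[n] + x[n+N]) * exp(j*2*pi*k/N)."""
    rot = np.exp(2j * np.pi * k / N)
    S = np.empty(len(x) - N + 1, dtype=complex)
    S[0] = np.sum(x[:N] * np.exp(-2j * np.pi * k * np.arange(N) / N))
    for n in range(len(S) - 1):
        S[n + 1] = (S[n] - x[n] + x[n + N]) * rot
    return S

def estimate_freq(x, fs, N):
    """Estimate a sinusoid's frequency from the bin's per-sample phase advance."""
    k = int(np.argmax(np.abs(np.fft.rfft(x[:N]))))   # coarse pick of dominant bin
    S = sliding_dft_bin(x, N, k)
    dphi = np.angle(S[1:] * np.conj(S[:-1]))         # phase advance per sample
    return dphi.mean() * fs / (2 * np.pi)
```

Averaging the phase increment over many window positions suppresses the oscillating leakage error from the negative-frequency image, giving sub-bin resolution even when the tone falls between DFT bins, as with asynchronous sampling.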
Automatic Polyp Detection via A Novel Unified Bottom-up and Top-down Saliency Approach.
Yuan, Yixuan; Li, Dengwang; Meng, Max Q-H
2017-07-31
In this paper, we propose a novel automatic computer-aided method to detect polyps in colonoscopy videos. To find the perceptually and semantically meaningful salient polyp regions, we first segment images into multilevel superpixels, with each level corresponding to a different superpixel size. Rather than adopting hand-designed features to describe these superpixels, we employ a sparse autoencoder (SAE) to learn discriminative features in an unsupervised way. A novel unified bottom-up and top-down saliency method is then proposed to detect polyps. In the first stage, we compute a weak bottom-up (WBU) saliency map by fusing contrast-based saliency and object-center-based saliency. The contrast-based saliency map highlights image parts whose appearance differs from the surrounding areas, while the object-center-based saliency map emphasizes the center of the salient object. In the second stage, a strong classifier with Multiple Kernel Boosting (MKB) is learned to calculate the strong top-down (STD) saliency map from samples drawn directly from the multilevel WBU saliency maps. We finally integrate the saliency maps of the two stages across all levels to highlight polyps. Experimental results achieve a recall of 0.818 for saliency calculation, validating the effectiveness of our method. Extensive experiments on public polyp datasets demonstrate that the proposed saliency algorithm performs favorably against state-of-the-art saliency methods for polyp detection.
Systems Biology and Ratio-Based, Real-Time Disease Surveillance.
Fair, J M; Rivas, A L
2015-08-01
Most infectious disease surveillance methods are not well suited to early detection. To address this limitation, we evaluated a ratio- and Systems Biology-based method that does not require prior knowledge of the identity of an infective agent. Using a reference group of birds experimentally infected with West Nile virus (WNV) and a problem group of unknown health status (except that they were WNV-negative and displayed inflammation), both groups were followed over 22 days and tested with a system that analyses blood leucocyte ratios. To test the ability of the method to discriminate small data sets, both the reference group (n = 5) and the problem group (n = 4) were small. The questions of interest were as follows: (i) whether individuals presenting inflammation (disease-positive or D+) can be distinguished from non-inflamed (disease-negative or D-) birds, (ii) whether two or more D+ stages can be detected and (iii) whether sample size influences detection. Within the problem group, the ratio-based method distinguished the following: (i) three (one D- and two D+) data classes; (ii) two (early and late) inflammatory stages; (iii) fast versus regular or slow responders; and (iv) individuals that recovered from those that remained inflamed. Because ratios differed in larger magnitudes (up to 48 times larger) than percentages, data patterns are more likely to be recognized when disease surveillance methods are designed to measure inflammation and utilize ratios. Published 2013. This article is a U.S. Government work and is in the public domain in the USA.
Vodnick, David James; Dwivedi, Arpit; Keranen, Lucas Paul; Okerlund, Michael David; Schmitz, Roger William; Warren, Oden Lee; Young, Christopher David
2014-07-08
An automated testing system includes systems and methods to facilitate inline production testing of samples at a micro (multiple microns) or less scale with a mechanical testing instrument. In an example, the system includes a probe changing assembly for coupling and decoupling a probe of the instrument. The probe changing assembly includes a probe change unit configured to grasp one of a plurality of probes in a probe magazine and couple one of the probes with an instrument probe receptacle. An actuator is coupled with the probe change unit, and the actuator is configured to move and align the probe change unit with the probe magazine and the instrument probe receptacle. In another example, the automated testing system includes a multiple degree of freedom stage for aligning a sample testing location with the instrument. The stage includes a sample stage and a stage actuator assembly including translational and rotational actuators.
Vodnick, David James; Dwivedi, Arpit; Keranen, Lucas Paul; Okerlund, Michael David; Schmitz, Roger William; Warren, Oden Lee; Young, Christopher David
2015-01-27
An automated testing system includes systems and methods to facilitate inline production testing of samples at a micro (multiple microns) or less scale with a mechanical testing instrument. In an example, the system includes a probe changing assembly for coupling and decoupling a probe of the instrument. The probe changing assembly includes a probe change unit configured to grasp one of a plurality of probes in a probe magazine and couple one of the probes with an instrument probe receptacle. An actuator is coupled with the probe change unit, and the actuator is configured to move and align the probe change unit with the probe magazine and the instrument probe receptacle. In another example, the automated testing system includes a multiple degree of freedom stage for aligning a sample testing location with the instrument. The stage includes a sample stage and a stage actuator assembly including translational and rotational actuators.
Vodnick, David James; Dwivedi, Arpit; Keranen, Lucas Paul; Okerlund, Michael David; Schmitz, Roger William; Warren, Oden Lee; Young, Christopher David
2015-02-24
An automated testing system includes systems and methods to facilitate inline production testing of samples at a micro (multiple microns) or less scale with a mechanical testing instrument. In an example, the system includes a probe changing assembly for coupling and decoupling a probe of the instrument. The probe changing assembly includes a probe change unit configured to grasp one of a plurality of probes in a probe magazine and couple one of the probes with an instrument probe receptacle. An actuator is coupled with the probe change unit, and the actuator is configured to move and align the probe change unit with the probe magazine and the instrument probe receptacle. In another example, the automated testing system includes a multiple degree of freedom stage for aligning a sample testing location with the instrument. The stage includes a sample stage and a stage actuator assembly including translational and rotational actuators.
Effects of metal surface grinding at the porcelain try-in stage of fixed dental prostheses
Kesim, Bülent; Gümüş, Hasan Önder; Dinçel, Mehmet; Erkaya, Selçuk
2014-01-01
PURPOSE This study aimed to evaluate the effect of grinding the inner metal surface during the porcelain try-in stage on metal-porcelain bonding, considering the maximum temperature and the vibration of the samples. MATERIALS AND METHODS Ninety-one square prism-shaped (1 × 1 × 1.5 mm) nickel-chrome cast frameworks 0.3 mm thick were prepared. Porcelain was applied on two opposite outer axial surfaces of the frameworks. The grinding was performed from the opposite axial sides of the inner metal surfaces with a low-speed handpiece with two types of burs (diamond, tungsten-carbide) under three grinding forces (3.5 N, 7 N, 14 N) and at two durations (5 seconds, 10 seconds). The shear bond strength (SBS) test was performed with a universal testing machine. Statistical analyses were performed at the 5% significance level. RESULTS The samples subjected to grinding under 3.5 N showed higher SBS values than those exposed to grinding under 7 N and 14 N (P<.05). SBS values of none of the groups differed from those of the control group (P>.05). The type of bur (P=.965) and the duration (P=.679) did not affect the SBS values. On the other hand, type of bur, force applied, and duration of the grinding affected the maximum temperatures of the samples, whereas the maximum vibration was affected only by the type of bur (P<.05). CONCLUSION Grinding the inner metal surface did not affect the metal-porcelain bond strength. Although the grinding affected the maximum temperature and the vibration values of the samples, these did not influence the bonding strength. PMID:25177476
A radiographic study estimating age of mandibular third molars by periodontal ligament visibility.
Chaudhary, M A; Liversidge, H M
2017-12-01
Visibility of the periodontal ligament of mandibular third molars (M3) has been suggested as a method to estimate age. We aimed to assess the accuracy of this method and to compare the visibility of the periodontal ligament in the left M3 with that in the right M3. The sample comprised archived panoramic dental radiographs of 163 individuals (75 males, 88 females, age 16-53 years) with mature M3s. Reliability was assessed using Kappa. Accuracy was assessed by subtracting chronological age from estimated age for males and females. Stages were cross-tabulated against age categories of younger than and at least 18 and 21 years of age. Stages were compared between the left M3 and right M3. Analysis showed excellent intra-observer reliability. The mean difference between estimated and chronological ages was 7.21 years (SD 5.16) for the left M3 and 7.69 (SD 6.08) for the right M3 in males, and 6.87 (SD 5.83) for the left M3 and 8.61 (SD 6.58) for the right M3 in females. Minimum ages of stages 0 to 2 were younger than previously reported, despite a small sample of individuals younger than 18. The left and right M3 stage differed in 46% of the 85 individuals with readings from both sides, and estimated age differed by -10.5 to 12.2 years between left and right. Accuracy of this method was between 6 and 8 years, with an error of 5 to 6 years. The number of individuals with mature M3 apices younger than 18 years was small. The stage of visibility of the periodontal ligament differed between left and right in almost half of our sample with both teeth present. Our findings question the use of this method to estimate age or to discriminate between individuals younger than and at least 18 years.
Neocleous, A C; Syngelaki, A; Nicolaides, K H; Schizas, C N
2018-04-01
To estimate the risk of fetal trisomy 21 (T21) and other chromosomal abnormalities (OCA) at 11-13 weeks' gestation using computational intelligence classification methods. As a first step, a training dataset consisting of 72 054 euploid pregnancies, 295 cases of T21 and 305 cases of OCA was used to train an artificial neural network. Then, a two-stage approach was used for stratification of risk and diagnosis of cases of aneuploidy in the blind set. In Stage 1, using four markers, pregnancies in the blind set were classified into no risk and risk. No-risk pregnancies were not examined further, whereas the risk pregnancies were forwarded to Stage 2 for further examination. In Stage 2, using seven markers, pregnancies were classified into three types of risk, namely no risk, moderate risk and high risk. Of the 36 328 pregnancies unknown to the system (blind set), 17 512 euploid, two T21 and 18 OCA cases were classified as no risk in Stage 1. The remaining 18 796 cases were forwarded to Stage 2, of which 7895 euploid, two T21 and two OCA cases were classified as no risk, 10 464 euploid, 83 T21 and 61 OCA as moderate risk and 187 euploid, 50 T21 and 52 OCA as high risk. The sensitivity and the specificity for T21 in Stage 2 were 97.1% and 99.5%, respectively, and the false-positive rate from Stage 1 to Stage 2 was reduced from 51.4% to ∼1%, assuming that the cell-free DNA test could identify all euploid and aneuploid cases. We propose a method for early diagnosis of chromosomal abnormalities that ensures that most T21 cases are classified as high risk at any stage. At the same time, the number of euploid cases subjected to invasive or cell-free DNA examinations was minimized through a routine procedure offered in two stages. Our method is minimally invasive and of relatively low cost, highly effective at T21 identification, and performs better than other existing statistical methods. Copyright © 2017 ISUOG. Published by John Wiley & Sons Ltd.
Murty, PhD, Komanduri S.; Husain, PhD, Muhammad Jami; Bashir, MSC, Rizwan; Blutcher-Nelson, BSc, Glenda; Benjakul, PhD, Sarunya; Kengganpanich, PhD, Mondha; Erguder, MD, PhD, Toker; Keskinkilic, MD, Bekir; Polat, MD, Sertac; Sinha, MD, PhD, Dhirendra N.; Palipudi, PhD, Krishna; Ahluwalia, PhD, Indu B.
2017-01-01
Objective The World Health Organization recommends that smokers be offered help to quit. A better understanding of smokers’ interest in and commitment to quitting could guide tobacco control efforts. We assessed temporal differences in stages of change toward quitting among smokers in Thailand and Turkey. Methods Two waves (independent samples) of data from the Global Adult Tobacco Survey, a national household survey of adults aged 15 years or older, were assessed for Thailand (2009 and 2011) and Turkey (2008 and 2012). Current smokers were categorized into 3 stages of change based on their cessation status: precontemplation, contemplation, and preparation. Relative change in the proportion of smokers in each stage between waves 1 and 2 was computed for each country. Results Between waves, overall current tobacco smoking did not change in Thailand (23.7% to 24.0%) but declined in Turkey (31.2% to 27.1%; P < .001). Between 2009 and 2011, precontemplation increased among smokers in Thailand (76.1% to 85.4%; P < .001), whereas contemplation (17.6% to 12.0%; P < .001) and preparation (6.3% to 2.6%; P < .001) declined. Between 2008 and 2012, there were declines in precontemplation among smokers in Turkey (72.2% to 64.6%; P < .001), whereas there were increases in contemplation (21.2% to 26.9%; P = .008) and no significant change in preparation (6.5% to 8.5%; P = .097). Conclusion Nearly two-thirds of smokers in Turkey and more than two-thirds in Thailand were in the precontemplation stage during the last survey wave assessed. The proportion of smokers in the preparation stage increased in Turkey but declined in Thailand. Identifying stages of cessation helps guide population-based targeted interventions to support smokers at varying stages of change toward quitting. PMID:28570209
Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.
Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham
2017-12-01
During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for the process performance qualification (PPQ) and continued process verification (CPV) stages by linking UDU to potential formulation and process risk factors. The Bayes success run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate, the confidence to detect out-of-specification units decreases, which must be compensated by an increase in sample size to enhance confidence in the estimation. Based on the level of knowledge acquired during PPQ and the level of knowledge further required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
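The 299/59/29 sample sizes quoted above are reproduced by the success-run relation n ≥ ln(1 − C)/ln(R) at 95% confidence. A small sketch, assuming the classical success-run form rather than the paper's full Bayesian treatment (the function name is ours):

```python
import math

def success_run_sample_size(reliability: float,
                            confidence: float = 0.95) -> int:
    """Smallest number n of consecutive passing units such that
    1 - reliability**n >= confidence, i.e. n >= ln(1 - C) / ln(R)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(reliability))
```

Evaluating at reliability levels 0.99, 0.95 and 0.90 yields 299, 59 and 29, matching the high-, medium- and low-risk sample sizes in the abstract.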
Willis, Susan F; Barton, Desmond; Ind, Thomas EJ
2006-01-01
Background The purpose of the study was to determine the outcome of all patients with endometrial adenocarcinoma treated by laparoscopic hysterectomy at our institution, many of whom were at high risk for surgery. Methods Data were collected by a retrospective search of the case notes and Electronic Patient Records of the thirty-eight patients who underwent laparoscopic hysterectomy for endometrial cancer at our institutions. Results The median body mass index was 30 (range 19–67). Comorbidities were present in 76% (29 patients); 40% (15 patients) had a single comorbid condition, whilst 18% (7 patients) had two, and a further 18% (7 patients) had more than two. Lymphadenectomy was performed in 45% (17 patients), and lymph node sampling in 21% (8 patients). Median operating time was 210 minutes (range 70–360 minutes). Median estimated blood loss was 200 ml (range 50–1000 ml). There were no intraoperative complications. Post-operative complications were seen in 21% (2 major, 6 minor). Blood transfusion was required in 5% (2 patients). The median stay was 4 post-operative nights (range 1–25 nights). In those patients undergoing lymphadenectomy, the mean number of nodes taken was fifteen (range 8–26 nodes). The pathological staging was FIGO stage I 76% (29 patients), stage II 8% (3 patients), stage III 16% (6 patients). The pathological grade was G1 31% (16 patients), G2 45% (17 patients), G3 24% (8 patients). Conclusion Laparoscopic hysterectomy can be safely carried out in patients at high risk for surgery, with no compromise in terms of outcomes, whilst providing all the benefits inherent in minimal access surgery. PMID:16968556
Le-Minh, Nhat; Stuetz, Richard M; Khan, Stuart J
2012-01-30
A highly sensitive method for the analysis of six sulfonamide antibiotics (sulfadiazine, sulfathiazole, sulfapyridine, sulfamerazine, sulfamethazine and sulfamethoxazole), two sulfonamide metabolites (N(4)-acetyl sulfamethazine and N(4)-acetyl sulfamethoxazole) and the commonly co-applied antibiotic trimethoprim was developed for the analysis of complex wastewater samples. The method involves solid phase extraction of filtered wastewater samples followed by liquid chromatography-tandem mass spectral detection. Method detection limits were shown to be matrix-dependent but ranged between 0.2 and 0.4 ng/mL for ultrapure water, 0.4 and 0.7 ng/mL for tap water, 1.4 and 5.9 ng/mL for a laboratory-scale membrane bioreactor (MBR) mixed liquor, 0.7 and 1.7 ng/mL for biologically treated effluent and 0.5 and 1.5 ng/g dry weight for MBR activated sludge. An investigation of analytical matrix effects was undertaken, demonstrating the significant and largely unpredictable nature of signal suppression observed for variably complex matrices compared to an ultrapure water matrix. The results demonstrate the importance of accounting for such matrix effects for accurate quantitation, as done in the presented method by isotope dilution. Comprehensive validation of calibration linearity, reproducibility, extraction recovery, limits of detection and quantification are also presented. Finally, wastewater samples from a variety of treatment stages in a full-scale wastewater treatment plant were analysed to illustrate the effectiveness of the method. Copyright © 2011 Elsevier B.V. All rights reserved.
Health condition identification of multi-stage planetary gearboxes using a mRVM-based method
NASA Astrophysics Data System (ADS)
Lei, Yaguo; Liu, Zongyao; Wu, Xionghui; Li, Naipeng; Chen, Wu; Lin, Jing
2015-08-01
Multi-stage planetary gearboxes are widely applied in the aerospace, automotive and heavy industries. Their key components, such as gears and bearings, can easily suffer damage due to the tough working environment. Health condition identification of planetary gearboxes aims to prevent accidents and reduce costs. This paper proposes a method based on the multiclass relevance vector machine (mRVM) to identify the health condition of multi-stage planetary gearboxes. In this method, an mRVM algorithm is adopted as a classifier, and two features, i.e. accumulative amplitudes of carrier orders (AACO) and energy ratio based on difference spectra (ERDS), are used as the input of the classifier to classify different health conditions of multi-stage planetary gearboxes. To test the proposed method, seven health conditions of a two-stage planetary gearbox are considered and vibration data are acquired from the planetary gearbox under different motor speeds and loading conditions. The results of three tests based on different data show that the proposed method obtains improved identification performance and robustness compared with the existing method.
Rietveld, Cornelius A.; Esko, Tõnu; Davies, Gail; Pers, Tune H.; Turley, Patrick; Benyamin, Beben; Chabris, Christopher F.; Emilsson, Valur; Johnson, Andrew D.; Lee, James J.; de Leeuw, Christiaan; Marioni, Riccardo E.; Medland, Sarah E.; Miller, Michael B.; Rostapshova, Olga; van der Lee, Sven J.; Vinkhuyzen, Anna A. E.; Amin, Najaf; Conley, Dalton; Derringer, Jaime; van Duijn, Cornelia M.; Fehrmann, Rudolf; Franke, Lude; Glaeser, Edward L.; Hansell, Narelle K.; Hayward, Caroline; Iacono, William G.; Ibrahim-Verbaas, Carla; Jaddoe, Vincent; Karjalainen, Juha; Laibson, David; Lichtenstein, Paul; Liewald, David C.; Magnusson, Patrik K. E.; Martin, Nicholas G.; McGue, Matt; McMahon, George; Pedersen, Nancy L.; Pinker, Steven; Porteous, David J.; Posthuma, Danielle; Rivadeneira, Fernando; Smith, Blair H.; Starr, John M.; Tiemeier, Henning; Timpson, Nicholas J.; Trzaskowski, Maciej; Uitterlinden, André G.; Verhulst, Frank C.; Ward, Mary E.; Wright, Margaret J.; Davey Smith, George; Deary, Ian J.; Johannesson, Magnus; Plomin, Robert; Visscher, Peter M.; Benjamin, Daniel J.; Koellinger, Philipp D.
2014-01-01
We identify common genetic variants associated with cognitive performance using a two-stage approach, which we call the proxy-phenotype method. First, we conduct a genome-wide association study of educational attainment in a large sample (n = 106,736), which produces a set of 69 education-associated SNPs. Second, using independent samples (n = 24,189), we measure the association of these education-associated SNPs with cognitive performance. Three SNPs (rs1487441, rs7923609, and rs2721173) are significantly associated with cognitive performance after correction for multiple hypothesis testing. In an independent sample of older Americans (n = 8,652), we also show that a polygenic score derived from the education-associated SNPs is associated with memory and absence of dementia. Convergent evidence from a set of bioinformatics analyses implicates four specific genes (KNCMA1, NRXN1, POU2F3, and SCRT). All of these genes are associated with a particular neurotransmitter pathway involved in synaptic plasticity, the main cellular mechanism for learning and memory. PMID:25201988
Perceived Discrimination and Ethnic Identity Among Breast Cancer Survivors
Campesino, Maureen; Saenz, Delia S.; Choi, Myunghan; Krouse, Robert S.
2012-01-01
Purpose/Objectives To examine ethnic identity and sociodemographic factors in minority patients' perceptions of healthcare discrimination in breast cancer care. Design Mixed methods. Setting Participants' homes in the metropolitan areas of Phoenix and Tucson, AZ. Sample 39 women treated for breast cancer in the past six years: 15 monolingual Spanish-speaking Latinas, 15 English-speaking Latinas, and 9 African Americans. Methods Two questionnaires were administered. Individual interviews with participants were conducted by nurse researchers. Quantitative, qualitative, and matrix analytic methods were used. Main Research Variables Ethnic identity and perceptions of discrimination. Findings Eighteen women (46%) believed race and spoken language affected the quality of health care. Perceived disrespect from providers was attributed to participant's skin color, income level, citizenship status, and ability to speak English. Discrimination was more likely to be described in a primary care context, rather than cancer care. Ethnic identity and early-stage breast cancer diagnosis were the only study variables significantly associated with perceived healthcare discrimination. Conclusions This article describes the first investigation examining ethnic identity and perceived discrimination in cancer care delivery. Replication of this study with larger samples is needed to better understand the role of ethnic identity and cancer stage in perceptions of cancer care delivery. Implications for Nursing Identification of ethnic-specific factors that influence patient's perspectives and healthcare needs will facilitate development of more effective strategies for the delivery of cross-cultural patient-centered cancer care. PMID:22374505
Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure
Salehi, M.; Smith, D.R.
2005-01-01
Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of the population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.
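For a sequential sample of n = 2 units drawn without replacement with initial selection probabilities p_i, Murthy's estimator of the population total has a simple closed form, and its unbiasedness can be checked by enumerating every ordered draw. This is a textbook special case, not the authors' full two-stage derivation; the toy population below is invented for illustration.

```python
def murthy_estimate(y1, p1, y2, p2):
    """Murthy's unbiased estimator of the population total for two
    units drawn sequentially without replacement with initial
    selection probabilities p1 and p2."""
    return ((1 - p2) * y1 / p1 + (1 - p1) * y2 / p2) / (2 - p1 - p2)

def expected_estimate(y, p):
    """Expectation of the estimator over all ordered draws (i, j),
    where P(draw i first, then j) = p[i] * p[j] / (1 - p[i])."""
    total = 0.0
    for i in range(len(y)):
        for j in range(len(y)):
            if i != j:
                prob = p[i] * p[j] / (1 - p[i])
                total += prob * murthy_estimate(y[i], p[i], y[j], p[j])
    return total
```

For y = [3, 1, 4, 1, 5] with p = [0.10, 0.20, 0.30, 0.25, 0.15], the expectation equals the true total 14, illustrating the unbiasedness the abstract relies on.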
Selection of the initial design for the two-stage continual reassessment method.
Jia, Xiaoyu; Ivanova, Anastasia; Lee, Shing M
2017-01-01
In the two-stage continual reassessment method (CRM), model-based dose escalation is preceded by a pre-specified escalating sequence starting from the lowest dose level. This is appealing to clinicians because it allows a sufficient number of patients to be assigned to each of the lower dose levels before escalating to higher dose levels. While a theoretical framework to build the two-stage CRM has been proposed, the selection of the initial dose-escalating sequence, generally referred to as the initial design, remains arbitrary, either specified as cohorts of three patients or chosen by trial and error through extensive simulations. Motivated by a currently ongoing oncology dose-finding study for which clinicians explicitly stated their desire to assign at least one patient to each of the lower dose levels, we propose a systematic approach for selecting the initial design for the two-stage CRM. The initial design obtained using the proposed algorithm yields better operating characteristics than a cohort-of-three initial design with a calibrated CRM. The proposed algorithm simplifies the selection of the initial design for the two-stage CRM. Moreover, initial designs to be used as a reference for planning a two-stage CRM are provided.
Development of a Multiple-Stage Differential Mobility Analyzer (MDMA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Da-Ren; Cheng, Mengdawn
2007-01-01
A new DMA column has been designed with the capability of simultaneously extracting monodisperse particles of different sizes in multiple stages. We call this design a multistage DMA, or MDMA. A prototype MDMA has been constructed and experimentally evaluated in this study. The new column enables the fast measurement of particles over a wide size range, while preserving the powerful particle classification function of a DMA. The prototype MDMA has three sampling stages, capable of classifying monodisperse particles of three different sizes simultaneously. The scanning voltage operation of a DMA can be applied to this new column. Each stage of the MDMA column covers a fraction of the entire particle size range to be measured. The size fractions covered by two adjacent stages of the MDMA are designed to overlap somewhat. This arrangement reduces the scanning voltage range and thus the cycling time of the measurement. The modular sampling stage design of the MDMA allows flexible configuration of the desired particle classification lengths and a variable number of stages in the MDMA. The design of our MDMA also permits operation at high sheath flow, enabling high-resolution particle size measurement and/or reduction of the lower sizing limit. Using the tandem DMA technique, the performance of the MDMA, i.e., sizing accuracy, resolution, and transmission efficiency, was evaluated at different ratios of aerosol and sheath flowrates. Two aerosol sampling schemes were investigated: one extracted aerosol flows at an evenly partitioned flowrate at each stage, and the other extracted aerosol at a rate equal to the polydisperse aerosol flowrate at each stage. We detail the prototype design of the MDMA and the evaluation results on the transfer functions of the MDMA at different particle sizes and operational conditions.
Observer-Pattern Modeling and Slow-Scale Bifurcation Analysis of Two-Stage Boost Inverters
NASA Astrophysics Data System (ADS)
Zhang, Hao; Wan, Xiaojin; Li, Weijie; Ding, Honghui; Yi, Chuanzhi
2017-06-01
This paper deals with modeling and bifurcation analysis of two-stage Boost inverters. Since the effect of the nonlinear interactions between source-stage converter and load-stage inverter causes the “hidden” second-harmonic current at the input of the downstream H-bridge inverter, an observer-pattern modeling method is proposed by removing time variance originating from both fundamental frequency and hidden second harmonics in the derived averaged equations. Based on the proposed observer-pattern model, the underlying mechanism of slow-scale instability behavior is uncovered with the help of eigenvalue analysis method. Then eigenvalue sensitivity analysis is used to select some key system parameters of two-stage Boost inverter, and some behavior boundaries are given to provide some design-oriented information for optimizing the circuit. Finally, these theoretical results are verified by numerical simulations and circuit experiment.
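The eigenvalue analysis referenced above reduces, for a 2×2 averaged-model Jacobian, to checking that both eigenvalues have negative real parts. A hedged sketch of that test; the Jacobian entries used in the assertions are invented for illustration, not taken from the Boost inverter model:

```python
import cmath

def is_locally_stable(jacobian):
    """Small-signal stability test for a 2x2 Jacobian: the equilibrium
    is locally stable when every eigenvalue has a negative real part.
    Eigenvalues come from the trace/determinant quadratic formula."""
    (a, b), (c, d) = jacobian
    tr = a + d
    det = a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)   # complex sqrt handles oscillatory pairs
    eigenvalues = [(tr + disc) / 2, (tr - disc) / 2]
    return all(e.real < 0 for e in eigenvalues)
```

Sweeping such a test over a circuit parameter is one way to trace the slow-scale behavior boundaries the paper reports.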
Autofluorescence spectroscopy for in-vivo diagnosis of human oral carcinogenesis
NASA Astrophysics Data System (ADS)
Wang, Chih-Yu; Tsai, Tsuimin; Chen, Hsin-Ming; Kuo, Ying-Shiung; Chen, Chin-Tin; Chiang, Chung-Ping
2002-09-01
An in vivo study of human oral cancer diagnosis using autofluorescence spectroscopy is presented. A xenon lamp with a motor-controlled monochromator was adopted as the excitation light source. We chose an excitation wavelength of 330 nm, and the spectral measurement range was from 340 nm to 601 nm. A Y-type fiber bundle was used to guide the excitation light and collect the autofluorescence of the samples. The emitted light was detected by a motor-controlled monochromator and a PMT. After measurement, the measured sites were sectioned and sent for histological examination. In total, 15 normal sites, 30 OSF (oral submucosa fibrosis) sites, 26 EH (epithelial hyperkeratosis) sites, 13 ED (epithelial dysplasia) sites, and 13 SCC (squamous cell carcinoma) sites were measured. The discriminant algorithm was established by the partial least squares (PLS) method with a cross-validation technique. By extracting the first two t-scores of each sample and making a scatter plot, we found that samples at different cancerous stages were grouped in distinct locations, except that the ED and EH samples were mixed together. This means the algorithm can be used to classify normal, premalignant, and malignant tissues. We conclude that autofluorescence spectroscopy may be useful for in vivo detection of early stage oral cancer.
Purcell, Maureen K.; Powers, Rachel L.; Besijn, Bonnie; Hershberger, Paul K.
2017-01-01
We report the development and validation of two quantitative PCR (qPCR) assays to detect Nanophyetus salmincola DNA in water samples and in fish and snail tissues. Analytical and diagnostic validation demonstrated good sensitivity, specificity, and repeatability of both qPCR assays. The N. salmincola DNA copy number in kidney tissue was significantly correlated with metacercaria counts based on microscopy. Extraction methods were optimized for the sensitive qPCR detection of N. salmincola DNA in settled water samples. Artificially spiked samples suggested that the 1-cercaria/L threshold corresponded to an estimated log10 copies per liter ≥ 6.0. Significant correlation of DNA copy number per liter and microscopic counts indicated that the estimated qPCR copy number was a good predictor of the number of waterborne cercariae. However, the detection of real-world samples below the estimated 1-cercaria/L threshold suggests that the assays may also detect other N. salmincola life stages, nonintact cercariae, or free DNA that settles with the debris. In summary, the qPCR assays reported here are suitable for identifying and quantifying all life stages of N. salmincola that occur in fish tissues, snail tissues, and water.
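Copy-number estimates such as "log10 copies per liter" conventionally come from a qPCR standard curve. The sketch below uses invented Cq values, not the study's data, to show the usual fit-and-invert calculation:

```python
# Standard-curve arithmetic behind qPCR copy-number estimates (invented
# numbers): fit Cq against log10(copies) for a 10-fold dilution series,
# then invert the fitted line to quantify an unknown sample.
import numpy as np

log10_copies = np.array([7.0, 6.0, 5.0, 4.0, 3.0])   # dilution series
cq = np.array([14.2, 17.6, 21.0, 24.4, 27.8])        # observed Cq values

slope, intercept = np.polyfit(log10_copies, cq, 1)   # Cq = slope*log10 + b
efficiency = 10 ** (-1 / slope) - 1                  # ~1.0 means 100% efficient

unknown_cq = 19.3
est_log10_copies = (unknown_cq - intercept) / slope  # invert the curve
```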
Luck, Margaux; Bertho, Gildas; Bateson, Mathilde; Karras, Alexandre; Yartseva, Anastasia; Thervet, Eric
2016-01-01
1H nuclear magnetic resonance (NMR)-based metabolic profiling is very promising for the diagnosis of the stages of chronic kidney disease (CKD). Because of the high dimension of NMR spectral datasets and the complex mixture of metabolites in biological samples, the identification of discriminant biomarkers of a disease is challenging. None of the chemometric methods widely used in NMR metabolomics performs a local exhaustive exploration of the data. We developed a descriptive and easily understandable approach that searches for discriminant local phenomena using an original exhaustive rule-mining algorithm in order to predict two groups of patients: 1) patients with low to mild CKD stages and no renal failure, and 2) patients with moderate to established CKD stages and renal failure. Our predictive algorithm explores the m-dimensional variable space to capture local overdensities of the two groups of patients in the form of easily interpretable rules. Afterwards, an L2-penalized logistic regression on the discriminant rules was used to build predictive models of the CKD stages. We explored a complex multi-source dataset that included the clinical, demographic, clinical chemistry, renal pathology, and urine metabolomic data of a cohort of 110 patients. Given this multi-source dataset and the complex nature of metabolomic data, we analyzed 1- and 2-dimensional rules in order to integrate the information carried by interactions between the variables. The results indicated that our local algorithm is a valuable analytical method for the precise characterization of multivariate CKD stage profiles, and that it is as efficient as the classical global model using chi-squared variable selection, with approximately 70% correct classification.
The resulting predictive models predominantly identify urinary metabolites (such as 3-hydroxyisovalerate, carnitine, citrate, dimethylsulfone, creatinine and N-methylnicotinamide) as relevant variables indicating that CKD significantly affects the urinary metabolome. In addition, the simple knowledge of the concentration of urinary metabolites classifies the CKD stage of the patients correctly. PMID:27861591
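The modelling step, an L2-penalised logistic regression over binary rule indicators, can be sketched as follows. The rule matrix and the labelling rule are synthetic stand-ins; the exhaustive rule-mining algorithm itself is not reproduced here.

```python
# L2-penalised logistic regression over binary "rule" indicators
# (1 if a patient satisfies a mined rule, 0 otherwise). The rule matrix
# and the toy label are synthetic; only the regression step is shown.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_patients, n_rules = 110, 12
R = rng.integers(0, 2, size=(n_patients, n_rules))  # rule indicator matrix
y = R[:, 0] | R[:, 1]                               # toy CKD-group label

clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(R, y)
acc = clf.score(R, y)                               # training accuracy
```

The fitted coefficients indicate which rules contribute to separating the two patient groups.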
Lin, Dongyun; Sun, Lei; Toh, Kar-Ann; Zhang, Jing Bo; Lin, Zhiping
2018-05-01
Automated biomedical image classification could confront the challenges of high level noise, image blur, illumination variation and complicated geometric correspondence among various categorical biomedical patterns in practice. To handle these challenges, we propose a cascade method consisting of two stages for biomedical image classification. At stage 1, we propose a confidence score based classification rule with a reject option for a preliminary decision using the support vector machine (SVM). The testing images going through stage 1 are separated into two groups based on their confidence scores. Those testing images with sufficiently high confidence scores are classified at stage 1 while the others with low confidence scores are rejected and fed to stage 2. At stage 2, the rejected images from stage 1 are first processed by a subspace analysis technique called eigenfeature regularization and extraction (ERE), and then classified by another SVM trained in the transformed subspace learned by ERE. At both stages, images are represented based on two types of local features, i.e., SIFT and SURF, respectively. They are encoded using various bag-of-words (BoW) models to handle biomedical patterns with and without geometric correspondence, respectively. Extensive experiments are implemented to evaluate the proposed method on three benchmark real-world biomedical image datasets. The proposed method significantly outperforms several competing state-of-the-art methods in terms of classification accuracy. Copyright © 2018 Elsevier Ltd. All rights reserved.
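A minimal sketch of the confidence-score cascade follows: a stage-1 SVM classifies high-confidence samples, and rejected samples go to a stage-2 SVM trained in a reduced subspace. PCA stands in for the paper's ERE step, and the dataset, the 0.8 threshold, and all parameters are assumptions, not the authors' settings.

```python
# Two-stage cascade with a reject option: stage-1 SVM keeps predictions
# whose confidence clears a threshold; the rest are re-classified by a
# stage-2 SVM trained in a PCA subspace (stand-in for ERE).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stage1 = SVC(probability=True, random_state=0).fit(X_tr, y_tr)
conf = stage1.predict_proba(X_te).max(axis=1)
accept = conf >= 0.8                      # assumed confidence threshold

pca = PCA(n_components=30).fit(X_tr)      # stand-in for the ERE subspace
stage2 = SVC().fit(pca.transform(X_tr), y_tr)

pred = np.empty_like(y_te)
pred[accept] = stage1.predict(X_te[accept])
if (~accept).any():                       # route rejects to stage 2
    pred[~accept] = stage2.predict(pca.transform(X_te[~accept]))
acc = (pred == y_te).mean()
```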
Wang, Meishui; Wang, Biao; Zheng, Houbing; Wu, Shanying; Shan, Xiuying; Zhuang, Fulian
2011-12-01
To investigate the method and effectiveness of two-stage auricular reconstruction in treating lobule-type microtia. Between March 2007 and April 2010, 19 patients (19 ears) with lobule-type microtia were treated. There were 13 males and 6 females, aged 5 to 27 years (mean, 12.6 years). Of the 19 patients, 11 were 14 years old or younger. The locations were the left ear in 9 cases and the right ear in 10 cases. The two-stage operation for auricular reconstruction of lobule-type microtia included fabrication and grafting of the costal cartilage framework at the first stage and ear elevation at the second stage. Pseudomonas aeruginosa infection occurred in 1 patient after the first-stage operation, who did not undergo the second-stage operation. Skin necrosis occurred in 1 patient 8 days after the second-stage operation and healed after symptomatic treatment. Eighteen patients were followed up for 6 months to 2 years (mean, 14 months). Retraction of the cranioauricular angle and thoracic deformity occurred in 1 patient. The surgical results were satisfactory in the other 17 patients, whose reconstructed ears had a natural-looking shape and a suitable cranioauricular angle. Two-stage auricular reconstruction is considered an ideal method for lobule-type microtia.
Pojić, Milica; Rakić, Dušan; Lazić, Zivorad
2015-01-01
A chemometric approach was applied to optimize the robustness of the NIRS method for wheat quality control. Due to the high number of experimental variables (n=6) and response variables (n=7) to be studied, the optimization experiment was divided into two stages: a screening stage to evaluate which of the considered variables were significant, and an optimization stage to optimize the identified factors within the previously selected experimental domain. The significant variables were identified using a fractional factorial experimental design, whilst a Box-Wilson rotatable central composite design (CCRD) was run to obtain the optimal values for the significant variables. The measured responses included moisture, protein and wet gluten content, Zeleny sedimentation value, and deformation energy. In order to achieve minimal variation in the responses, the optimal factor settings were found by minimizing the propagation of error (POE). The simultaneous optimization of factors was conducted with a desirability function. The highest desirability, 87.63%, was accomplished by setting the experimental conditions as follows: 19.9°C sample temperature, 19.3°C ambient temperature, and 240 V instrument voltage. Copyright © 2014 Elsevier B.V. All rights reserved.
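Simultaneous optimisation via a desirability function can be illustrated with a toy one-factor example. The response surfaces and desirability limits below are invented for illustration; the study optimised several factors with a CCRD.

```python
# Derringer-Suich-style desirability: map each response onto [0, 1] and
# maximise the geometric mean over candidate factor settings. The toy
# response models and limits are hypothetical, not the paper's.
import numpy as np

def desirability(y, low, high):
    """Larger-is-better desirability, linear between low and high."""
    return np.clip((y - low) / (high - low), 0.0, 1.0)

temps = np.linspace(15.0, 25.0, 101)      # candidate sample temperatures (degC)
resp1 = 100 - (temps - 20.0) ** 2         # two hypothetical response models
resp2 = 95 - 0.5 * (temps - 19.5) ** 2

d = np.sqrt(desirability(resp1, 90, 100) * desirability(resp2, 90, 95))
best_temp = temps[np.argmax(d)]           # setting with highest desirability
```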
Akbarzadeh, Marzieh; Nematollahi, Azar; Farahmand, Mahnaz; Amooee, Sedigheh
2018-01-01
Introduction: The aim of this study was to assess the effect of a two-stage warm compress technique on the duration of the first and second labor stages and on neonatal outcomes. Methods: The clinical trial was conducted on 150 women (75 subjects in each group) in Shiraz-affiliated hospitals in 2012. A two-stage warm compress was applied for 15-20 minutes in the first and second labor phases (cervical dilatation of 7 and 10 cm with zero station) while the control group received routine hospital care. The duration of labor and Apgar scores were evaluated. Results: According to the t-test, the average labor duration at the second stage was lower in the intervention group than in the control group. However, there was no significant difference in labor duration at the first stage or in the first- and fifth-minute Apgar scores. Conclusion: According to these results, this intervention seems to be a good method for decreasing labor duration in the second stage of parturition. PMID:29637053
Assessment of Nonverbal and Verbal Apraxia in Patients with Parkinson's Disease
Olchik, Maira Rozenfeld; Shumacher Shuh, Artur Francisco; Rieder, Carlos R. M.
2015-01-01
Objective. To assess the presence of nonverbal and verbal apraxia in patients with Parkinson's disease (PD) and analyze the correlation between these conditions and patient age, education, duration of disease, and PD stage, as well as evaluate the correlation between the two types of apraxia and the frequency and types of verbal apraxic errors made by patients in the sample. Method. This was an observational prevalence study. The sample comprised 45 patients with PD seen at the Movement Disorders Clinic of the Clinical Hospital of Porto Alegre, Brazil. Patients were evaluated using the Speech Apraxia Assessment Protocol, and PD stages were classified according to the Hoehn and Yahr scale. Results. The rate of both nonverbal and verbal apraxia in the present sample was 24.4%. Verbal apraxia was significantly correlated with education (p ≤ 0.05). The most frequent type of verbal apraxic error was omission (70.8%). The analysis of manner and place of articulation showed that most errors occurred during the production of trill (57.7%) and dentoalveolar (92%) phonemes, respectively. Conclusion. Patients with PD presented nonverbal and verbal apraxia and made several verbal apraxic errors. Verbal apraxia was correlated with education levels. PMID:26543663
NASA Astrophysics Data System (ADS)
Munahefi, D. N.; Waluya, S. B.; Rochmad
2018-03-01
The purpose of this research was to identify the effectiveness of a Problem Based Learning (PBL) model based on Self-Regulated Learning (SRL) on mathematical creative thinking ability and to analyze the mathematical creative thinking of high school students in solving mathematical problems. The population of this study was grade X students of SMA N 3 Klaten. The research method was sequential explanatory. The quantitative stage used a simple random sampling technique: two classes were selected at random, with the experimental class taught using the SRL-based PBL model and the control class taught using an expository model. Sample selection in the qualitative stage used a non-probability sampling technique in which 3 students were selected from each of the high, medium, and low academic levels. The SRL-based PBL model was effective for students' mathematical creative thinking ability. Low-academic-level students taught with the SRL-based PBL model achieved the fluency and flexibility aspects of creative thinking. Medium-academic-level students achieved the fluency and flexibility aspects well, but their originality was not yet well developed. High-academic-level students were able to reach the originality aspect.
NASA Astrophysics Data System (ADS)
Sasco, Romain; Guillou, Hervé; Nomade, Sébastien; Scao, Vincent; Maury, René C.; Kissel, Catherine; Wandres, Camille
2017-07-01
Fifteen basanitic and tephritic flows from Bas-Vivarais, which together with the Chaîne des Puys is the youngest volcanic field in the French Massif Central, were dated by 40Ar/39Ar and 40K-40Ar on separated groundmass and studied for paleomagnetism. An almost systematic discrepancy between the two types of ages is observed, the 40K-40Ar method providing ages up to 8.5 times the 40Ar/39Ar ones. Microscopic observations and geochemical analyses lead us to conclude that most of the K-Ar ages measured on Bas-Vivarais samples are in error due to extraneous argon originating from contamination by xenocrysts from disintegrated crustal and mantle xenoliths. However, the 40Ar/39Ar experiments show no evidence of excess argon, suggesting two possibilities: (1) the extraneous argon contribution was eliminated during the pre-degassing of the samples at 600 °C prior to the step-heating experiments; (2) the K-Ar ages may be older because larger quantities of xenocrysts, potential carriers of extraneous argon, were involved in the K-Ar experiments than in the 40Ar/39Ar ones. The 40Ar/39Ar ages are thus little or not affected by contamination and provide reliable ages for the studied volcanoes. Combined 40Ar/39Ar dating and magnetic directions for each flow point to three successive stages in the volcanic evolution of Bas-Vivarais. Stage 1, limited to the northern part of the field, has a mean age of 187.3 ± 19.0 ka. In its southern part, Stages 2 and 3 emplaced magmas at 31.1 ± 3.9 ka and 23.9 ± 8.1 ka, respectively. These two last stages are consistent with available 14C dates but not with previous thermoluminescence data.
2012-01-01
Background Chronic allograft nephropathy (CAN) occurs in a large share of transplant recipients, and it is the leading cause of graft loss despite the introduction of new and effective immunosuppressants. The reduction in renal function secondary to immunologic and non-immunologic CAN leads to several complications, including anemia and calcium-phosphorus metabolism imbalance, and may be associated with worsening health-related quality of life. We sought to evaluate the relationship between kidney function and EuroQol 5 Dimension index (EQ-5Dindex) scores after kidney transplantation and to evaluate whether cross-cultural differences exist between the UK and US. Methods This study is a secondary analysis of existing data gathered from two cross-sectional studies. We enrolled 233 and 209 subjects aged 18–74 years who received a kidney transplant in the US and UK, respectively. For the present analysis we excluded recipients with multiple or multi-organ transplantation, creatine kinase ≥200 U/L, acute renal failure, or no creatinine assessment in the 3 months before enrollment, leaving 281 subjects overall. The questionnaires were administered independently in the two centers. Both packets included the EQ-5Dindex and socio-demographic items. We augmented the analytical dataset with information abstracted from clinical charts and administrative records, including selected comorbidities and biochemistry test results. We used ordinary least squares and quantile regression adjusted for socio-demographic and clinical characteristics to assess the association between the EQ-5Dindex and severity of chronic kidney disease (CKD). Results CKD severity was negatively associated with the EQ-5Dindex in both samples (UK: ρ = −0.20, p=0.02; US: ρ = −0.21, p=0.02). The mean adjusted disutility associated with CKD stage 5 compared to CKD stage 1–2 was Δ = −0.38 in the UK sample, Δ = −0.11 in the US sample, and Δ = −0.22 in the whole sample.
The adjusted median disutility associated with CKD stage 5 compared to CKD stage 1–2 for the whole sample was 0.18 (p<0.01, quantile regression). The center effect was not statistically significant. Conclusions Impaired renal function is associated with reduced health-related quality of life independent of possible confounders, center effect, and analytic framework. PMID:23173709
Tripodi, Amber D; Szalanski, Allen L; Strange, James P
2018-03-01
Crithidia bombi and Crithidia expoeki (Trypanosomatidae) are common parasites of bumble bees (Bombus spp.). Crithidia bombi was described in the 1980s, and C. expoeki was recently discovered using molecular tools. Both species have cosmopolitan distributions among their bumble bee hosts, but few bumble bee studies have identified infections to species since the original description of C. expoeki in 2010. Morphological identification of the species is difficult due to variability within each stage of their complex life cycles, although they can be easily differentiated through DNA sequencing. However, DNA sequencing can be expensive, particularly with many samples to diagnose. In order to reliably and inexpensively distinguish Crithidia species for a large-scale survey, we developed a multiplex PCR protocol using species-specific primers together with a universal trypanosomatid primer set to detect unexpected relatives. We applied this method to 356 trypanosomatid-positive bumble bees from North America as a first look at the distribution and host range of each parasite in the region. Crithidia bombi was more common (90.2%) than C. expoeki (21.3%), with most C. expoeki-positive samples existing as co-infections with C. bombi (13.8%). This two-step detection method also revealed that 2.2% of samples were positive for trypanosomatids that were neither C. bombi nor C. expoeki. Sequencing revealed that two individuals were positive for C. mellificae, one for Lotmaria passim, and three for two unclassified trypanosomatids. This two-step method is effective in diagnosing the known bumble bee-infecting Crithidia species while allowing for the discovery of unknown potential symbionts. Published by Elsevier Inc.
Antai, Diddy; Adaji, Sunday
2012-11-14
Intimate partner violence (IPV) is a major public health problem with serious consequences for women's physical, mental, sexual and reproductive health. Reproductive health outcomes such as unwanted and terminated pregnancies, fetal loss or child loss during infancy, non-use of family planning methods, and high fertility are increasingly recognized as consequences of IPV. However, little is known about the role of community influences on women's experience of IPV and its effect on terminated pregnancy, given the increased awareness that IPV is a product of social context. This study sought to examine the role of community-level norms and characteristics in the association between IPV and terminated pregnancy in Nigeria. Multilevel logistic regression analyses were performed on nationally representative cross-sectional data including 19,226 women aged 15-49 years in Nigeria. Data were collected by a stratified two-stage sampling technique, with 888 primary sampling units (PSUs) selected in the first sampling stage, and 7,864 households selected through probability sampling in the second sampling stage. Women who had experienced physical IPV, sexual IPV, or any IPV were more likely to have terminated a pregnancy than women who had not experienced these IPV types. IPV types were significantly associated with factors reflecting relationship control, relationship inequalities, and socio-demographic characteristics. Characteristics of the women aggregated at the community level (mean education, justification of wife beating, mean age at first marriage, and contraceptive use) were significantly associated with IPV types and terminated pregnancy. The findings indicate a role of community influence in the association between IPV exposure and terminated pregnancy, and stress the need to screen women seeking abortions for a history of abuse.
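The stratified two-stage selection described above (PSUs first, then households within selected PSUs) can be sketched with synthetic units. The PSU count mirrors the abstract, but the per-stage sample sizes and equal-probability draws here are simplifying assumptions.

```python
# Two-stage cluster sampling with synthetic units: select PSUs at stage 1,
# then sample households within each selected PSU at stage 2. Per-stage
# sample sizes are illustrative assumptions, not the survey's design.
import random

random.seed(42)
n_psus, households_per_psu = 888, 200
frame = {p: [f"psu{p}-hh{h}" for h in range(households_per_psu)]
         for p in range(n_psus)}

selected_psus = random.sample(sorted(frame), k=100)        # stage 1
sample = [hh for p in selected_psus
          for hh in random.sample(frame[p], k=10)]         # stage 2
```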
Mohammed, Hlack; Roberts, Daryl L; Copley, Mark; Hammond, Mark; Nichols, Steven C; Mitchell, Jolyon P
2012-09-01
Current pharmacopeial methods for testing dry powder inhalers (DPIs) require that 4.0 L be drawn through the inhaler to quantify the aerodynamic particle size distribution of "inhaled" particles. This volume comfortably exceeds the internal dead volume of the Andersen eight-stage cascade impactor (ACI) and Next Generation pharmaceutical Impactor (NGI), the designated multistage cascade impactors. Two DPIs, the second (DPI-B) having similar resistance to the first (DPI-A), were used to evaluate ACI and NGI performance at 60 L/min following the methodology described in the European and United States Pharmacopeias. At sampling times ≥2 s (equivalent to volumes ≥2.0 L), both impactors provided consistent measures of the therapeutically important fine particle mass (FPM) from both DPIs, independent of sample duration. At shorter sample times, FPM decreased substantially with the NGI, indicative of incomplete aerosol bolus transfer through a system whose dead space was 2.025 L. However, the ACI provided consistent measures across the range of sampled volumes evaluated, even when this volume was less than 50% of its internal dead space of 1.155 L. Such behavior may be indicative of maldistribution of the flow profile from the relatively narrow exit of the induction port to the uppermost stage of the impactor at start-up. An explanation of the ACI's anomalous behavior from first principles requires resolution of the rapidly changing unsteady flow and pressure conditions at start-up, and is the subject of ongoing research by the European Pharmaceutical Aerosol Group. Meanwhile, these experimental findings are provided to advocate a prudent approach by retaining the current pharmacopeial methodology.
Costa, Renata G; Bah, Homegnon A F; Bandeira, Matheus J; Oliveira, Sérgio S P; Menezes-Filho, José A
2017-09-01
Lead (Pb) and cadmium (Cd) were determined in mangrove root crab (Goniopsis cruentata) tissues (in natura) and in two culinary preparations by graphite furnace atomic absorption spectrometry. Mangrove root crab samples from three sampling sites along the Jaguaripe River, Bahia, Brazil, where lead-glazed ceramics are produced, and from two commercial preparations were collected or purchased in March and April 2016. Cd levels in raw and processed samples were below the method's limit of detection (0.016 mg kg-1), while Pb in the raw tissues was quantifiable only in the gills (0.67 mg kg-1) and the hepatopancreas (0.14 mg kg-1). However, Pb levels increased from 0.05 to 2.84 mg kg-1 in boiled/sorted muscle and in the traditional stew (a 57-fold increase), respectively. Pb levels increased significantly in the processed food due to migration of the Pb used in glazing the ceramic cooking utensils, surpassing the Brazilian and international safety limits.
2012-01-01
Background Prosthetic joint infection is an uncommon but serious complication of hip replacement. There are two main surgical treatment options, with the choice largely based on the preference of the surgeon. Evidence is required regarding the comparative effectiveness of one-stage and two-stage revision in preventing reinfection after prosthetic joint infection. Methods We conducted a systematic review to identify randomised controlled trials, systematic reviews and longitudinal studies in unselected patients with infection treated exclusively by one- or two-stage methods or by any method. The Embase, MEDLINE and Cochrane databases were searched up to March 2011. Reference lists were checked, and citations of key articles were identified using the ISI Web of Science portal. Classification of studies and data extraction were performed independently by two reviewers. The outcome measure studied was reinfection within 2 years. Data were combined to produce pooled random-effects estimates using the Freeman-Tukey arc-sine transformation. Results We identified 62 relevant studies comprising 4,197 patients. Regardless of treatment, the overall rate of reinfection after any treatment was 10.1% (95% CI = 8.2 to 12.0). In 11 studies comprising 1,225 patients with infected hip prostheses who underwent exclusively one-stage revision, the rate of reinfection was 8.6% (95% CI = 4.5 to 13.9). After exclusively two-stage revision in 28 studies comprising 1,188 patients, the rate of reinfection was 10.2% (95% CI = 7.7 to 12.9). Conclusion Evidence on the relative effectiveness of one- and two-stage revision in preventing reinfection of hip prostheses is largely based on interpretation of longitudinal studies. There is no suggestion in the published studies that one- and two-stage methods have different reinfection outcomes. Randomised trials are needed to establish optimum management strategies. PMID:22340795
Bertolde, F Z; Almeida, A-A F; Silva, F A C; Oliveira, T M; Pirovani, C P
2014-07-04
Theobroma cacao is a woody and recalcitrant plant with a very high level of interfering compounds. Standard protocols for protein extraction have been proposed for various types of samples, but the presence of interfering compounds in many samples prevents the isolation of proteins suitable for two-dimensional gel electrophoresis (2-DE). An efficient method to extract root proteins for 2-DE was established to overcome these problems. The main features of this protocol are: i) precipitation with trichloroacetic acid/acetone overnight to prepare the acetone dry powder (ADP), ii) several additional sonication steps in the ADP preparation and extractions with dense sodium dodecyl sulfate and phenol, and iii) two added stages of phenol extraction. Proteins were extracted from roots using this new protocol (Method B) and a protocol described in the literature for T. cacao leaves and meristems (Method A). Using these methods, we obtained protein yields of about 0.7 and 2.5 mg per 1.0 g of lyophilized root, and a total of 60 and 400 spots could be separated, respectively. Through Method B, it was possible to isolate high-quality protein at high yield from T. cacao roots for high-quality 2-DE gels. To demonstrate the quality of the proteins extracted from T. cacao roots using Method B, several protein spots were cut from the 2-DE gels, analyzed by tandem mass spectrometry, and identified. Method B was further tested on Citrus roots, with a protein yield of about 2.7 mg per 1.0 g of lyophilized root and 800 detected spots.
NASA Astrophysics Data System (ADS)
Lukyashin, K. E.; Shitov, V. A.; Medvedev, A. I.; Ishchenko, A. V.; Shevelev, V. S.; Shulgin, B. V.; Basyrova, L. R.
2018-04-01
In this paper, we report the dependence of the luminescent and optical properties of transparent 0.1 at.% Ce:YAG and 1 at.% Ce:YAG ceramics on their synthesis conditions. The ceramics were produced from nanopowders with a diameter of about 10–15 nm obtained by a laser method. The fundamental difference between the two described methods lies in when the main YAG phase is synthesized: directly during vacuum sintering (the first method) or before vacuum sintering (the second method). Transparent samples (Ø10×2 mm) with optical transmittance ranging from 58 to 82% at a wavelength of 600 nm were obtained. The first method proved the most preferable in terms of exact dosage of the dopant, giving the samples the best scintillation characteristics. In fact, cerium atoms can potentially leave the material at any stage, or at a certain stage, of the ceramics synthesis, reducing the total concentration of Ce3+ in the YAG.
Fast image interpolation via random forests.
Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui
2015-10-01
This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method gives high accuracy while requiring little computation. The underlying idea is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to a high-resolution image patch. The FIRF framework consists of two stages: Stage 1 removes most of the ringing and aliasing artifacts in the initial bicubic-interpolated image, while Stage 2 further refines the Stage 1 result. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while taking only 0.3% of its computation time.
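As a greatly simplified stand-in for the FIRF idea (not the authors' implementation), the sketch below learns a mapping from flattened low-resolution patches to a target pixel value with a random forest. Real FIRF instead classifies patches into subspaces and fits a linear regressor per subspace; all data here are synthetic.

```python
# Simplified patch-to-pixel regression with a random forest, standing in
# for the FIRF pipeline. The patches and the toy target (local average)
# are synthetic and only illustrate the learning setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_patches = 500
low_res = rng.random((n_patches, 9))       # 3x3 low-res patches, flattened
target = low_res.mean(axis=1)              # toy high-res target per patch

model = RandomForestRegressor(n_estimators=30, random_state=0)
model.fit(low_res, target)
pred = model.predict(low_res[:5])          # predicted pixel values
```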
Changes in resistant starch from two banana cultivars during postharvest storage.
Wang, Juan; Tang, Xue Juan; Chen, Ping Sheng; Huang, Hui Hua
2014-08-01
Banana resistant starch samples were extracted and isolated from two banana cultivars (Musa AAA group, Cavendish subgroup and Musa ABB group, Pisang Awak subgroup) at seven ripening stages during postharvest storage. The structures of the resistant starch samples were analysed by light microscopy, polarising microscopy, scanning electron microscopy, X-ray diffraction, and infrared spectroscopy. Physicochemical properties (e.g., water-holding capacity, solubility, swelling power, transparency, starch-iodine absorption spectrum, and Brabender microviscoamylograph profile) were determined. The results revealed significant differences in microstructure and physicochemical characteristics among the banana resistant starch samples during different ripening stages. The results of this study provide valuable information for the potential applications of banana resistant starches. Copyright © 2014 Elsevier Ltd. All rights reserved.
Rouger, Vincent; Bordet, Guillaume; Couillault, Carole; Monneret, Serge; Mailfert, Sébastien; Ewbank, Jonathan J; Pujol, Nathalie; Marguet, Didier
2014-05-20
To investigate the early stages of cell-cell interactions occurring between living biological samples, imaging methods with appropriate spatiotemporal resolution are required. Among the techniques currently available, those based on optical trapping are promising. Methods to image trapped objects, however, in general suffer from a lack of three-dimensional resolution, due to technical constraints. Here, we have developed an original setup comprising two independent modules: holographic optical tweezers, which offer a versatile and precise way to move multiple objects simultaneously but independently, and a confocal microscope that provides fast three-dimensional image acquisition. The optical decoupling of these two modules through the same objective gives users the possibility to easily investigate very early steps in biological interactions. We illustrate the potential of this setup with an analysis of infection by the fungus Drechmeria coniospora of different developmental stages of Caenorhabditis elegans. This has allowed us to identify specific areas on the nematode's surface where fungal spores adhere preferentially. We also quantified this adhesion process for different mutant nematode strains, and thereby derive insights into the host factors that mediate fungal spore adhesion. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Helms, M W; Kemming, D; Pospisil, H; Vogt, U; Buerger, H; Korsching, E; Liedtke, C; Schlotter, C M; Wang, A; Chan, S Y; Brandt, B H
2008-09-02
Gains of chromosomes 7p and 8q are associated with poor prognosis among oestrogen receptor-positive (ER+) stage I/II breast cancers. To identify transcriptional changes associated with this breast cancer subtype, we applied the suppression subtractive hybridisation method to analyse differentially expressed genes among six breast tumours with and without chromosomal 7p and 8q gains. Identified mRNAs were validated by real-time RT-PCR in tissue samples obtained from 186 patients with stage I/II breast cancer. Advanced statistical methods were applied to identify associations of mRNA expression with distant metastasis-free survival (DMFS). mRNA expression of the key enzyme of cholesterol biosynthesis, squalene epoxidase (SQLE, chromosomal location 8q24.1), was associated with ER+ 7p+/8q+ breast cancer. DMFS in stage I/II breast cancer cases was significantly inversely related to SQLE mRNA in multivariate Cox analysis (P<0.001) in two independent patient cohorts of 160 patients each. The clinically favourable group associated with low SQLE mRNA expression could be further divided by mRNA expression levels of the oestrogen-regulated zinc transporter LIV-1. The data strongly support that SQLE mRNA expression might indicate high-risk ER+ stage I/II breast cancers. Further studies on tumour tissue from patients with standardised treatment, for example with tamoxifen, may validate the role of SQLE as a novel diagnostic parameter for ER+ early-stage breast cancers.
Fernández, Elena; Vidal, Lorena; Iniesta, Jesús; Metters, Jonathan P; Banks, Craig E; Canals, Antonio
2014-03-01
A novel method is reported, whereby screen-printed electrodes (SPELs) are combined with dispersive liquid-liquid microextraction. In-situ ionic liquid (IL) formation was used as an extractant phase in the microextraction technique and proved to be a simple, fast and inexpensive analytical method. This approach uses miniaturized systems both in sample preparation and in the detection stage, helping to develop environmentally friendly analytical methods and portable devices to enable rapid and onsite measurement. The microextraction method is based on a simple metathesis reaction, in which a water-immiscible IL (1-hexyl-3-methylimidazolium bis[(trifluoromethyl)sulfonyl]imide, [Hmim][NTf2]) is formed from a water-miscible IL (1-hexyl-3-methylimidazolium chloride, [Hmim][Cl]) and an ion-exchange reagent (lithium bis[(trifluoromethyl)sulfonyl]imide, LiNTf2) in sample solutions. The explosive 2,4,6-trinitrotoluene (TNT) was used as a model analyte to develop the method. The electrochemical behavior of TNT in [Hmim][NTf2] has been studied in SPELs. The extraction method was first optimized by use of a two-step multivariate optimization strategy, using Plackett-Burman and central composite designs. The method was then evaluated under optimum conditions and a good level of linearity was obtained, with a correlation coefficient of 0.9990. Limits of detection and quantification were 7 μg L(-1) and 9 μg L(-1), respectively. The repeatability of the proposed method was evaluated at two different spiking levels (20 and 50 μg L(-1)), and coefficients of variation of 7 % and 5 % (n = 5) were obtained. Tap water and industrial wastewater were selected as real-world water samples to assess the applicability of the method.
Offline signature verification using a convolutional Siamese network
NASA Astrophysics Data System (ADS)
Xing, Zi-Jian; Yin, Fei; Wu, Yi-Chao; Liu, Cheng-Lin
2018-04-01
This paper presents an offline signature verification approach using a convolutional Siamese neural network. Unlike existing methods, which treat feature extraction and metric learning as two independent stages, we adopt a deep-learning-based framework that combines the two stages and can be trained end-to-end. Experimental results on two public offline databases (GPDSsynthetic and CEDAR) demonstrate the superiority of our method on the offline signature verification problem.
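The end-to-end metric learning described above is typically trained with a pairwise loss. A minimal sketch of the contrastive loss commonly used with Siamese networks (the 3-d embeddings below are toy values standing in for CNN branch outputs, and the paper may use a different loss):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(emb1, emb2, same, margin=1.0):
    """Contrastive loss (Hadsell-style): pull genuine pairs together,
    push dissimilar pairs apart up to the margin."""
    d = euclidean(emb1, emb2)
    if same:                                # genuine-genuine pair
        return d ** 2
    return max(0.0, margin - d) ** 2        # genuine-forgery pair

# Toy embeddings (hypothetical)
genuine = [0.1, 0.2, 0.3]
close = [0.1, 0.25, 0.3]
far = [0.9, 0.9, 0.9]
```

A genuine pair with nearby embeddings incurs a small loss; a forgery already beyond the margin incurs none.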
NASA Technical Reports Server (NTRS)
Kalton, G.
1983-01-01
A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratio of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, they also determine the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient.
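The allocation problem above can be illustrated with the textbook results for two-stage cluster sampling: the design effect 1 + (b - 1)ρ and the cost-optimal subsample size sqrt((c1/c2)(1 - ρ)/ρ), where c1 and c2 are per-cluster and per-element costs and ρ is the intracluster correlation. A sketch under those assumptions (these are the standard mean-estimation formulas, not necessarily the regression-coefficient formulas derived in the report; the cost figures are hypothetical):

```python
import math

def design_effect(b, rho):
    """Variance inflation (Kish's deff) for a cluster sample with
    b elements per cluster and intracluster correlation rho."""
    return 1 + (b - 1) * rho

def optimum_cluster_size(c1, c2, rho):
    """Optimum elements per primary unit under total cost C = n*c1 + n*b*c2."""
    return math.sqrt((c1 / c2) * (1 - rho) / rho)

# Hypothetical survey costs and a modest intracluster correlation
b_opt = optimum_cluster_size(c1=400.0, c2=10.0, rho=0.05)
deff = design_effect(b=round(b_opt), rho=0.05)
```

The design effect then scales the simple-random-sampling variance of an estimate upward for the clustered design.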
NASA Astrophysics Data System (ADS)
Xiang, Li; Wang, Jingjuan; Zhang, Guijun; Rong, Lixin; Wu, Haozhong; Sun, Suqin; Guo, Yizhen; Yang, Yanfang; Lu, Lina; Qu, Lei
2016-11-01
Rhizoma Chuanxiong (CX) and Radix Angelica sinensis (DG) are important Traditional Chinese Medicines (TCM) in frequent clinical use. Both come from the Umbelliferae family and have very similar chemical constituents. Discriminating them with chromatographic methods such as high-performance liquid chromatography (HPLC) and gas chromatography (GC) is complicated, time-consuming and laborious, so a fast, practical and effective identification method for the two herbs is urgently needed in TCM quality research. In this paper, using three-stage infrared spectroscopy (Fourier transform infrared spectroscopy (FT-IR), second-derivative infrared spectroscopy (SD-IR) and two-dimensional correlation infrared spectroscopy (2D-IR)), we analyzed and discriminated CX, DG and their different extracts (aqueous, alcoholic and petroleum ether extracts). In FT-IR, the spectra of all CX and DG samples appeared similar, but each had unique macroscopic fingerprints for identification. By comparison with the spectrum of sucrose and by similarity calculation, we found that the sucrose content of DG raw material was higher than that of CX raw material. In the alcoholic extracts, the main difference was that the peak at 1743 cm-1 in the CX extract was clearly stronger than the peak at the same position in the DG extract. In the petroleum ether extracts, similarity calculation indicated that CX contained much more ligustilide than DG. SD-IR amplified subtle differences and resolved peaks that overlap in FT-IR. In the range of 1100-1175 cm-1, the SD-IR spectra of DG showed six peaks whose intensity, shape and location were similar to those of sucrose, whereas only two peaks, entirely different from sucrose in shape and relative intensity, were observed for CX.
This result was consistent with the FT-IR findings. Several characteristic fingerprints undetected in the FT-IR and SD-IR spectra were further disclosed by the 2D-IR spectra. In the range of 1120-1500 cm-1, the FT-IR and SD-IR spectra of the aqueous extracts of CX and DG were almost identical and hard to discriminate, but their 2D-IR spectra differed markedly. These findings indicate that three-stage infrared spectroscopy can not only identify the main compositions of the two medicinal materials and their extracts, but also compare differences in the categories and quantities of chemical constituents between similar samples. In conclusion, three-stage infrared spectroscopy can identify the two similar TCMs (CX and DG) quickly and effectively.
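The SD-IR step above rests on differentiating the spectrum to sharpen overlapped bands. A minimal sketch using a plain central-difference second derivative on synthetic Gaussian bands (real SD-IR software presumably uses a smoothed derivative such as Savitzky-Golay, and the bands below are hypothetical, not CX/DG data):

```python
import math

def second_derivative(intensities, step=1.0):
    """Central-difference second derivative of a spectrum sampled on a
    uniform wavenumber grid; SD-IR sharpens overlapped FT-IR bands."""
    return [
        (intensities[i - 1] - 2 * intensities[i] + intensities[i + 1]) / step ** 2
        for i in range(1, len(intensities) - 1)
    ]

# Two overlapped Gaussian bands (hypothetical absorbances)
grid = [i * 1.0 for i in range(60)]
spec = [math.exp(-((x - 25) / 6) ** 2) + 0.8 * math.exp(-((x - 35) / 6) ** 2) for x in grid]
sd = second_derivative(spec)
```

Band centres show up as negative lobes in the second derivative, which is what makes shoulders easier to locate.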
Khan, Aftab A; Al-Kheraif, Abdulaziz A; Al-Shehri, Abdullah M; Säilynoja, Eija; Vallittu, Pekka K
2018-02-01
This laboratory study aimed to characterize the semi-interpenetrating polymer network (semi-IPN) of fiber-reinforced composite (FRC) prepregs that had been stored for up to two years before curing. Resin-impregnated prepregs of everStick C&B (StickTech-GC, Turku, Finland) glass FRC were stored at 4°C for various lengths of time: two weeks, 6 months and 2 years. Five samples from each time group were prepared with a light-initiated free-radical polymerization method and embedded along their long axis in self-curing acrylic. Nanoindentation readings from the top surface toward the core of the sample were made for five test groups, named "stages 1-5". To evaluate the nanohardness and modulus of elasticity of the polymer matrix, a total of 4 slices (100 µm each) were cut from stage 1 to stage 5. Differences in nanohardness values were evaluated with analysis of variance (ANOVA), and a regression model was used to estimate the contribution of the material's different stages to the total variability in the nanomechanical properties. Additional chemical and thermal characterization of the polymer matrix structure of the FRC was carried out. It was hypothesized that the time of storage may influence the semi-IPN polymer structure of the cured FRC. The two-way ANOVA test revealed that storage time had no significant effect on the nanohardness of the FRC (p = 0.374). However, a highly significant difference in nanohardness values was observed between the different stages of the FRC (P<0.001). The regression coefficient suggests nanohardness increased on average by 0.039 GPa for every storage group. The increased nanohardness values in the core region of the 6-month and 2-year stored prepregs might be due to phase segregation of components of the semi-IPN structure of the FRC prepregs before their use. This may have an influence on the surface bonding properties of the cured FRC. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mixture-based gatekeeping procedures in adaptive clinical trials.
Kordzakhia, George; Dmitrienko, Alex; Ishida, Eiji
2018-01-01
Clinical trials with data-driven decision rules often pursue multiple clinical objectives such as the evaluation of several endpoints or several doses of an experimental treatment. These complex analysis strategies give rise to "multivariate" multiplicity problems with several components or sources of multiplicity. A general framework for defining gatekeeping procedures in clinical trials with adaptive multistage designs is proposed in this paper. The mixture method is applied to build a gatekeeping procedure at each stage and inferences at each decision point (interim or final analysis) are performed using the combination function approach. An advantage of utilizing the mixture method is that it enables powerful gatekeeping procedures applicable to a broad class of settings with complex logical relationships among the hypotheses of interest. Further, the combination function approach supports flexible data-driven decisions such as a decision to increase the sample size or remove a treatment arm. The paper concludes with a clinical trial example that illustrates the methodology by applying it to develop an adaptive two-stage design with a mixture-based gatekeeping procedure.
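The combination function approach mentioned above can be illustrated with the widely used inverse-normal combination of stage-wise p-values (a sketch with hypothetical p-values and equal pre-specified weights; the adaptive trial example in the paper may use different weights or a different combination function):

```python
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=0.5):
    """Combine stage-wise p-values via the inverse-normal combination
    function with pre-specified weights sqrt(w1), sqrt(1 - w1)."""
    nd = NormalDist()
    z = (w1 ** 0.5) * nd.inv_cdf(1 - p1) + ((1 - w1) ** 0.5) * nd.inv_cdf(1 - p2)
    return 1 - nd.cdf(z)

# Hypothetical interim and final-stage p-values, equal weights
p_combined = inverse_normal_combination(0.04, 0.03)
```

Because the weights are fixed in advance, the combined test keeps its type I error level even when the stage-2 sample size is changed based on interim data.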
Jeffrey P. Prestemon; Geoffrey H. Donovan
2008-01-01
Making input decisions under climate uncertainty often involves two-stage methods that use expensive and opaque transfer functions. This article describes an alternative, single-stage approach to such decisions using forecasting methods. The example shown is for preseason fire suppression resource contracting decisions faced by the United States Forest Service. Two-...
Toxicity of airborne dust as an indicator of moisture problems in school buildings.
Tirkkonen, Jenni; Täubel, Martin; Leppänen, Hanna; Peltonen, Matti; Lindsley, William; Chen, Bean T; Hyvärinen, Anne; Hirvonen, Maija-Riitta; Huttunen, Kati
2017-02-01
Moisture-damaged indoor environments are thought to increase the toxicity of indoor air particulate matter (PM), suggesting that a toxicological assay could be used as a method for recognizing buildings with indoor air problems. We aimed to test whether our approach of analyzing the toxicity of actively collected indoor air PM in vitro differentiates moisture-damaged from non-damaged school buildings. We collected active air samples with NIOSH Bioaerosol Cyclone Samplers from moisture-damaged (index) and non-damaged (reference) school buildings (4 + 4). The teachers and pupils of the schools were administered a symptom questionnaire. Five samples of two size fractions [Stage 1 (>1.9 μm) and Stage 2 (1-1.9 μm)] were collected from each school. Mouse RAW264.7 macrophages were exposed to the collected PM for 24 h and subsequently analyzed for changes in cell metabolic activity and production of nitric oxide (NO), tumor necrosis factor (TNF)-α and interleukin (IL)-6. The teachers working in the moisture-damaged schools reported respiratory symptoms such as cough (p = 0.01) and shortness of breath (p = 0.01) more often than teachers from reference schools. Toxicity of the PM samples as such did not differentiate index from reference buildings, but the toxicity adjusted for the amount of particles tended to be higher in moisture-damaged schools. Further development of the method will require identification of other confounding factors in addition to the necessity of adjusting for differences in particle counts between samples.
Heggendorn, Fabiano Luiz; Gonçalves, Lucio Souza; Dias, Eliane Pedra; de Oliveira Freitas Lione, Viviane; Lutterbach, Márcia Teresa Soares
2015-08-01
This study assessed the biocorrosive capacity of two bacteria, Desulfovibrio desulfuricans and Desulfovibrio fairfieldensis, on endodontic files, as a preliminary step in the development of a biopharmaceutical to facilitate the removal of endodontic file fragments from root canals. In the first stage, the corrosive potential of the artificial saliva medium (ASM), modified Postgate E medium (MPEM), 2.5 % sodium hypochlorite (NaOCl) solution and white medium (WM), without the inoculation of bacteria, was assessed by immersion assays. In the second stage, test samples were inoculated with the two species of sulphate-reducing bacteria (SRB) in ASM and modified artificial saliva medium (MASM). In the third stage, test samples were inoculated with the same species in MPEM, ASM and MASM. All test samples were viewed under an Alicona infinite focus microscope. No test sample became corroded when immersed only in media, without bacteria. With the exception of one test sample, those inoculated with bacteria in ASM and MASM showed no evidence of corrosion. Fifty percent of the test samples demonstrated a greater intensity of biocorrosion when compared with the initial assays. Desulfovibrio desulfuricans and D. fairfieldensis are capable of promoting biocorrosion of the steel constituent of endodontic files. This study describes the initial development of a biopharmaceutical to facilitate the removal of endodontic file fragments from root canals, which could be applied successfully in endodontic therapy in order to avoid parendodontic surgery or even tooth loss in such events.
Chowdari, K V; Northup, A; Pless, L; Wood, J; Joo, Y H; Mirnics, K; Lewis, D A; Levitt, P R; Bacanu, S-A; Nimgaonkar, V L
2007-04-01
Many candidate gene association studies have evaluated incomplete, unrepresentative sets of single nucleotide polymorphisms (SNPs), producing non-significant results that are difficult to interpret. Using a rapid, efficient strategy designed to investigate all common SNPs, we tested associations between schizophrenia and two positional candidate genes: ACSL6 (Acyl-Coenzyme A synthetase long-chain family member 6) and SIRT5 (silent mating type information regulation 2 homologue 5). We initially evaluated the utility of DNA sequencing traces for estimating SNP allele frequencies in pooled DNA samples. The mean variances for the DNA sequencing estimates were acceptable and comparable to other published methods (mean variance: 0.0008, range 0-0.0119). Using pooled DNA samples from cases with schizophrenia/schizoaffective disorder (Diagnostic and Statistical Manual of Mental Disorders edition IV criteria) and controls (n=200 in each group), we next sequenced all exons, introns and flanking upstream/downstream sequences for ACSL6 and SIRT5. Among 69 identified SNPs, case-control allele frequency comparisons revealed nine suggestive associations (P<0.2). Each of these SNPs was next genotyped in the individual samples composing the pools. A suggestive association with rs11743803 at ACSL6 remained (allele-wise P=0.02), with diminished evidence in an extended sample (448 cases, 554 controls, P=0.062). In conclusion, we propose a multi-stage method for comprehensive, rapid, efficient and economical genetic association analysis that enables simultaneous SNP detection and allele frequency estimation in large samples. This strategy may be particularly useful for research groups lacking access to high-throughput genotyping facilities. Our analyses did not yield convincing evidence for associations of schizophrenia with ACSL6 or SIRT5.
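The pooling step above estimates allele frequencies from sequencing-trace peak heights. A hedged sketch of the arithmetic (peak heights are hypothetical; only the binomial sampling variance of the pool is shown, whereas the reported mean variance of 0.0008 also reflects trace measurement error):

```python
def pooled_allele_frequency(peak_a, peak_b):
    """Allele frequency estimate from the two sequencing-trace peak heights."""
    return peak_a / (peak_a + peak_b)

def binomial_sampling_variance(p, n_individuals):
    """Sampling variance of the pooled estimate for a diploid pool
    (2 chromosomes per individual); measurement error not included."""
    n_chrom = 2 * n_individuals
    return p * (1 - p) / n_chrom

# Hypothetical trace heights for a pool of 200 individuals
p_hat = pooled_allele_frequency(peak_a=640.0, peak_b=960.0)
var = binomial_sampling_variance(p_hat, 200)
```

Comparing case-pool and control-pool estimates against these variances is what yields the screening p-values used to pick SNPs for individual genotyping.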
Ibrahim, Mohsen; Menna, Cecilia; Andreetti, Claudio; Ciccone, Anna Maria; D'Andrilli, Antonio; Maurizi, Giulio; Poggi, Camilla; Vanni, Camilla; Venuta, Federico; Rendina, Erino Angelo
2013-01-01
OBJECTIVES Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed through either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. METHODS From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140, two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean time interval of 4 months between the procedures (two-stage group). RESULTS The mean postoperative follow-up period was 12.5 (range: 1–24 months). After surgery, hands and axillae of all patients were dry and warm. Sixteen (12%) patients of the one-stage group and 15 (11%) of the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients of the one-stage group and in 11 (8%) of the two-stage group. Compensatory sweating occurred in 25 (19%) patients of the one-stage group and in 6 (4%) of the two-stage group (P = 0.0001). No patients developed Horner's syndrome. CONCLUSIONS Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, improving permanently the quality of life in patients with palmar and axillary hyperhidrosis. PMID:23442937
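The headline comparison above (compensatory sweating in 25/130 vs. 6/140 patients, P = 0.0001) can be reproduced with a standard two-proportion z-test; this is a sketch and an assumption, since the authors do not state which test they applied.

```python
import math
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Compensatory sweating: 25/130 (one-stage) vs. 6/140 (two-stage)
p_value = two_proportion_z(25, 130, 6, 140)
```

The resulting p-value is on the order of 10^-4, consistent with the reported P = 0.0001.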
EPA's methods for analyzing PFAS in environmental media are in various stages of development. This fact sheet summarizes EPA's analytical methods development for groundwater, surface water, wastewater, and solids, including soils, sediments, and biosolids
May one-stage exchange for Candida albicans peri-prosthetic infection be successful?
Jenny, J-Y; Goukodadja, O; Boeri, C; Gaudias, J
2016-02-01
Fungal infection of a total joint arthroplasty has a low incidence but is generally considered more difficult to cure than bacterial infection. As for bacterial infection, two-stage exchange is considered the gold standard of treatment. We report two cases of one-stage total joint exchange for fungal peri-prosthetic infection with Candida albicans, in which the responsible pathogen was identified only on intraoperative samples. This situation can be considered a one-stage exchange for fungal peri-prosthetic infection without preoperative identification of the responsible organism, which is considered to carry a poor prognosis. Both cases were free of infection after two years. One-stage revision has several potential advantages over two-stage revision, including shorter hospital stay and rehabilitation, no interim period with significant functional impairment, shorter antibiotic treatment, better functional outcome and probably lower costs. We suggest that one-stage revision for C. albicans peri-prosthetic infection may be successful even without preoperative fungal identification. Level IV-Historical cases. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
Characterization of Infrastructure Materials using Nonlinear Ultrasonics
NASA Astrophysics Data System (ADS)
Liu, Minghe
In order to improve the safety, reliability, cost, and performance of civil and mechanical structures/components, it is necessary to develop techniques that are capable of characterizing and quantifying the amount of distributed damage in engineering materials before any detectable discontinuities (cracks, delaminations, voids, etc.) appear. In this dissertation, novel nonlinear ultrasonic NDE methods are developed and applied to characterize cumulative damage such as fatigue damage in metallic materials and degradation of cement-based materials due to chemical reactions. First, nonlinear Rayleigh surface waves are used to measure the near-surface residual stresses in shot-peened aluminum alloy (AA 7075) samples. Results show that the nonlinear Rayleigh wave is very sensitive to near-surface residual stresses, and has the potential to quantitatively detect them. Second, a novel two-wave mixing method is theoretically developed and numerically verified. This method is then successfully applied to detect the fatigue damage in aluminum alloy (AA 6061) samples subjected to monotonic compression. In addition to its high sensitivity to fatigue damage, this collinear wave mixing method allows the measurement over a specific region of interest in the specimen, and this capability makes it possible to obtain spatial distribution of fatigue damage through the thickness direction of the sample by simply timing the transducers. Third, the nonlinear wave mixing method is used to characterize the degradation of cement-based materials caused by alkali-silica reaction (ASR). It is found that the nonlinear ultrasonic method is sensitive to detect ASR damage at very early stage, and has the potential to identify the different damage stages. Finally, a micromechanics-based chemo-mechanical model is developed which relates the acoustic nonlinearity parameter to ASR damage. 
This model provides a way to quantitatively predict the changes in the acoustic nonlinearity parameter due to ASR damage, which can be used to guide experimental measurements for nondestructive evaluation of ASR damage.
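The acoustic nonlinearity parameter referred to above is, in its classical harmonic-generation form, proportional to A2/A1² (second-harmonic amplitude over the squared fundamental amplitude). A sketch under that common formulation, which is an assumption here: the dissertation itself uses wave mixing, and the amplitudes below are hypothetical.

```python
def relative_beta(a1, a2):
    """Relative acoustic nonlinearity parameter from the fundamental (a1)
    and second-harmonic (a2) amplitudes: beta is proportional to a2 / a1**2."""
    return a2 / a1 ** 2

# Hypothetical amplitudes for an intact sample vs. one with accumulated damage
beta_intact = relative_beta(a1=1.00, a2=0.010)
beta_damaged = relative_beta(a1=0.95, a2=0.018)
```

Tracking the relative parameter over time or load cycles is what makes early-stage damage visible before linear ultrasonic indicators change.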
Min, J W; Moon, A; Lubben, J E
2005-05-01
The purpose of this study is to examine racial/ethnic differences in the change of psychological distress as measured by CES-D over time and its associated factors between older Korean immigrants and non-Hispanic White elders, based on a social stress perspective. Data come from a two-wave panel survey of 172 older Korean immigrants and 157 non-Hispanic White elders, with a follow-up period of 12 to 15 months. The sample was drawn from a three-stage probability sampling method. Ordinary least square regressions in a hierarchical process and change score method were used to analyze the two-wave panel data. Older Korean immigrants reported higher levels of psychological distress than the non-Hispanic White elderly at both Time 1 and Time 2. Changes in self-assessed health status and functional limitations were significantly associated with change in psychological distress for both ethnic groups. Increased social support significantly decreased psychological distress at Time 2, for older Korean immigrants only. This study discusses practice and policy implications for service and interventions for older immigrants to assist their adjustment to a host society.
Manzanero, Silvia; Kozlovskaia, Maria; Vlahovich, Nicole
2018-01-01
Background With the increasing capacity for remote collection of both data and samples for medical research, a thorough assessment is needed to determine the association of population characteristics and recruitment methodologies with response rates. Objective The aim of this research was to assess population representativeness in a two-stage study of health and injury in recreational runners, which consisted of an epidemiological arm and genetic analysis. Methods The cost and success of various classical and internet-based methods were analyzed, and demographic representativeness was assessed for recruitment to the epidemiological survey, reported willingness to participate in the genetic arm of the study, actual participation, sample return, and approval for biobank storage. Results A total of 4965 valid responses were received, of which 1664 were deemed eligible for genetic analysis. Younger age showed a negative association with initial recruitment rate, expressed willingness to participate in genetic analysis, and actual participation. Additionally, female sex was associated with higher initial recruitment rates, and ethnic origin impacted willingness to participate in the genetic analysis (all P<.001). Conclusions The sharp decline in retention through the different stages of the study in young respondents suggests the necessity to develop specific recruitment and retention strategies when investigating a young, physically active population. PMID:29792293
Multimodal manifold-regularized transfer learning for MCI conversion prediction.
Cheng, Bo; Liu, Mingxia; Suk, Heung-Il; Shen, Dinggang; Zhang, Daoqiang
2015-12-01
As the early stage of Alzheimer's disease (AD), mild cognitive impairment (MCI) has a high chance of converting to AD. Effective prediction of such conversion from MCI to AD is of great importance for early diagnosis of AD and also for evaluating AD risk pre-symptomatically. Unlike most previous methods, which used only samples from a target domain to train a classifier, in this paper we propose a novel multimodal manifold-regularized transfer learning (M2TL) method that jointly utilizes samples from another domain (e.g., AD vs. normal controls (NC)) as well as unlabeled samples to boost the performance of MCI conversion prediction. Specifically, the proposed M2TL method includes two key components. The first is a kernel-based maximum mean discrepancy criterion, which helps eliminate the potential negative effect induced by the distributional difference between the auxiliary domain (i.e., AD and NC) and the target domain (i.e., MCI converters (MCI-C) and MCI non-converters (MCI-NC)). The second is a semi-supervised multimodal manifold-regularized least squares classification method, in which the target-domain samples, the auxiliary-domain samples and the unlabeled samples can be jointly used to train our classifier. Furthermore, with the integration of a group sparsity constraint into our objective function, the proposed M2TL is capable of selecting informative samples to build a robust classifier. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database validate the effectiveness of the proposed method, which significantly improves the classification accuracy for MCI conversion prediction to 80.1 %, outperforming state-of-the-art methods.
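The first component above, the kernel-based maximum mean discrepancy (MMD), compares the feature distributions of the auxiliary and target domains. A minimal sketch with a Gaussian kernel and toy 2-d features (the paper's kernel choice and feature dimensionality differ):

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2 * sigma ** 2))

def mmd_squared(X, Y, sigma=1.0):
    """Biased estimate of squared maximum mean discrepancy between
    samples X and Y under a Gaussian kernel."""
    k_xx = sum(gaussian_kernel(a, b, sigma) for a in X for b in X) / len(X) ** 2
    k_yy = sum(gaussian_kernel(a, b, sigma) for a in Y for b in Y) / len(Y) ** 2
    k_xy = sum(gaussian_kernel(a, b, sigma) for a in X for b in Y) / (len(X) * len(Y))
    return k_xx + k_yy - 2 * k_xy

# Toy 2-d features standing in for auxiliary (AD/NC) and target (MCI) domains
X = [[0.0, 0.0], [0.1, -0.1], [-0.1, 0.1]]
Y = [[1.0, 1.0], [0.9, 1.1], [1.1, 0.9]]
gap = mmd_squared(X, Y)
same = mmd_squared(X, X)
```

A transfer-learning objective penalizes this discrepancy so that auxiliary-domain samples help rather than hurt the target classifier.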
[TRAUMATIC INTRAORGANIC HEPATIC AND SPLENIC HEMATOMAS].
Timerbulatov, V M; Khalikov, A A; Timerbulatov, Sh V; Verzakova, I V; Amirova, A M; Smyr, R A
2015-01-01
An analysis was made of the results of applying a complex of diagnostic methods to intraorganic hepatic and splenic hematomas; variants of the same methods were also used to estimate the age of the injury. Ultrasound, CT, MR imaging, videolaparoscopy, angiography, Doppler ultrasonography, impedance measurement, and biochemical, laboratory and cytological study of puncture samples from the hematomas were applied for this purpose in 33 patients. According to the authors, hematomas evolved in 3 stages, each characterized by specific findings on these investigations. Staging the hematomas, or their evolution, allowed the age of the injury to be established.
NASA Astrophysics Data System (ADS)
Bhatt, Manish; Montagnon, Emmanuel; Destrempes, François; Chayer, Boris; Kazemirad, Siavash; Cloutier, Guy
2018-03-01
Deep vein thrombosis is a common vascular disease that can lead to pulmonary embolism and death. The early diagnosis and clot age staging are important parameters for reliable therapy planning. This article presents an acoustic radiation force induced resonance elastography method for the viscoelastic characterization of clotting blood. The physical concept of this method relies on the mechanical resonance of the blood clot occurring at specific frequencies. Resonances are induced by focusing ultrasound beams inside the sample under investigation. Coupled to an analytical model of wave scattering, the ability of the proposed method to characterize the viscoelasticity of a mimicked venous thrombosis in the acute phase is demonstrated. Experiments with a gelatin-agar inclusion sample of known viscoelasticity are performed for validation and establishment of the proof of concept. In addition, an inversion method is applied in vitro for the kinetic monitoring of the blood coagulation process of six human blood samples obtained from two volunteers. The computed elasticity and viscosity values of blood samples at the end of the 90 min kinetics were estimated at 411 ± 71 Pa and 0.25 ± 0.03 Pa · s for volunteer #1, and 387 ± 35 Pa and 0.23 ± 0.02 Pa · s for volunteer #2, respectively. The proposed method allowed reproducible time-varying thrombus viscoelastic measurements from samples having physiological dimensions.
Chriskos, Panteleimon; Frantzidis, Christos A; Gkivogkli, Polyxeni T; Bamidis, Panagiotis D; Kourtidou-Papadeli, Chrysoula
2018-01-01
Sleep staging, the process of assigning labels to epochs of sleep according to the sleep stage to which they belong, is an arduous, time-consuming and error-prone process, as the initial recordings are quite often polluted by noise from different sources. To properly analyze such data and extract clinical knowledge, noise components must be removed or alleviated. In this paper a pre-processing and subsequent sleep staging pipeline for the analysis of electroencephalographic sleep signals is described. Two novel methods of functional connectivity estimation (Synchronization Likelihood/SL and Relative Wavelet Entropy/RWE) are comparatively investigated for automatic sleep staging through manually pre-processed electroencephalographic recordings. A multi-step process that renders signals suitable for further analysis is initially described. Then, two methods that rely on extracting synchronization features from electroencephalographic recordings to achieve computerized sleep staging are proposed, based on bivariate features which provide a functional overview of the brain network, contrary to most proposed methods that rely on extracting univariate time and frequency features. Annotation of sleep epochs is achieved through the presented feature extraction methods by training classifiers, which are in turn able to accurately classify new epochs. Analysis of data from sleep experiments in a randomized, controlled bed-rest study, organized by the European Space Agency and conducted in the "ENVIHAB" facility of the Institute of Aerospace Medicine at the German Aerospace Center (DLR) in Cologne, Germany, attains high accuracy rates, over 90%, against ground truth from manual sleep staging by two experienced sleep experts. It can therefore be concluded that the above feature extraction methods are suitable for semi-automatic sleep staging.
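Relative Wavelet Entropy, one of the two connectivity measures above, is a Kullback-Leibler-style distance between normalized wavelet energy distributions. A minimal sketch (the per-band energies below are hypothetical EEG values, not data from the study):

```python
import math

def relative_wavelet_entropy(p, q):
    """Relative wavelet entropy between two normalized wavelet energy
    distributions (a Kullback-Leibler-style disorder measure)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical per-band relative wavelet energies for two EEG epochs
wake = [0.10, 0.15, 0.25, 0.50]   # energy concentrated in fast bands
deep = [0.55, 0.25, 0.15, 0.05]   # energy concentrated in slow bands
```

RWE is zero only when the two distributions coincide, so large values flag epochs whose spectral makeup has shifted between channels or stages.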
A comment on the PCAST report: Skip the "match"/"non-match" stage.
Morrison, Geoffrey Stewart; Kaye, David H; Balding, David J; Taylor, Duncan; Dawid, Philip; Aitken, Colin G G; Gittelson, Simone; Zadora, Grzegorz; Robertson, Bernard; Willis, Sheila; Pope, Susan; Neil, Martin; Martire, Kristy A; Hepler, Amanda; Gill, Richard D; Jamieson, Allan; de Zoete, Jacob; Ostrum, R Brent; Caliebe, Amke
2017-03-01
This letter comments on the report "Forensic science in criminal courts: Ensuring scientific validity of feature-comparison methods" recently released by the President's Council of Advisors on Science and Technology (PCAST). The report advocates a procedure for evaluation of forensic evidence that is a two-stage procedure in which the first stage is "match"/"non-match" and the second stage is empirical assessment of sensitivity (correct acceptance) and false alarm (false acceptance) rates. Almost always, quantitative data from feature-comparison methods are continuously-valued and have within-source variability. We explain why a two-stage procedure is not appropriate for this type of data, and recommend use of statistical procedures which are appropriate. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Hero, Nikša; Vengust, Rok; Topolovec, Matevž
2017-01-01
Study Design. A retrospective, single-center, institutional review board-approved study. Objective. Two methods of operative treatment were compared in order to evaluate whether a two-stage approach is justified for correction of larger idiopathic scoliosis curves. Two-stage surgery combined an anterior approach in the first operation with posterior instrumentation and correction in the second operation; one-stage surgery included only posterior instrumentation and correction. Summary of Background Data. Studies comparing the two-stage approach with a posterior-only approach are rather scarce, with shorter follow-up and a lack of clinical data. Methods. Three hundred forty-eight patients with idiopathic scoliosis were operated on using Cotrel–Dubousset (CD) hybrid instrumentation with pedicle screws and hooks. Only patients with curves of 61° or more were analyzed, divided into two groups: two-stage surgery (N = 30) and one-stage surgery (N = 46). Radiographic parameters, as well as duration of operation, hospitalization time, number of segments included in fusion, and clinical outcome, were analyzed. Results. No statistically significant difference in correction was observed between the two-stage group (average correction 69%) and the posterior-only group (average correction 66%). However, there were statistically significant differences regarding hospitalization time, duration of surgery, and the number of instrumented segments. Conclusion. Two-stage surgery has only a limited advantage in terms of postoperative correction angle compared with the posterior approach. Posterior instrumentation and correction is satisfactory, especially taking into account that the patient is subjected to only one surgery. Level of Evidence: 3 PMID:28125525
Fears of institutionalized mentally retarded adults.
Sternlicht, M
1979-01-01
The patterns of fears of institutionalized mentally retarded adults were studied in a sample of 12 moderately retarded men and women between the ages of 21 and 49. The direct questioning method was employed. Two interviews were held, two weeks apart; the first interview elicited the Ss' fears, while the second concerned the fears of their friends. A total of 146 responses were obtained, and these were categorized according to the types of fears: supernatural-natural events, animals, physical injury, psychological stress, egocentric responses, and no fears. The Ss displayed a higher percentage of fears in the preoperational stage than in the concrete operational stage. In a comparison of male to female fears, only one category, that of fears of animals, reached significance. The study suggested that the same developmental trend of fears that appears in normal children appears in the retarded as well, and these fears follow Piaget's levels of cognitive development, proceeding from egocentric perceptions of causality to realistic cause-and-effect thinking.
Magnetic gauge instrumentation on the LANL gas-driven two-stage gun
NASA Astrophysics Data System (ADS)
Alcon, R. R.; Sheffield, S. A.; Martinez, A. R.; Gustavsen, R. L.
1998-07-01
The LANL gas-driven two-stage gun was designed and built to do initiation studies on insensitive high explosives as well as equation of state and reaction experiments on other materials. The preferred method of measuring reaction phenomena involves the use of in-situ magnetic particle velocity gauges. In order to accommodate this type of gauging in our two-stage gun, it has a 50-mm-diameter launch tube. We have used magnetic gauging on our 72-mm bore diameter single-stage gun for over 15 years and it has proven a very effective technique for all types of shock wave experiments, including those on high explosives. This technique has now been installed on our gas-driven two-stage gun. We describe the method used, as well as some of the difficulties that arose during the installation. Several magnetic gauge experiments have been completed on plastic materials. Waveforms obtained in some of the experiments will be discussed. Up to 10 in-situ particle velocity measurements can be made in a single experiment. This new technique is now working quite well, as is evidenced by the data. To our knowledge, this is the first time magnetic gauging has been used on a two-stage gun.
Diamond heteroepitaxial lateral overgrowth
Tang, Y. -H.; Bi, B.; Golding, B.
2015-02-24
A method of diamond heteroepitaxial lateral overgrowth is demonstrated which utilizes a photolithographic metal mask to pattern a thin (001) epitaxial diamond surface. Significant structural improvement was found, with a threading dislocation density reduced by two orders of magnitude at the top surface of a thick overgrown diamond layer. In the initial stage of overgrowth, a reduction of diamond Raman linewidth in the overgrown area was also realized. Thermally-induced stress and internal stress were determined by Raman spectroscopy of adhering and delaminated diamond films. As a result, the internal stress is found to decrease as sample thickness increases.
A note on sample size calculation for mean comparisons based on noncentral t-statistics.
Chow, Shein-Chung; Shao, Jun; Wang, Hansheng
2002-11-01
One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
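The noncentral-t power calculation this abstract describes can be illustrated with a short script. The sketch below (assuming SciPy; the function name and defaults are ours, not the authors') finds the smallest per-group n for a two-sided two-sample t-test by iterating n until the power under the noncentral t-distribution reaches the target:

```python
import numpy as np
from scipy import stats

def two_sample_n(delta, sigma, alpha=0.05, power=0.80):
    """Smallest per-group n for a two-sided two-sample t-test on equality,
    evaluating power via the noncentral t-distribution."""
    n = 2
    while True:
        df = 2 * n - 2
        nc = delta / (sigma * np.sqrt(2.0 / n))       # noncentrality parameter
        tcrit = stats.t.ppf(1 - alpha / 2, df)        # critical value
        # power = P(|T| > tcrit) when T ~ noncentral t(df, nc)
        p = 1 - stats.nct.cdf(tcrit, df, nc) + stats.nct.cdf(-tcrit, df, nc)
        if p >= power:
            return n
        n += 1

print(two_sample_n(delta=0.5, sigma=1.0))  # standardized effect 0.5 -> 64 per group
```

For a standardized effect size of 0.5, the search returns the familiar 64 subjects per group for 80% power at the two-sided 5% level.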
Figueiredo, Viviane Rossi; Cardoso, Paulo Francisco Guerreiro; Jacomelli, Márcia; Demarzo, Sérgio Eduardo; Palomino, Addy Lidvina Mejia; Rodrigues, Ascédio José; Terra, Ricardo Mingarini; Pego-Fernandes, Paulo Manoel; Carvalho, Carlos Roberto Ribeiro
2015-01-01
Endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) is a minimally invasive, safe and accurate method for collecting samples from mediastinal and hilar lymph nodes. This study focused on the initial results obtained with EBUS-TBNA for lung cancer and lymph node staging at three teaching hospitals in Brazil. This was a retrospective analysis of patients diagnosed with lung cancer and submitted to EBUS-TBNA for mediastinal lymph node staging. The EBUS-TBNA procedures, which involved the use of an EBUS scope, an ultrasound processor, and a compatible, disposable 22 G needle, were performed while the patients were under general anesthesia. Between January of 2011 and January of 2014, 149 patients underwent EBUS-TBNA for lymph node staging. The mean age was 66 ± 12 years, and 58% were male. A total of 407 lymph nodes were sampled by EBUS-TBNA. The most common types of lung neoplasm were adenocarcinoma (in 67%) and squamous cell carcinoma (in 24%). For lung cancer staging, EBUS-TBNA was found to have a sensitivity of 96%, a specificity of 100%, and a negative predictive value of 85%. We found EBUS-TBNA to be a safe and accurate method for lymph node staging in lung cancer patients.
Mastrangelo, F; Sberna, M T; Tettamanti, L; Cantatore, G; Tagliabue, A; Gherlone, E
2016-01-01
Vascular Endothelial Growth Factor (VEGF) and Nitric Oxide Synthase (NOS) expression were evaluated in human tooth germs at two different stages of embryogenesis, to clarify the role of angiogenesis during tooth tissue differentiation and growth. Seventy-two third molar germ specimens were selected during oral surgery: 36 were in the early stage and 36 in the later stage of tooth development. The samples were evaluated with semi-quantitative reverse transcription-polymerase chain reaction (RT-PCR), Western blot (WB), and immunohistochemical analyses. Western blot and immunohistochemical analysis showed a positive VEGF and NOS 1-2-3 reaction in all samples analysed. A strongly positive VEGF reaction that decreased from the early to the later stage of tooth germ development was observed in stellate reticulum cells and in ameloblast and odontoblast clusters. Comparable VEGF expression was observed in endothelial cells at the early and advanced stages of growth. NOS1 and NOS3 expression was markedly higher in stellate reticulum cells and in ameloblast and odontoblast clusters in the advanced stage compared to the early stage of development. NOS2 was absent or only moderately positive in all the different tissues, with positive NOS2 expression appearing in the advanced stage of tissue development compared to the early stage. VEGF and NOS are important mediators of angiogenesis during dental tissue development. The strong VEGF expression in stellate reticulum cells in the early stage of tooth development, compared to the later stage and the other cell types, suggests a critical role of the stellate reticulum during dental embryo-morphogenesis.
Scanning electron microscopy of high-pressure-frozen sea urchin embryos.
Walther, P; Chen, Y; Malecki, M; Zoran, S L; Schatten, G P; Pawley, J B
1993-12-01
High-pressure-freezing permits direct cryo-fixation of sea urchin embryos having a defined developmental state without the formation of large ice crystals. We have investigated preparation protocols for observing high-pressure-frozen and freeze-fractured samples in the scanning electron microscope. High-pressure-freezing was superior to other freezing protocols, because the whole bulk sample was reasonably well frozen and the overall three-dimensional shape of the embryos was well preserved. The samples were either dehydrated by freeze-substitution and critical-point-drying, or imaged in the partially hydrated state, using a cold stage in the SEM. During freeze-substitution the samples were stabilized by fixatives. The disadvantage of this method was that shrinking and extraction effects, caused by the removal of the water, could not be avoided. These disadvantages were avoided when the sample was imaged in the frozen-hydrated state using a cold-stage in the SEM. This would be the method of choice for morphometric studies. Frozen-hydrated samples, however, were very beam sensitive and many structures remained covered by the ice and were not visible. Frozen-hydrated samples were partially freeze-dried to make visible additional structures that had been covered by ice. However, this method also caused drying artifacts when too much water was removed.
Dynamics of faecal egg count in natural infection of Haemonchus spp. in Indian goats
Agrawal, Nimisha; Sharma, Dinesh Kumar; Mandal, Ajoy; Rout, Pramod Kumar; Kushwah, Yogendra Kumar
2015-01-01
Aim: The dynamics of faecal egg count (FEC) in Haemonchus spp.-infected goats of two Indian goat breeds, Jamunapari and Sirohi, under natural conditions were studied, and the effects of genetic and non-genetic factors were determined. Materials and Methods: A total of 1399 faecal samples from goats of the Jamunapari and Sirohi breeds, maintained at CIRG, Makhdoom, Mathura, India and naturally infected with Haemonchus spp., were processed and FEC was performed. Raw FEC data were transformed by log_e(FEC+100), and the transformed data (least squares mean of FEC [LFEC]) were analyzed using mixed model least squares analysis for fitting constants. Fixed effects of breed, physiological status, season and year of sampling, and the breed × physiological status interaction were used. Result: The incidence of Haemonchus spp. infection in Jamunapari and Sirohi does was 63.01% and 47.06%, respectively. The mean LFEC of Jamunapari and Sirohi does at different physiological stages, namely dry, early pregnant, late pregnant, early lactating and late lactating, were compared. Breed, season and year of sampling had a significant effect on FEC in Haemonchus spp. infection. The effect of the breed × physiological status interaction was also significant. Late pregnant does of both breeds had higher FEC compared to does in other stages. Conclusion: The breed difference in FEC was most pronounced post-kidding (early lactation), when a sharp change in FEC was observed. PMID:27046993
LP and NLP decomposition without a master problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuller, D.; Lan, B.
We describe a new algorithm for decomposition of linear programs and a class of convex nonlinear programs, together with theoretical properties and some test results. Its most striking feature is the absence of a master problem; the subproblems pass primal and dual proposals directly to one another. The algorithm is defined for multi-stage LPs or NLPs, in which the constraints link the current stage's variables to earlier stages' variables. This problem class is general enough to include many problem structures that do not immediately suggest stages, such as block diagonal problems. The basic algorithm is derived for two-stage problems and extended to more than two stages through nested decomposition. The main theoretical result assures convergence, to within any preset tolerance of the optimal value, in a finite number of iterations. This asymptotic convergence result contrasts with the results of limited tests on LPs, in which the optimal solution is apparently found exactly, i.e., to machine accuracy, in a small number of iterations. The tests further suggest that for LPs, the new algorithm is faster than the simplex method applied to the whole problem, as long as the stages are linked loosely; that the speedup over the simplex method improves as the number of stages increases; and that the algorithm is more reliable than nested Dantzig-Wolfe or Benders' methods in its improvement over the simplex method.
NASA Astrophysics Data System (ADS)
Gao, Yan; Marpu, Prashanth; Morales Manila, Luis M.
2014-11-01
This paper assesses the suitability of 8-band Worldview-2 (WV2) satellite data and an object-based random forest algorithm for the classification of avocado growth stages in Mexico. We tested both pixel-based classification with minimum distance (MD) and maximum likelihood (MLC) classifiers and object-based classification with the Random Forest (RF) algorithm for this task. Training samples and verification data were selected by visually interpreting the WV2 images for seven thematic classes: fully grown, middle stage, and early stage of avocado crops, bare land, two types of natural forests, and water body. To examine the contribution of the four new spectral bands of the WV2 sensor, all tested classifications were carried out with and without the four new spectral bands. Classification accuracy assessment results show that object-based classification with the RF algorithm obtained higher overall accuracy (93.06%) than the pixel-based MD (69.37%) and MLC (64.03%) methods. For both pixel-based and object-based methods, the classifications with the four new spectral bands obtained higher accuracy than those without (object-based RF: 93.06% vs 83.59%; pixel-based MD: 69.37% vs 67.2%; pixel-based MLC: 64.03% vs 36.05%), suggesting that the four new spectral bands of the WV2 sensor contributed to the increase in classification accuracy.
Kitagawa, Taiji; Kohara, Hiroshi; Sohmura, Taiji; Takahashi, Junzo; Tachimura, Takashi; Wada, Takeshi; Kogo, Mikihiko
2004-09-01
This study examined dentoalveolar growth changes from before palatoplasty up to 3 years of age under the early two-stage Furlow and push-back methods. Thirty-four Japanese patients with complete unilateral cleft lip and palate (UCLP) were treated with either a two-stage Furlow procedure (Furlow group: seven boys, eight girls) from 1998 to 2002 or a push-back procedure (push-back group: 12 boys, 7 girls) from 1993 to 1997. Consecutive plaster models were measured by three-dimensional laser scanner before primary palatoplasty, before hard palate closure (Furlow group only), and at 3 years of age. Bite measures were taken at 3 years of age. In the Furlow group, arch length, canine width, first and second deciduous molar width and cross-sectional area, and depth and volume at the midpoint showed greater growth than in the push-back group. The crossbite score in the Furlow group was also better than in the push-back group at 3 years of age. In comparison with the push-back group, reduced growth impediment was observed in the Furlow group: in the horizontal direction in the anterior region, in the horizontal and vertical directions in the midregion, and in the horizontal direction in the posterior region. The results demonstrate that the early two-stage Furlow method allowed progressive alveolar growth. Therefore, the early two-stage Furlow method is a more beneficial procedure than the push-back method.
[On the partition of acupuncture academic schools].
Yang, Pengyan; Luo, Xi; Xia, Youbing
2016-05-01
Nowadays extensive attention has been paid to the research of acupuncture academic schools; however, a widely accepted method of partition of acupuncture academic schools is still in need. In this paper, the methods of partition of acupuncture academic schools in history have been arranged, and three typical methods, "partition of five schools", "partition of eighteen schools" and "two-stage based partition", are summarized. After a deep analysis of the disadvantages and advantages of these three methods, a new method of partition of acupuncture academic schools, called "three-stage based partition", is proposed. In this method, after the overall acupuncture academic schools are divided into an ancient stage, a modern stage and a contemporary stage, each school is divided into its sub-school category. It is believed that this method of partition can not only remedy the weaknesses of current methods, but also explore a new model of inheritance and development under a different aspect through the differentiation and interaction of acupuncture academic schools at three stages.
Chao, Shiou-Huei; Huang, Hui-Yu; Chang, Chuan-Hsiung; Yang, Chih-Hsien; Cheng, Wei-Shen; Kang, Ya-Huei; Watanabe, Koichi; Tsai, Ying-Chieh
2013-01-01
In Taiwanese alternative medicine Lu-doh-huang (also called Pracparatum mungo), mung beans are mixed with various herbal medicines and undergo a 4-stage process of anaerobic fermentation. Here we used high-throughput sequencing of the 16S rRNA gene to profile the bacterial community structure of Lu-doh-huang samples. Pyrosequencing of samples obtained at 7 points during fermentation revealed 9 phyla, 264 genera, and 586 species of bacteria. While mung beans were inside bamboo sections (stages 1 and 2 of the fermentation process), family Lactobacillaceae and genus Lactobacillus emerged in highest abundance; Lactobacillus plantarum was broadly distributed among these samples. During stage 3, the bacterial distribution shifted to family Porphyromonadaceae, and Butyricimonas virosa became the predominant microbial component. Thereafter, bacterial counts decreased dramatically, and organisms were too few to be detected during stage 4. In addition, the microbial compositions of the liquids used for soaking bamboo sections were dramatically different: Exiguobacterium mexicanum predominated in the fermented soybean solution whereas B. virosa was predominant in running spring water. Furthermore, our results from pyrosequencing paralleled those we obtained by using the traditional culture method, which targets lactic acid bacteria. In conclusion, the microbial communities during Lu-doh-huang fermentation were markedly diverse, and pyrosequencing revealed a complete picture of the microbial consortium. PMID:23700436
A new local-global approach for classification.
Peres, R T; Pedreira, C E
2010-09-01
In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. We understand global methods as those concerned with constructing a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using a vector quantization unsupervised algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the generated assemblage of much easier problems is locally solved with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison purposes with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, Support Vector Machines (SVM), and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method has shown quite competitive performance when compared to these classical and widely used classifiers. Our method is simple in terms of understanding and implementation and is based on very intuitive concepts. Copyright 2010 Elsevier Ltd. All rights reserved.
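As a rough illustration of the local-global idea (a simplified sketch, not the authors' implementation), the code below partitions the sample into cells with a plain k-means stand-in for LBG vector quantization, then labels each cell with its locally most frequent class as a crude proxy for the Bayes-inspired local rule; the toy data and all names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=50):
    # LBG-style vector quantization approximated by plain k-means
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def fit_local_global(X, y, k=4):
    centers = kmeans(X, k)
    cells = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    # per-cell class frequencies stand in for the local Bayes rule
    post = np.vstack([np.bincount(y[cells == j], minlength=y.max() + 1)
                      for j in range(k)])
    return centers, post

def predict(X, centers, post):
    cells = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
    return post[cells].argmax(axis=1)

# two well-separated Gaussian blobs as a toy two-class problem
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
centers, post = fit_local_global(X, y, k=4)
acc = (predict(X, centers, post) == y).mean()
```

On well-separated blobs the local rule recovers the labels almost perfectly; the interesting cases, which motivate the paper's richer local scheme, arise when cells straddle class boundaries.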
John E. Bater
1991-01-01
Techniques are described for the sampling and extraction of microarthropods from soil and the potential of these methods to extract the larval stages of the pear thrips, Taeniothrips inconsequens (Uzel), from soil cores taken in sugar maple stands. Also described is a design for an emergence trap that could be used to estimate adult thrips...
Detection of genetically modified soybean in crude soybean oil.
Nikolić, Zorica; Vasiljević, Ivana; Zdjelar, Gordana; Ðorđević, Vuk; Ignjatov, Maja; Jovičić, Dušica; Milošević, Dragana
2014-02-15
In order to detect the presence and quantity of Roundup Ready (RR) soybean in crude oil extracted from soybean seed with differing percentages of GMO seed, two extraction methods were used: CTAB and the DNeasy Plant Mini Kit. Amplification of the lectin gene, used to check the presence of soybean DNA, was not achieved in all CTAB extracts of DNA, while the commercial kit gave satisfactory results. Comparing actual and estimated GMO content between the two extraction methods, the root mean square deviation was 0.208 for the kit and 2.127 for CTAB, clearly demonstrating the superiority of the kit over CTAB extraction. The results of quantification showed that if the oil samples originate from soybean seed with a varying percentage of RR, it is possible to monitor the GMO content at the first stage of processing crude oil. Copyright © 2013 Elsevier Ltd. All rights reserved.
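The root mean square deviation used above to compare actual and estimated GMO content is straightforward to compute; the sketch below uses hypothetical percentage values for illustration only:

```python
import numpy as np

def rmsd(actual, estimated):
    """Root mean square deviation between actual and estimated values."""
    a, e = np.asarray(actual, float), np.asarray(estimated, float)
    return np.sqrt(np.mean((a - e) ** 2))

# hypothetical % RR soybean levels, for illustration only
actual = [0.0, 0.1, 1.0, 5.0]
print(rmsd(actual, [0.0, 0.3, 1.2, 4.9]))  # deviation in percentage points
```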
Effect of wall-mediated hydrodynamic fluctuations on the kinetics of a Brownian nanoparticle
NASA Astrophysics Data System (ADS)
Yu, Hsiu-Yu; Eckmann, David M.; Ayyaswamy, Portonovo S.; Radhakrishnan, Ravi
2016-12-01
The reactive flux formalism (Chandler 1978 J. Chem. Phys. 68, 2959-2970. (doi:10.1063/1.436049)) and the subsequent development of methods such as transition path sampling have laid the foundation for explicitly quantifying the rate process in terms of microscopic simulations. However, explicit methods to account for how the hydrodynamic correlations impact the transient reaction rate are missing in the colloidal literature. We show that the composite generalized Langevin equation (Yu et al. 2015 Phys. Rev. E 91, 052303. (doi:10.1103/PhysRevE.91.052303)) makes a significant step towards solving the coupled processes of molecular reactions and hydrodynamic relaxation by examining how the wall-mediated hydrodynamic memory impacts the two-stage temporal relaxation of the reaction rate for a nanoparticle transition between two bound states in the bulk, near-wall and lubrication regimes.
Wildhaber, M.L.; Papoulias, D.M.; DeLonay, A.J.; Tillitt, D.E.; Bryan, J.L.; Annis, M.L.
2007-01-01
From May 2001 to June 2002 Wildhaber et al. (2005) conducted monthly sampling of Lower Missouri River shovelnose sturgeon (Scaphirhynchus platorynchus) to develop methods for determination of sex and the reproductive stage of sturgeons in the field. Shovelnose sturgeon were collected from the Missouri River and ultrasonic and endoscopic imagery and blood and gonadal tissue samples were taken. The full set of data was used to develop monthly reproductive stage profiles for S. platorynchus that could be compared to data collected on pallid sturgeon (Scaphirhynchus albus). This paper presents a comprehensive reference set of images, sex steroids, and vitellogenin (VTG, an egg protein precursor) data for assessing shovelnose sturgeon sex and reproductive stage. This reference set includes ultrasonic, endoscopic, histologic, and internal images of male and female gonads of shovelnose sturgeon at each reproductive stage along with complementary data on average 17-β estradiol, 11-ketotestosterone, VTG, gonadosomatic index, and polarization index.
Saed, Mohand O; Torbati, Amir H; Nair, Devatha P; Yakacki, Christopher M
2016-01-19
This study presents a novel two-stage thiol-acrylate Michael addition-photopolymerization (TAMAP) reaction to prepare main-chain liquid-crystalline elastomers (LCEs) with facile control over network structure and programming of an aligned monodomain. Tailored LCE networks were synthesized using routine mixing of commercially available starting materials and pouring monomer solutions into molds to cure. An initial polydomain LCE network is formed via a self-limiting thiol-acrylate Michael-addition reaction. Strain-to-failure and glass transition behavior were investigated as a function of crosslinking monomer, pentaerythritol tetrakis(3-mercaptopropionate) (PETMP). An example non-stoichiometric system of 15 mol% PETMP thiol groups and an excess of 15 mol% acrylate groups was used to demonstrate the robust nature of the material. The LCE formed an aligned and transparent monodomain when stretched, with a maximum failure strain over 600%. Stretched LCE samples were able to demonstrate both stress-driven thermal actuation when held under a constant bias stress or the shape-memory effect when stretched and unloaded. A permanently programmed monodomain was achieved via a second-stage photopolymerization reaction of the excess acrylate groups when the sample was in the stretched state. LCE samples were photo-cured and programmed at 100%, 200%, 300%, and 400% strain, with all samples demonstrating over 90% shape fixity when unloaded. The magnitude of total stress-free actuation increased from 35% to 115% with increased programming strain. Overall, the two-stage TAMAP methodology is presented as a powerful tool to prepare main-chain LCE systems and explore structure-property-performance relationships in these fascinating stimuli-sensitive materials.
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some, but inconclusive, evidence of effect modification by a genomic marker. Two-stage designs allow stopping early for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design. Copyright © 2013 John Wiley & Sons, Ltd.
Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff
2016-01-01
We consider numerical methods for initial value problems that employ a two-stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two-stage computations and then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two-stage computation and in the formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
NASA Astrophysics Data System (ADS)
Raghavan, V.; Whitney, Scott E.; Ebmeier, Ryan J.; Padhye, Nisha V.; Nelson, Michael; Viljoen, Hendrik J.; Gogos, George
2006-09-01
In this article, experimental and numerical analyses to investigate the thermal control of an innovative vortex tube based polymerase chain reaction (VT-PCR) thermocycler are described. VT-PCR is capable of rapid DNA amplification and real-time optical detection. The device rapidly cycles six 20 μl, 96 bp λ-DNA samples between the PCR stages (denaturation, annealing, and elongation) for 30 cycles in approximately 6 min. Two-dimensional numerical simulations have been carried out using the computational fluid dynamics (CFD) software FLUENT v.6.2.16. Experiments and CFD simulations have been carried out to measure/predict the temperature variation between the samples and within each sample. Heat transfer rate (primarily dictated by the temperature differences between the samples and the external air heating or cooling them) governs the temperature distribution between and within the samples. Temperature variation between and within the samples during the denaturation stage has been quite uniform (maximum variation around ±0.5 and 1.6°C, respectively). During cooling, by adjusting the cold release valves in the VT-PCR during some stage of cooling, the heat transfer rate has been controlled. Improved thermal control, which increases the efficiency of the PCR process, has been obtained both experimentally and numerically by slightly decreasing the rate of cooling. Thus, an almost uniform temperature distribution between and within the samples (within 1°C) has been attained for the annealing stage as well. It is shown that the VT-PCR is a fully functional PCR machine capable of amplifying specific DNA target sequences in less time than conventional PCR devices.
Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?
Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R
2018-04-30
Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
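A minimal sketch of the "two-stage" procedure referred to in this abstract, with hypothetical study values: stage 1 supplies per-study effect estimates and standard errors, and stage 2 pools them by inverse-variance (fixed-effect) weighting:

```python
# Stage 1 output: per-study effect estimates and standard errors
# (hypothetical values for illustration).
estimates = [0.30, 0.10, 0.25, 0.18]
ses       = [0.10, 0.15, 0.08, 0.12]

# Stage 2: inverse-variance weighted pooling (fixed-effect model).
weights   = [1.0 / se**2 for se in ses]
pooled    = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5
```

Under the fixed-effect assumptions discussed above, a one-stage model fitted to the individual patient data yields approximately the same pooled estimate and standard error as this two-stage computation.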
Prognostic markers for colorectal cancer: estimating ploidy and stroma
Danielsen, H E; Hveem, T S; Domingo, E; Pradhan, M; Kleppe, A; Syvertsen, R A; Kostolomov, I; Nesheim, J A; Askautrud, H A; Nesbakken, A; Lothe, R A; Svindland, A; Shepherd, N; Novelli, M; Johnstone, E; Tomlinson, I; Kerr, R; Kerr, D J
2018-01-01
Background: We report here the prognostic value of ploidy and digital tumour-stromal morphometric analyses using material from 2624 patients with early stage colorectal cancer (CRC). Patients and methods: DNA content (ploidy) and stroma-tumour fraction were estimated using automated digital imaging systems, and DNA was extracted from sections of formalin-fixed paraffin-embedded (FFPE) tissue for analysis of microsatellite instability. Samples were available from 1092 patients recruited to the QUASAR 2 trial and two large observational series (Gloucester, n = 954; Oslo University Hospital, n = 578). The resultant biomarkers were analysed for prognostic impact using 5-year cancer-specific survival (CSS) as the clinical end point. Results: Ploidy and stroma-tumour fraction were significantly prognostic in a multivariate model adjusted for age, adjuvant treatment, and pathological T-stage in stage II patients, and the combination of ploidy and stroma-tumour fraction stratified these patients into three clinically useful groups: 5-year CSS 90% versus 83% versus 73% [hazard ratio (HR) = 1.77 (95% confidence interval (CI): 1.13–2.77) and HR = 2.95 (95% CI: 1.73–5.03), P < 0.001]. Conclusion: A novel biomarker combining estimates of ploidy and stroma-tumour fraction, sampled from FFPE tissue, identifies stage II CRC patients with low, intermediate, or high risk of CRC-specific death. It reliably stratifies clinically relevant patient subpopulations with differential risks of tumour recurrence and may support the choice of adjuvant therapy for these individuals. PMID:29293881
Iachan, Ronaldo; H. Johnson, Christopher; L. Harding, Richard; Kyle, Tonja; Saavedra, Pedro; L. Frazier, Emma; Beer, Linda; L. Mattson, Christine; Skarbinski, Jacek
2016-01-01
Background: Health surveys of the general US population are inadequate for monitoring human immunodeficiency virus (HIV) infection because the relatively low prevalence of the disease (<0.5%) leads to small subpopulation sample sizes. Objective: To collect a nationally and locally representative probability sample of HIV-infected adults receiving medical care to monitor clinical and behavioral outcomes, supplementing the data in the National HIV Surveillance System. This paper describes the sample design and weighting methods for the Medical Monitoring Project (MMP) and provides estimates of the size and characteristics of this population. Methods: To develop a method for obtaining valid, representative estimates of the in-care population, we implemented a cross-sectional, three-stage design that sampled 23 jurisdictions, then 691 facilities, then 9,344 HIV patients receiving medical care, using probability-proportional-to-size methods. The data weighting process followed standard methods, accounting for the probabilities of selection at each stage and adjusting for nonresponse and multiplicity. Nonresponse adjustments accounted for differing response at both facility and patient levels. Multiplicity adjustments accounted for visits to more than one HIV care facility. Results: MMP used a multistage stratified probability sampling design that was approximately self-weighting in each of the 23 project areas and nationally. The probability sample represents the estimated 421,186 HIV-infected adults receiving medical care during January through April 2009. Methods were efficient (i.e., induced small, unequal weighting effects and small standard errors for a range of weighted estimates). Conclusion: The information collected through MMP allows monitoring trends in clinical and behavioral outcomes and informs resource allocation for treatment and prevention activities. PMID:27651851
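The base-weight logic described in this abstract (inverse selection probabilities under probability-proportional-to-size selection) can be sketched as follows. The facility sizes are hypothetical, and the with-replacement draw is a simplification of the actual multistage without-replacement design:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical facility sizes (patient counts) for one project area.
sizes = np.array([120, 340, 80, 560, 200, 700])
n = 2                                    # number of facilities to sample

# Selection probabilities proportional to size. A real multistage design
# samples without replacement; the with-replacement draw here is a
# simplification for illustration.
p = sizes / sizes.sum()
chosen = rng.choice(len(sizes), size=n, replace=True, p=p)

# Base weight = inverse of the expected number of selections; nonresponse
# and multiplicity adjustments would multiply this base weight further.
weights = 1.0 / (n * p[chosen])
```

Large facilities are selected with high probability but receive small weights, which is what makes the overall design approximately self-weighting.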
Thermal behaviour and kinetics of coal/biomass blends during co-combustion.
Gil, M V; Casal, D; Pevida, C; Pis, J J; Rubiera, F
2010-07-01
The thermal characteristics and kinetics of coal, biomass (pine sawdust) and their blends were evaluated under combustion conditions by non-isothermal thermogravimetric analysis (TGA). Biomass was blended with coal in the range of 5-80 wt.% to evaluate their co-combustion behaviour. No significant interactions were detected between the coal and biomass, since no deviations from their expected behaviour were observed in these experiments. Biomass combustion takes place in two steps: between 200 and 360°C the volatiles are released and burned, and at 360-490°C char combustion takes place. In contrast, coal is characterized by a single combustion stage at 315-615°C. The coal/biomass blends presented three combustion steps, corresponding to the sum of the individual biomass and coal stages. Several solid-state mechanisms were tested by the Coats-Redfern method to identify the mechanisms responsible for the oxidation of the samples. The kinetic parameters were determined assuming single separate reactions for each stage of thermal conversion. The combustion of coal consists of one reaction, whereas the combustion of biomass and of the coal/biomass blends consists of two and three independent reactions, respectively. The results showed that a first-order chemical reaction is the most effective mechanism for the first step of biomass oxidation and for coal combustion, whereas diffusion mechanisms were found to be responsible for the second step of biomass combustion. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
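The Coats-Redfern fit mentioned above can be sketched for the first-order (F1) mechanism. This uses the simplified linear form that neglects the 2RT/E term, and the conversion data below are synthetic, generated for a hypothetical activation energy:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def coats_redfern_first_order(T, alpha):
    """Fit ln[-ln(1 - alpha) / T^2] against 1/T; for a first-order (F1)
    mechanism the slope of this line is -E/R (2RT/E term neglected)."""
    y = np.log(-np.log(1.0 - alpha) / T**2)
    slope, intercept = np.polyfit(1.0 / T, y, 1)
    return -slope * R, intercept          # activation energy E in J/mol

# Synthetic conversion curve consistent with E = 120 kJ/mol (illustrative).
T = np.linspace(500.0, 650.0, 50)                 # temperature, K
E_true = 120e3
y_line = 11.0 - E_true / (R * T)                  # ideal Coats-Redfern line
alpha = 1.0 - np.exp(-T**2 * np.exp(y_line))      # degree of conversion

E_fit, _ = coats_redfern_first_order(T, alpha)
```

Because the synthetic data lie exactly on the linearized Coats-Redfern line, the fit recovers the assumed activation energy; with real TGA data, the quality of this linear fit is what discriminates between candidate solid-state mechanisms.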
Jiang, Wei; Yu, Weichuan
2017-01-01
In genome-wide association studies, we normally discover associations between genetic variants and diseases/traits in primary studies and validate the findings in replication studies. We consider the associations identified in both primary and replication studies as true findings. An important question under this two-stage setting is how to determine the significance levels of both studies. In traditional methods, the significance levels of the primary and replication studies are determined separately. We argue that this separate determination strategy reduces the power of the overall two-stage study. Therefore, we propose a novel method to determine the significance levels jointly. Our method is a reanalysis method that needs summary statistics from both studies. We find the most powerful significance levels while controlling the false discovery rate in the two-stage study. To enjoy the power improvement from the joint determination method, we need to select single nucleotide polymorphisms for replication at a less stringent significance level. This is common practice in studies designed for discovery purposes. We suggest that this practice is also suitable in studies with a validation purpose, in order to identify more true findings. Simulation experiments show that our method can provide more power than traditional methods and that the false discovery rate is well controlled. Empirical experiments on datasets of five diseases/traits demonstrate that our method can help identify more associations. The R package is available at http://bioinformatics.ust.hk/RFdr.html
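For context, the baseline FDR control that separate-determination strategies build on is the Benjamini-Hochberg step-up procedure. The sketch below is that standard procedure with hypothetical p-values, not the authors' joint two-stage method:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure: return the indices of the
    hypotheses declared significant at FDR level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):   # ranks 1..m in p-value order
        if pvals[i] <= q * rank / m:
            k = rank                            # largest rank passing the bound
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205, 0.212, 0.216]
hits = benjamini_hochberg(pvals, q=0.05)
```

In a two-stage GWAS setting, the joint-determination idea is to choose the stage-1 and stage-2 thresholds together so that the combined procedure still controls FDR while declaring more true findings than applying a rule like this separately to each stage.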
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problem; that is, it fails when all scatter matrices are singular. Many LDA extensions have been proposed to overcome the singularity problem. Among these extensions, PCA+LDA, a two-stage method, has received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problem of classical LDA while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship between LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
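A minimal sketch of the two-stage idea described above, on a small synthetic dataset. It follows the spirit of LDA/QR (QR decomposition of the class-centroid matrix in stage 1, classical LDA in the reduced space in stage 2) rather than reproducing the authors' exact algorithm:

```python
import numpy as np

def lda_qr(X, y, n_components=1):
    """Stage 1: QR-decompose the (d x k) class-centroid matrix, which is
    cheap because k (number of classes) is small. Stage 2: classical LDA
    on the data projected into the k-dimensional centroid subspace."""
    classes = np.unique(y)
    C = np.stack([X[y == c].mean(axis=0) for c in classes])    # k x d centroids
    Q, _ = np.linalg.qr(C.T)                                   # d x k basis
    Z = X @ Q                                                  # reduced data
    k = Q.shape[1]
    mean = Z.mean(axis=0)
    Sw, Sb = np.zeros((k, k)), np.zeros((k, k))
    for c in classes:
        Zc = Z[y == c]
        mc = Zc.mean(axis=0)
        Sw += (Zc - mc).T @ (Zc - mc)                          # within-class
        Sb += len(Zc) * np.outer(mc - mean, mc - mean)         # between-class
    # The small k x k eigenproblem replaces the d x d one of classical LDA.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-8 * np.eye(k), Sb))
    top = np.argsort(evals.real)[::-1][:n_components]
    return Q @ evecs[:, top].real              # directions in the original space

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 10)), rng.normal(3, 1, (50, 10))])
y = np.array([0] * 50 + [1] * 50)
G = lda_qr(X, y)
proj = X @ G          # 1-D projection separating the two classes
```

Because the scatter matrices are formed only in the k-dimensional centroid subspace, the singularity problem of the d x d within-class scatter never arises, which is the motivation the abstract describes.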
Romanò, C L; Gala, L; Logoluso, N; Romanò, D; Drago, L
2012-12-01
The best method for treating chronic periprosthetic knee infection remains controversial, and randomized comparative studies on treatment modalities are lacking. This systematic review of the literature compares the infection eradication rate after two-stage versus one-stage revision, and after static versus articulating spacers in two-stage procedures. We reviewed full-text papers, and those with an abstract in English, published from 1966 through 2011 that reported the success rate of infection eradication after one-stage or two-stage revision with the two different types of spacers. In all, 6 original articles reporting results after one-stage knee exchange arthroplasty (n = 204) and 38 papers reporting on two-stage revision (n = 1,421) were reviewed. The average success rate in the eradication of infection was 89.8% after two-stage revision and 81.9% after a one-stage procedure, at a mean follow-up of 44.7 and 40.7 months, respectively. The average infection eradication rate after a two-stage procedure was slightly, although significantly, higher when an articulating spacer rather than a static spacer was used (91.2% versus 87.0%). The methodological limitations of this study and the heterogeneous material in the studies reviewed notwithstanding, this systematic review shows that, on average, a two-stage procedure is associated with a higher rate of eradication of infection than one-stage revision for a septic knee prosthesis, and that articulating spacers are associated with a lower recurrence of infection than static spacers at a comparable mean duration of follow-up. Level of evidence: IV.
Testing Standard Reliability Criteria
ERIC Educational Resources Information Center
Sherry, David
2017-01-01
Maul's paper, "Rethinking Traditional Methods of Survey Validation" (Andrew Maul), proceeds in two stages. First, he presents empirical results that cast doubt on traditional methods for validating psychological measurement instruments. These results motivate the second stage, a critique of current conceptions of psychological measurement…
Reza, Syed Azer; Qasim, Muhammad
2016-01-10
This paper presents a novel approach to simultaneously measuring the thickness and refractive index of a sample. The design uses an electronically controlled tunable lens (ECTL) and a microelectromechanical-system-based digital micromirror device (DMD). The method achieves the desired results by using the DMD to characterize the spatial profile of a Gaussian laser beam at different focal length settings of the ECTL. The ECTL achieves tunable lensing through minimal motion of liquid inside a transparent casing, whereas the DMD contains an array of movable micromirrors, which makes it a reflective spatial light modulator. As the proposed system uses an ECTL, a DMD, and other fixed optical components, it measures the thickness and refractive index without requiring any motion of bulk components such as translational and rotational stages. A motion-free system improves measurement repeatability and reliability. Moreover, the measurement of sample thickness and refractive index can be completely automated because the ECTL and DMD are controlled through digital signals. We develop and discuss the theory in detail to explain the measurement methodology of the proposed system and present results from experiments performed to verify its working principle. Refractive index measurement accuracies of 0.22% and 0.2% were achieved for the two BK-7 glass samples used, and the thicknesses of the two samples were measured with 0.1 mm accuracy, corresponding to measurement errors of 0.39% and 0.78%, respectively.
A modified varying-stage adaptive phase II/III clinical trial design.
Dong, Gaohong; Vandemeulebroecke, Marc
2016-07-01
Conventionally, adaptive phase II/III clinical trials are carried out with a strict two-stage design. Recently, a varying-stage adaptive phase II/III clinical trial design has been developed. In this design, following the first stage, an intermediate stage can be adaptively added to obtain more data, so that a more informative decision can be made. Therefore, the number of further investigational stages is determined based upon the data accumulated to the interim analysis. This design considers two plausible study endpoints, with one of them initially designated as the primary endpoint. Based on interim results, the other endpoint can be switched to be the primary endpoint. However, in many therapeutic areas, the primary study endpoint is well established. Therefore, we modify this design to consider one study endpoint only, so that it may be more readily applicable in real clinical trial designs. Our simulations show that, like the original design, this modified design controls the Type I error rate, and that design parameters such as the threshold probability for the two-stage setting and the alpha allocation ratio between the two-stage and three-stage settings have a great impact on the design characteristics. However, this modified design requires a larger sample size for the initial stage, and the probability of futility becomes much higher as the threshold probability for the two-stage setting gets smaller. Copyright © 2016 John Wiley & Sons, Ltd.
Shang, Songhao
2012-01-01
Crop water requirement data are essential for agricultural water management but are usually available only at the growing-stage scale. However, crop water requirement values at monthly or weekly scales are more useful for water management. A method is proposed to downscale crop coefficient and water requirement from growing-stage to substage scales, based on interpolation of accumulated crop and reference evapotranspiration calculated from their values over the growing stages. The proposed method was compared with two straightforward methods, namely direct interpolation of crop evapotranspiration and of the crop coefficient, assuming that stage-average values occur at the middle of the stage. These methods were tested with a simulated daily crop evapotranspiration series. Results indicate that the proposed method is more reliable: the downscaled crop evapotranspiration series is very close to the simulated one. PMID:22619572
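The core of the proposed downscaling (interpolating the accumulated curve, then differencing) can be sketched with hypothetical stage values. The paper interpolates accumulated crop and reference evapotranspiration; this simplified sketch interpolates accumulated crop evapotranspiration linearly:

```python
import numpy as np

# Hypothetical stage boundaries (day of season) and per-stage crop ET (mm).
stage_end = np.array([0, 25, 60, 100, 130])       # day 0 plus four stage ends
stage_et  = np.array([30.0, 120.0, 200.0, 80.0])  # ET totals per growing stage

# Accumulated ET is known exactly at the stage boundaries.
cum_et = np.concatenate(([0.0], np.cumsum(stage_et)))

# Interpolate the accumulated curve at substage (monthly) boundaries,
# then difference to obtain monthly totals that conserve the stage sums.
month_end  = np.array([0, 30, 60, 90, 120, 130])
cum_month  = np.interp(month_end, stage_end, cum_et)
monthly_et = np.diff(cum_month)
```

Working on the accumulated curve guarantees that the downscaled substage values sum back to the original seasonal total, which direct interpolation of stage-average values does not.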
Nagra, Navraj S; Hamilton, Thomas W; Ganatra, Sameer; Murray, David W; Pandit, Hemant
2016-10-01
Infection complicating total knee arthroplasty (TKA) has serious implications. Traditionally, the debate on whether one- or two-stage exchange arthroplasty is the optimum management of infected TKA has favoured two-stage procedures; however, a paradigm shift in opinion is emerging. This study aimed to establish whether current evidence supports one-stage revision for managing infected TKA, based on reinfection rates and functional outcomes post-surgery. The MEDLINE/PubMed and CENTRAL databases were reviewed for studies that compared one- and two-stage exchange arthroplasty of the TKA in more than ten patients with a minimum 2-year follow-up. From an initial sample of 796, five cohort studies with a total of 231 patients (46 single-stage/185 two-stage; median patient age 66 years, range 61-71 years) met the inclusion criteria. Overall, there was no significant difference in the risk of reinfection following one- or two-stage exchange arthroplasty (OR -0.06, 95% confidence interval -0.13 to 0.01). Subgroup analysis revealed that in studies published since 2000, one-stage procedures have a significantly lower reinfection rate. One study investigated functional outcomes and reported that one-stage surgery was associated with superior functional outcomes. Scarcity of data, inconsistent study designs, and disparities in surgical technique and antibiotic regimes limit the recommendations that can be made. Recent studies suggest that one-stage exchange arthroplasty may provide superior outcomes, including lower reinfection rates and superior function, in select patients. Clinically, for some patients, one-stage exchange arthroplasty may represent the optimum treatment; however, patient selection criteria and key components of surgical and post-operative antimicrobial management remain to be defined. Level of evidence: III.
Kohlmann, Alexander; Kipps, Thomas J; Rassenti, Laura Z; Downing, James R; Shurtleff, Sheila A; Mills, Ken I; Gilkes, Amanda F; Hofmann, Wolf-Karsten; Basso, Giuseppe; Dell’Orto, Marta Campo; Foà, Robin; Chiaretti, Sabina; De Vos, John; Rauhut, Sonja; Papenhausen, Peter R; Hernández, Jesus M; Lumbreras, Eva; Yeoh, Allen E; Koay, Evelyn S; Li, Rachel; Liu, Wei-min; Williams, Paul M; Wieczorek, Lothar; Haferlach, Torsten
2008-01-01
Gene expression profiling has the potential to enhance current methods for the diagnosis of haematological malignancies. Here, we present data on 204 analyses from an international standardization programme that was conducted in 11 laboratories as a prephase to the Microarray Innovations in LEukemia (MILE) study. Each laboratory prepared two cell line samples, together with three replicate leukaemia patient lysates in two distinct stages: (i) a 5-d course of protocol training, and (ii) independent proficiency testing. Unsupervised, supervised, and r² correlation analyses demonstrated that microarray analysis can be performed with remarkably high intra-laboratory reproducibility and with comparable quality and reliability. PMID:18573112
ERIC Educational Resources Information Center
Levesque, Luc
2012-01-01
A method is proposed to simplify analytical computation of the transfer function of electrical circuit filters that are made from repetitive identical stages. The method is based on the construction of Pascal's triangle, and a general solution from two initial conditions is then provided for the repetitive identical stage. The present…
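As an illustration of where Pascal's triangle enters (a sketch, not the paper's derivation): for n buffered identical first-order stages, the cascade transfer function is 1/(1 + sτ)^n, so the denominator coefficients are exactly a row of Pascal's triangle:

```python
def pascal_row(n):
    """Row n of Pascal's triangle, built by the standard additive rule."""
    row = [1]
    for _ in range(n):
        row = [a + b for a, b in zip([0] + row, row + [0])]
    return row

# For n buffered identical stages H(s) = 1/(1 + s*tau), the cascade is
# 1/(1 + s*tau)^n; its denominator coefficients, in ascending powers of
# s*tau, are the binomial coefficients of row n.
n = 4
denom = pascal_row(n)
```

Unbuffered repetitive ladder stages load one another, which is why the paper develops a recurrence with two initial conditions rather than using the binomial expansion directly; the buffered case above shows only the simplest appearance of the triangle.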
Microfluidic, marker-free isolation of circulating tumor cells from blood samples
Karabacak, Nezihi Murat; Spuhler, Philipp S; Fachin, Fabio; Lim, Eugene J; Pai, Vincent; Ozkumur, Emre; Martel, Joseph M; Kojic, Nikola; Smith, Kyle; Chen, Pin-i; Yang, Jennifer; Hwang, Henry; Morgan, Bailey; Trautwein, Julie; Barber, Thomas A; Stott, Shannon L; Maheswaran, Shyamala; Kapur, Ravi; Haber, Daniel A; Toner, Mehmet
2014-01-01
The ability to isolate and analyze rare circulating tumor cells (CTCs) has the potential to further our understanding of cancer metastasis and enhance the care of cancer patients. In this protocol, we describe the procedure for isolating rare CTCs from blood samples by using tumor antigen–independent microfluidic CTC-iChip technology. The CTC-iChip uses deterministic lateral displacement, inertial focusing and magnetophoresis to sort up to 10^7 cells/s. By using two-stage magnetophoresis and depletion antibodies against leukocytes, we achieve 3.8-log depletion of white blood cells and a 97% yield of rare cells with a sample processing rate of 8 ml of whole blood/h. The CTC-iChip is compatible with standard cytopathological and RNA-based characterization methods. This protocol describes device production, assembly, blood sample preparation, system setup and the CTC isolation process. Sorting 8 ml of blood sample requires 2 h including setup time, and chip production requires 2–5 d. PMID:24577360
NASA Astrophysics Data System (ADS)
Baldi, Alfonso; Jacquot, Pierre
2003-05-01
Graphite-epoxy laminates are subjected to the "incremental hole-drilling" technique in order to investigate the residual stresses acting within each layer of the composite samples. In-plane speckle interferometry is used to measure the displacement field created by each drilling increment around the hole. Our approach features two particularities: (1) we rely on the precise repositioning of the samples in the optical set-up after each new boring step, performed by means of a high-precision, numerically controlled milling machine in the workshop; (2) for each increment, we acquire three displacement fields, along the length and the width of the samples and at 45°, using a single symmetrical double-beam illumination and a rotary stage holding the specimens. The experimental protocol is described in detail and the experimental results are presented, including a comparison with strain gages. Speckle interferometry appears as a suitable method to respond to the increasing demand for residual stress determination in composite samples.
Jeannot, Emmanuelle; Becette, Véronique; Campitelli, Maura; Calméjane, Marie-Ange; Lappartient, Emmanuelle; Ruff, Evelyne; Saada, Stéphanie; Holmes, Allyson; Bellet, Dominique; Sastre-Garau, Xavier
2016-10-01
Specific human papillomavirus genotypes are associated with most ano-genital carcinomas and a large subset of oro-pharyngeal carcinomas. Human papillomavirus DNA is thus a tumour marker that can be detected in the blood of patients for clinical monitoring. However, data concerning circulating human papillomavirus DNA in cervical cancer patients has provided little clinical value, due to insufficient sensitivity of the assays used for the detection of small sized tumours. Here we took advantage of the sensitive droplet digital PCR method to identify circulating human papillomavirus DNA in patients with human papillomavirus-associated carcinomas. A series of 70 serum specimens, taken at the time of diagnosis, between 2002 and 2013, were retrospectively analyzed in patients with human papillomavirus-16 or human papillomavirus-18-associated carcinomas, composed of 47 cases from the uterine cervix, 15 from the anal canal and 8 from the oro-pharynx. As negative controls, 18 serum samples from women with human papillomavirus-16-associated high-grade cervical intraepithelial neoplasia were also analyzed. Serum samples were stored at -80°C (27 cases) or at -20°C (43 cases). DNA was isolated from 200 µl of serum or plasma and droplet digital PCR was performed using human papillomavirus-16 E7 and human papillomavirus-18 E7 specific primers. Circulating human papillomavirus DNA was detected in 61/70 (87%) serum samples from patients with carcinoma and in no serum from patients with cervical intraepithelial neoplasia. The positivity rate increased to 93% when using only serum stored at -80°C. Importantly, the two patients with microinvasive carcinomas in this series were positive. Quantitative evaluation showed that circulating viral DNA levels in cervical cancer patients were related to the clinical stage and tumour size, ranging from 55 ± 85 copies/ml (stage I) to 1774 ± 3676 copies/ml (stage IV). 
Circulating human papillomavirus DNA is present in patients with human papillomavirus-associated invasive cancers even at sub-clinical stages and its level is related to tumour dynamics. Droplet digital PCR is a promising method for circulating human papillomavirus DNA detection and quantification. No positivity was found in patients with human papillomavirus-associated high grade cervical intraepithelial neoplasia.
NASA Astrophysics Data System (ADS)
Wang, Tong-Hong; Chen, Tse-Ching; Teng, Xiao; Liang, Kung-Hao; Yeh, Chau-Ting
2015-08-01
Liver fibrosis assessment by biopsy and conventional staining scores is based on histopathological criteria. Variations in sample preparation and the use of semi-quantitative histopathological methods commonly result in discrepancies between medical centers. Thus, minor changes in liver fibrosis might be overlooked in multi-center clinical trials, leading to statistically non-significant data. Here, we developed a computer-assisted, fully automated, staining-free method for hepatitis B-related liver fibrosis assessment. In total, 175 liver biopsies were divided into training (n = 105) and verification (n = 70) cohorts. Collagen was observed using second harmonic generation (SHG) microscopy without prior staining, and hepatocyte morphology was recorded using two-photon excitation fluorescence (TPEF) microscopy. The training cohort was utilized to establish a quantification algorithm. Eleven of 19 computer-recognizable SHG/TPEF microscopic morphological features were significantly correlated with the ISHAK fibrosis stages (P < 0.001). A biphasic scoring method was applied, combining support vector machine and multivariate generalized linear models to assess the early and late stages of fibrosis, respectively, based on these parameters. The verification cohort was used to verify the scoring method, and the area under the receiver operating characteristic curve was >0.82 for liver cirrhosis detection. Since no subjective gradings are needed, interobserver discrepancies could be avoided using this fully automated method.
Panahbehagh, B.; Smith, D.R.; Salehi, M.M.; Hornbach, D.J.; Brown, D.J.; Chan, F.; Marinova, D.; Anderssen, R.S.
2011-01-01
Assessing populations of rare species is challenging because of the large effort required to locate patches of occupied habitat and achieve precise estimates of density and abundance. The presence of a rare species has been shown to be correlated with the presence or abundance of more common species. Thus, ecological community richness or abundance can be used to inform sampling of rare species. Adaptive sampling designs have been developed specifically for rare and clustered populations and have been applied to a wide range of rare species. However, adaptive sampling can be logistically challenging, in part because variation in final sample size introduces uncertainty into survey planning. Two-stage sequential sampling (TSS), a recently developed design, allows for adaptive sampling but avoids edge units and has an upper bound on final sample size. In this paper we present an extension of two-stage sequential sampling that incorporates an auxiliary variable (TSSAV), such as community attributes, as the condition for adaptive sampling. We develop a set of simulations approximating sampling of endangered freshwater mussels to evaluate the performance of the TSSAV design. The performance measures of interest are efficiency and the probability of sampling a unit occupied by the rare species. Efficiency measures the precision of the population estimate from the TSSAV design relative to a standard design, such as simple random sampling (SRS). The simulations indicate that the density and distribution of the auxiliary population are the most important determinants of the performance of the TSSAV design. Of the design factors, such as sample size, the fraction of the primary units sampled was most important. For the best scenarios, the odds of sampling the rare species were approximately 1.5 times higher for TSSAV than for SRS, and efficiency was as high as 2 (i.e., the variance from TSSAV was half that of SRS).
We have found that design performance, especially for adaptive designs, is often case-specific, and that the efficiency of adaptive designs is especially sensitive to spatial distribution. We therefore recommend simulations tailored to the application of interest as a highly useful tool for evaluating designs in preparation for sampling rare and clustered populations.
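The "odds of sampling an occupied unit" performance measure described above can be illustrated with a minimal Monte Carlo sketch. This is not the authors' TSSAV estimator: the clustered population, the auxiliary counts, and the "screen, then keep the highest-auxiliary units" rule below are all hypothetical stand-ins for the adaptive second-stage allocation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical clustered population: 2000 units, rare species confined
# to a few patches, with an auxiliary (community) count correlated with
# rare-species presence, as in the study design.
N = 2000
patch = rng.random(N) < 0.05                      # ~5% of units lie in patches
rare = np.where(patch, rng.poisson(4.0, N), 0)    # clustered rare counts
aux = rng.poisson(np.where(patch, 6.0, 1.0))      # correlated auxiliary counts

n, reps = 100, 2000
occ_srs = occ_aux = 0.0
for _ in range(reps):
    srs = rng.choice(N, n, replace=False)         # simple random sample
    occ_srs += (rare[srs] > 0).mean()
    # Auxiliary-informed sample: keep the n highest-auxiliary units out
    # of a 4n random screening set (a crude stand-in for adaptive effort).
    screen = rng.choice(N, 4 * n, replace=False)
    keep = screen[np.argsort(aux[screen])[-n:]]
    occ_aux += (rare[keep] > 0).mean()
occ_srs /= reps
occ_aux /= reps

odds = lambda p: p / (1.0 - p)
ratio = odds(occ_aux) / odds(occ_srs)             # odds ratio vs. SRS
print(round(ratio, 2))
```

With a strongly informative auxiliary variable the odds ratio comfortably exceeds 1; how far depends entirely on the assumed correlation and patch structure.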
Ji, Dong Xu; Foong, Kelvin Weng Chiong; Ong, Sim Heng
2013-09-01
Extraction of the mandible from 3D volumetric images is frequently required for surgical planning and evaluation. Segmentation from MRI is more complex than from CT because of the lower signal-to-noise ratio of bone. An automated method to extract the shape of the human mandible body from magnetic resonance (MR) images of the head was developed and tested. Anonymized MR image data sets of the head from 12 subjects were subjected to a two-stage rule-constrained region growing approach to derive the shape of the body of the human mandible. An initial thresholding technique was applied, followed by a 3D seedless region growing algorithm, to detect a large portion of the trabecular bone (TB) regions of the mandible. This stage was followed by a rule-constrained 2D segmentation of each MR axial slice to merge the remaining portions of the TB regions with lower intensity levels. The two-stage approach was replicated to detect the cortical bone (CB) regions of the mandibular body. The TB and CB regions detected in the preceding steps were merged and subjected to a series of morphological processes to complete the definition of the mandibular body region. The accuracy of segmentation by the two-stage approach, the conventional region growing (CRG) method, the 3D level set method, and manual segmentation was compared using the Jaccard index, the Dice index, and mean surface distance (MSD). The mean accuracy of the proposed method is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The mean accuracy of CRG is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The mean accuracy of the 3D level set method is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The proposed method shows improved accuracy over CRG and the 3D level set method.
Accurate segmentation of the body of the human mandible from MR images is achieved with the proposed two-stage rule-constrained seedless region growing approach. The accuracy achieved with the two-stage approach is higher than that of CRG and the 3D level set method.
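The Jaccard and Dice overlap indices used above have simple closed forms. A minimal sketch on toy binary masks (the 8x8 arrays are made-up illustrations, not the study's MR data):

```python
import numpy as np

def jaccard(a, b):
    # |A ∩ B| / |A ∪ B| for binary masks
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

def dice(a, b):
    # 2|A ∩ B| / (|A| + |B|) for binary masks
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

seg = np.zeros((8, 8), bool); seg[2:6, 2:6] = True   # toy automatic segmentation
ref = np.zeros((8, 8), bool); ref[3:7, 3:7] = True   # toy manual reference
print(jaccard(seg, ref))   # 9/23 ≈ 0.391
print(dice(seg, ref))      # 18/32 = 0.5625
```

Mean surface distance additionally requires extracting the boundary voxels of both masks and averaging nearest-neighbour distances between the two boundaries, which is omitted here.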
NASA Astrophysics Data System (ADS)
Pavlova, Julia A.; Ivanov, Andrei V.; Maksimova, Natalia V.; Pokholok, Konstantin V.; Vasiliev, Alexander V.; Malakho, Artem P.; Avdeev, Victor V.
2018-05-01
Owing to its macroporous structure and hydrophobic properties, exfoliated graphite (EG) is considered a promising sorbent for oil and liquid hydrocarbons on the water surface. However, collecting EG from the water surface is a problem. One solution is to modify EG with a magnetic compound and collect the EG, together with the sorbed oil, using a magnetic field. In this work, a two-stage method for preparing exfoliated graphite with ferrite phases is proposed. In the first stage, expandable graphite is impregnated in a mixed solution of iron(III) chloride and cobalt(II) or nickel(II) nitrate; in the second stage, the impregnated expandable graphite is thermally exfoliated to form exfoliated graphite containing cobalt and nickel ferrites. This two-stage method makes it possible to obtain an EG-based sorbent modified with ferrimagnetic phases that combines high sorption capacity toward oil (up to 45-51 g/g) with high saturation magnetization (up to 42 emu/g). Moreover, the method produces the magnetic sorbent in a short time (up to 10 s), during which the thermal exfoliation is carried out in air.
Fibronectin on circulating extracellular vesicles as a liquid biopsy to detect breast cancer.
Moon, Pyong-Gon; Lee, Jeong-Eun; Cho, Young-Eun; Lee, Soo Jung; Chae, Yee Soo; Jung, Jin Hyang; Kim, In-San; Park, Ho Yong; Baek, Moon-Chang
2016-06-28
Extracellular vesicles (EVs) secreted from cancer cells have potential for generating cancer biomarker signatures. Fibronectin (FN) was selected as a biomarker candidate because of its presence on the surface of EVs secreted from human breast cancer cell lines. A subsequent study used two types of enzyme-linked immunosorbent assay (ELISA) to determine the presence of these proteins in plasma samples from disease-free individuals (n=70), patients with breast cancer (BC) (n=240), BC patients after surgical resection (n=40), patients with benign breast tumors (n=55), and patients with non-cancerous diseases (thyroiditis, gastritis, hepatitis B, and rheumatoid arthritis; n=80). FN levels were significantly elevated (p < 0.0001) at all stages of BC and returned to normal after tumor removal. The diagnostic accuracy of FN detection on extracellular vesicles (ELISA method 1) (area under the curve, 0.81; 95% CI, 0.76 to 0.86; sensitivity 65.1%, specificity 83.2%) was better than that of FN detection in plasma (ELISA method 2) (area under the curve, 0.77; 95% CI, 0.72 to 0.83; sensitivity 69.2%, specificity 73.3%) in BC. The diagnostic accuracy of plasma FN was similar in early-stage BC and in all BC patients, as well as in the two sets. This liquid biopsy to detect FN on circulating EVs could be a promising method for detecting early breast cancer.
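The accuracy figures reported above (AUC, sensitivity, specificity) can be computed from raw scores. A small sketch with made-up FN levels (the values and the cutoff are illustrative assumptions, not the study's data), using the Mann-Whitney identity for the AUC:

```python
import numpy as np

def auc(pos, neg):
    # AUC via the Mann-Whitney identity: the probability that a random
    # positive outscores a random negative, counting ties as 0.5.
    pos, neg = np.asarray(pos, float), np.asarray(neg, float)
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (pos.size * neg.size)

def sens_spec(pos, neg, cutoff):
    # Sensitivity and specificity when calling "positive" at score >= cutoff.
    return (pos >= cutoff).mean(), (neg < cutoff).mean()

# Toy FN levels (arbitrary units) for cancer vs. disease-free samples
cancer = np.array([2.1, 3.0, 1.2, 2.8])
control = np.array([0.5, 1.0, 1.5, 2.0])
print(auc(cancer, control))            # 0.875
print(sens_spec(cancer, control, 1.5)) # (0.75, 0.5)
```

In practice, the reported sensitivity/specificity pair corresponds to one chosen cutoff on the ROC curve; the AUC summarizes all cutoffs at once.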
Xu, Fuchao; García-Bermejo, Ángel; Malarvannan, Govindan; Gómara, Belén; Neels, Hugo; Covaci, Adrian
2015-07-03
A multi-residue analytical method was developed for the determination of a range of flame retardants (FRs), including polybrominated diphenyl ethers (PBDEs), emerging halogenated FRs (EFRs) and organophosphate FRs (PFRs), in food matrices. An ultrasonication and vacuum assisted extraction (UVAE), followed by a multi-stage clean-up procedure, enabled the removal of up to 1 g of lipid from 2.5 g of freeze-dried food samples and significantly reduced matrix effects. UVAE achieves a waste factor (WF) of about 10%, while the WFs of classical QuEChERS methods usually range between 50 and 90%. The low WF of UVAE leads to a dramatic improvement in sensitivity, along with savings of up to 90% of spiking (internal) standards. Moreover, a two-stage clean-up on Florisil and aminopropyl silica was introduced after UVAE for efficient removal of pigments and residual lipids, which led to cleaner extracts than normally achieved by dispersive solid phase extraction (d-SPE). In this way, the extracts could be concentrated to low volumes, e.g., <100 μL, and the equivalent matrix concentrations were up to 100 g ww/mL. The final analysis of PFRs was performed on GC-EI-MS, while PBDEs and EFRs were measured by GC-ECNI-MS. Validation tests were performed with three food matrices (lean beef, whole chicken egg, and salmon filet), obtaining acceptable recoveries (66-135%) with good repeatability (RSD 1-24%, mean 7%). Method LOQs ranged between 0.008 and 0.04 ng/g dw for PBDEs, between 0.08 and 0.20 ng/g dw for EFRs, and between 1.4 and 3.6 ng/g dw for PFRs. The method was further applied to eight types of food samples (including meat, eggs, fish, and seafood) with lipid contents ranging from 0.1 to 22%. Various FRs were detected above MLOQ levels, demonstrating the wide-ranging applicability of our method. To the best of our knowledge, this is the first method reported for simultaneous analysis of brominated and organophosphate FRs in food matrices. Copyright © 2015 Elsevier B.V. 
All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kelly, Steve E.
The accuracy and precision of a new Isolok sampler configuration were evaluated using a recirculation flow loop and two slurry simulants of Hanford high-level tank waste. Sample concentrations were compared to reference samples collected simultaneously by a two-stage Vezin sampler. The capability of the Isolok sampler to collect samples that accurately reflect the contents of the test loop improved: biases between the Isolok and Vezin samples were greatly reduced for fast-settling particles.
Efficiency of a new bioaerosol sampler in sampling Betula pollen for antigen analyses.
Rantio-Lehtimäki, A; Kauppinen, E; Koivikko, A
1987-01-01
A new bioaerosol sampler, consisting of a Liu-type atmospheric aerosol sampling inlet, a coarse-particle inertial impactor, a two-stage high-efficiency virtual impactor (aerodynamic particle diameters ≥8 μm, 8-2.5 μm, and ≤2.5 μm, respectively; sampling on filters), and a liquid-cooled condenser, was designed, fabricated, and field-tested in sampling birch (Betula) pollen grains and smaller particles containing Betula antigens. Both microscopical (pollen counts) and immunochemical (enzyme-linked immunosorbent assay) analyses of each stage were carried out. The new sampler was significantly more efficient than the Burkard trap, e.g., in sampling particles of Betula pollen size (ca. 25 μm in diameter). This was most prominent during pollen peak periods (e.g., on May 19th, 1985: 9482 Betula pollen grains per m³ of air in the virtual impactor vs. 2540 in the Burkard trap). Betula antigens were also detected in filter stages where no intact pollen grains were found; in the condenser unit, by contrast, antigen concentrations were very low.
A versatile rotary-stage high frequency probe station for studying magnetic films and devices
NASA Astrophysics Data System (ADS)
He, Shikun; Meng, Zhaoliang; Huang, Lisen; Yap, Lee Koon; Zhou, Tiejun; Panagopoulos, Christos
2016-07-01
We present a rotary-stage microwave probe station suitable for magnetic films and spintronic devices. Two stages were designed: one for rotating the field from parallel to perpendicular to the sample plane (out-of-plane), and the other for rotating the field within the sample plane (in-plane). The sample probes and micro-positioners rotate simultaneously with the stages, which allows the field orientation to cover θ from 0° to 90° and φ from 0° to 360°, where θ and φ are the angles between the direction of current flow and the field for out-of-plane and in-plane rotation, respectively. The operation frequency is up to 40 GHz and the magnetic field up to 1 T. The sample holder, vision system, and probe assembly are compactly designed so that the probes can land on a wafer up to 3 cm in diameter. Using homemade multi-pin probes and commercially available high-frequency probes, several applications are demonstrated, including 4-probe DC measurements, determination of domain wall velocity, and spin-transfer-torque ferromagnetic resonance.
Detection of EGFR mutations with mutation-specific antibodies in stage IV non-small-cell lung cancer
2010-01-01
Background Immunohistochemistry (IHC) with mutation-specific antibodies may be an ancillary method of detecting EGFR mutations in lung cancer patients. Methods EGFR mutation status was analyzed by DNA assays, and compared with IHC results in five non-small-cell lung cancer (NSCLC) cell lines and tumor samples from 78 stage IV NSCLC patients. Results IHC correctly identified del 19 in the H1650 and PC9 cell lines, L858R in H1975, and wild-type EGFR in H460 and A549, as well as wild-type EGFR in tumor samples from 22 patients. IHC with the mAb against EGFR with del 19 was highly positive for the protein in all 17 patients with a 15-bp (ELREA) deletion in exon 19, whereas in patients with other deletions, IHC was weakly positive in 3 cases and negative in 9 cases. IHC with the mAb against the L858R mutation showed high positivity for the protein in 25/27 (93%) patients with exon 21 EGFR mutations (all with L858R) but did not identify the L861Q mutation in the remaining two patients. Conclusions IHC with mutation-specific mAbs against EGFR is a promising method for detecting EGFR mutations in NSCLC patients. However, these mAbs should be validated with additional studies to clarify their possible role in routine clinical practice for screening EGFR mutations in NSCLC patients. PMID:21167064
Schramm, Catherine; Vial, Céline; Bachoud-Lévi, Anne-Catherine; Katsahian, Sandrine
2018-01-01
Heterogeneity in treatment efficacy is a major concern in clinical trials. Clustering may help to identify treatment responders and non-responders. In the context of longitudinal cluster analyses, sample size and variability of the times of measurement are the main issues with current methods. Here, we propose a new two-step method for the Clustering of Longitudinal data by using an Extended Baseline (CLEB). The first step relies on a piecewise linear mixed model for repeated measurements with a treatment-time interaction. The second step clusters the random-effect predictions and considers several parametric (model-based) and non-parametric (partitioning, ascendant hierarchical clustering) algorithms. A simulation study compares all options of the CLEB method with the latent-class mixed model. CLEB with the two model-based algorithms was the most robust option. CLEB with the non-parametric algorithms failed when there were unequal variances of treatment effect between clusters or when the subgroups had unbalanced sample sizes. The latent-class mixed model failed when the between-patient slope variability was high. Two real data sets, on a neurodegenerative disease and on obesity, illustrate the CLEB method and show how clustering may help to identify marker(s) of treatment response. Applying the CLEB method in exploratory analysis, as a first stage before setting up stratified designs, can provide a better estimation of the treatment effect in future clinical trials.
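The two-step idea (fit a subject-level longitudinal model, then cluster the subject-level predictions) can be sketched minimally. Note the simplifications: an ordinary per-subject least-squares slope stands in for the piecewise linear mixed model's random-effect predictions, a plain 2-means loop stands in for the parametric and non-parametric clustering algorithms, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1 stand-in: per-subject least-squares slope of outcome vs. time,
# for 40 hypothetical subjects (20 responders, 20 non-responders).
n_sub, n_obs = 40, 6
t = np.arange(n_obs, dtype=float)
true_slope = np.where(np.arange(n_sub) < 20, -1.0, 0.2)
y = true_slope[:, None] * t + rng.normal(0, 0.3, (n_sub, n_obs))
slopes = np.polyfit(t, y.T, 1)[0]                 # one fitted slope per subject

# Step 2: cluster the subject-level predictions (tiny 2-means loop).
c = np.array([slopes.min(), slopes.max()])        # initial cluster centers
for _ in range(20):
    lab = np.abs(slopes[:, None] - c[None, :]).argmin(axis=1)
    c = np.array([slopes[lab == k].mean() for k in (0, 1)])
print(np.bincount(lab))                           # sizes of the recovered subgroups
```

With well-separated treatment effects the responder/non-responder split is recovered exactly; the interesting failure modes described in the abstract (unequal variances, unbalanced subgroups) appear when that separation shrinks relative to the slope-estimation noise.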
Ludovini, Vienna; Bianconi, Fortunato; Siggillino, Annamaria; Piobbico, Danilo; Vannucci, Jacopo; Metro, Giulio; Chiari, Rita; Bellezza, Guido; Puma, Francesco; Della Fazia, Maria Agnese; Servillo, Giuseppe; Crinò, Lucio
2016-05-24
Risk assessment and treatment choice remain a challenge in early non-small-cell lung cancer (NSCLC). The aim of this study was to identify novel genes involved in the risk of early relapse (ER), compared to no relapse (NR), in resected lung adenocarcinoma (AD) patients using a combination of high-throughput technology and computational analysis. We identified 18 patients (n = 13 NR and n = 5 ER) with stage I AD. Frozen samples from ER and NR patients and corresponding normal lung (NL) were subjected to microarray analysis and quantitative PCR (Q-PCR). A gene-network computational analysis was performed to select predictive genes. An independent set of 79 stage I AD samples was used to validate the selected genes by Q-PCR. From the microarray analysis we selected 50 genes, using the fold-change ratio of ER versus NR. They were validated both in pools and individually in patient samples (ER and NR) by Q-PCR. Fourteen increased and 25 decreased genes showed concordance between the two methods. They were used to perform a computational gene-network analysis that identified 4 increased (HOXA10, CLCA2, AKR1B10, FABP3) and 6 decreased (SCGB1A1, PGC, TFF1, PSCA, SPRR1B and PRSS1) genes. Moreover, in an independent dataset of AD samples, we showed that both high FABP3 expression and low SCGB1A1 expression were associated with worse disease-free survival (DFS). Our results indicate that it is possible to define, through gene expression and computational analysis, a characteristic gene profile of patients with an increased risk of relapse that may become a tool for patient selection for adjuvant therapy.
Two-stage atlas subset selection in multi-atlas based image segmentation.
Zhao, Tingting; Ruan, Dan
2015-06-01
Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved at low computational cost. An atlas subset selection scheme is proposed to substitute a low-cost alternative for a significant portion of the computationally expensive full-fledged registration in the conventional scheme. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of the desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed by cross-validation on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with an alternative computation-reduction method, the proposed scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance.
The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
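The two-stage selection logic (cheap preliminary metric to build an augmented subset, refined metric to narrow it to the fusion set) can be sketched with simulated relevance scores. The noise model linking the two metrics below is a hypothetical stand-in for the paper's inference model, and all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical relevance scores for 200 atlases: "refined" is what
# full-fledged registration would give; "preliminary" is a noisy,
# low-cost surrogate of it.
n_atlas, fusion_size, augment_size = 200, 10, 40
refined = rng.random(n_atlas)
preliminary = refined + rng.normal(0, 0.1, n_atlas)

# Stage 1: keep an augmented subset by the cheap metric only.
stage1 = np.argsort(preliminary)[-augment_size:]
# Stage 2: run full registration only on the survivors, then narrow
# to the fusion set by the refined metric.
stage2 = stage1[np.argsort(refined[stage1])[-fusion_size:]]

# How many of the truly best atlases survived the cheap screening?
best = set(np.argsort(refined)[-fusion_size:])
overlap = len(best & set(stage2)) / fusion_size
print(overlap)
```

The paper's derivation of a "proper augmented subset size" addresses exactly this trade-off: a larger stage-1 subset raises the survival probability of the truly best atlases but costs more full registrations.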
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winschel, R. A.; Robbins, G. A.; Burke, F. P.
1986-11-01
Conoco Coal Research Division is characterizing samples of direct coal liquefaction process oils with a variety of analytical techniques to provide a detailed description of the chemical composition of the oils, to more fully understand the interrelationship of process oil composition and process operations, to aid in plant operation, and to lead to process improvements. The approach taken is to obtain analyses of a large number of well-defined process oils taken during periods of known operating conditions and known process performance. A set of thirty-one process oils from the Hydrocarbon Research, Inc. (HRI) Catalytic Two-Stage Liquefaction (CTSL) bench unit was analyzed to provide information on process performance. The Fourier-transform infrared (FTIR) spectroscopic method for the determination of phenolics in coal liquids was further verified. A set of four tetrahydrofuran-soluble products from Purdue Research Foundation's reactions of coal/potassium/crown ether, analyzed by GC/MS and FTIR, was found to consist primarily of paraffins (excluding contaminants). Characterization data (elemental analyses, ¹H-NMR, and phenolic concentrations) were obtained on a set of twenty-seven two-stage liquefaction oils. Two activities were begun but not completed. First, analyses were started on oils from Wilsonville Run 250 (close-coupled ITSL). Also, a carbon isotopic method is being examined for its utility in determining the relative proportions of coal and petroleum products in coprocessing oils.
Magnetic Gauge Instrumentation on the LANL Gas-Driven Two-Stage Gun
NASA Astrophysics Data System (ADS)
Alcon, R. R.; Sheffield, S. A.; Martinez, A. R.; Gustavsen, R. L.
1997-07-01
Our gas-driven two-stage gun was designed and built for initiation studies on insensitive high explosives as well as other equation-of-state experiments on inert materials. Our preferred method of measuring initiation phenomena uses in-situ magnetic particle velocity gauges. To provide the 1-D experimental area needed to accommodate this type of gauging, the two-stage gun has a 50-mm-diameter launch tube. We have used magnetic gauging on our 72-mm bore diameter single-stage gun for over 15 years, and it has proven a very effective technique for all types of shock wave experiments, including those on high explosives. This technique has now been installed on our two-stage gun. We describe the experimental method, as well as some of the difficulties that arose during the installation. Several magnetic gauge experiments have been completed on plastic and high explosive materials, and waveforms obtained in some of the experiments will be discussed. Up to 10 in-situ particle velocity measurements can be made in a single experiment. This new technique is now working quite well, as evidenced by the data. To our knowledge, this is the first time magnetic gauging has been used on a two-stage gun.
Research on the comparison of performance-based concept and force-based concept
NASA Astrophysics Data System (ADS)
Wu, Zeyu; Wang, Dongwei
2011-03-01
There are two philosophies of structural design: the force-based concept and the performance-based concept. If the structure operates in the elastic stage, the two philosophies usually give the same results; beyond that stage, however, the shortcomings of the force-based method are exposed and the merits of the performance-based method become apparent. The pros and cons of each strategy are listed herein, and the structure types best suited to each method are analyzed. Finally, a real structure is evaluated by the adaptive pushover method to verify that the performance-based method is better than the force-based method.
Fell, Shari; Bröckl, Stephanie; Büttner, Mathias; Rettinger, Anna; Zimmermann, Pia; Straubinger, Reinhard K
2016-09-15
Bovine tuberculosis (bTB), caused by Mycobacterium bovis and M. caprae, is a notifiable animal disease in Germany. The diagnostic procedure is based on a prescribed protocol published in the framework of German bTB legislation. In this protocol, small sample volumes are used for DNA extraction followed by real-time PCR analysis. Because mycobacteria tend to concentrate in granulomas, and infected tissue in early stages of infection does not necessarily show visible lesions, DNA extraction from only a small tissue sample (20-40 mg) taken from a randomly chosen spot in the organ, followed by PCR testing, may yield false-negative results. In this study, two DNA extraction methods were developed to process larger sample volumes and thereby increase the detection sensitivity for mycobacterial DNA in animal tissue. The first extraction method is based on magnetic capture, in which specific capture oligonucleotides are utilized. These oligonucleotides are linked to magnetic particles and capture Mycobacterium-tuberculosis-complex (MTC) DNA released from 10-15 g of tissue material. In a second approach, the sediments remaining from the magnetic capture protocol are further processed with a less complex extraction protocol that can be used in daily routine diagnostics. A total of 100 tissue samples from 34 cattle (n = 74) and 18 red deer (n = 26) were analyzed with the developed protocols, and the results were compared to the prescribed protocol. All three extraction methods yielded reliable results in real-time PCR analysis. The use of larger sample volumes increased the sensitivity of DNA detection, as shown by the decrease in Ct-values. Furthermore, five samples that tested negative or questionable with the official extraction protocol were detected as positive by real-time PCR when the alternative extraction methods were used. 
Kappa indices indicated moderate (0.52; protocol 1 vs. protocol 3) to almost perfect agreement (1.00; red deer sample testing with all protocols) between the three extraction protocols. Both new methods yielded increased detection rates for MTC DNA in large sample volumes and consequently improve the official diagnostic protocol.
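The kappa index used above for protocol agreement is straightforward to compute. A minimal sketch with made-up binary test results for two hypothetical protocols (not the study's data):

```python
def cohen_kappa(a, b):
    # Cohen's kappa: agreement between two raters/protocols beyond chance,
    # for paired 0/1 results on the same samples.
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p1a, p1b = sum(a) / n, sum(b) / n                   # positive rates
    pe = p1a * p1b + (1 - p1a) * (1 - p1b)              # chance agreement
    return (po - pe) / (1 - pe)

# Toy results for 10 samples under two hypothetical protocols
p1 = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
p3 = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
print(round(cohen_kappa(p1, p3), 2))  # 0.6, "moderate to substantial" agreement
```

Identical result vectors give kappa = 1.0 (the "almost perfect agreement" case in the abstract), while agreement no better than chance gives kappa near 0.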
Molecular comparison of the sampling efficiency of four types of airborne bacterial samplers.
Li, Kejun
2011-11-15
In the present study, indoor and outdoor air samples were collected using four types of air sampler often used for airborne bacterial sampling: two solid impactors (BioStage and RCS), one liquid impinger (BioSampler), and one filter sampler with two kinds of filters (a gelatin filter and a cellulose acetate filter). The collected air samples were further processed to analyze the diversity and abundance of culturable and total bacteria through standard culture techniques, denaturing gradient gel electrophoresis (DGGE) fingerprinting, and quantitative polymerase chain reaction (qPCR) analysis. The DGGE analysis indicated that the air samples collected using the BioStage and RCS samplers have higher culturable bacterial diversity, whereas the samples collected using the BioSampler and the cellulose acetate filter sampler have higher total bacterial diversity. To obtain more information on the sampled bacteria, some gel bands were excised and sequenced. In terms of sampling efficiency, the qPCR tests indicated that the collected total bacterial concentration was higher in samples collected using the BioSampler and the cellulose acetate filter sampler. In conclusion, the sampling bias and efficiency of the four air sampling systems were compared: the two solid impactors were found to be comparatively efficient for culturable bacterial sampling, whereas the liquid impinger and the cellulose acetate filter sampler were efficient for total bacterial sampling. Copyright © 2011 Elsevier B.V. All rights reserved.
Wall, Leona; Mohr, Annika; Ripoli, Florenza Lüder; Schulze, Nayeli; Penter, Camila Duarte; Hungerbuehler, StephanOscar; Bach, Jan-Peter; Lucas, Karin
2018-01-01
Exercise intolerance is the first symptom of heart disease. Yet an objective and standardised method to assess exercise capacity in a clinical setting is lacking in canine cardiology. In contrast, exercise testing is a powerful diagnostic tool in humans, providing valuable information on prognosis and the impact of therapeutic intervention. To investigate whether an exercise test reveals differences between dogs with early-stage mitral regurgitation (MR) and dogs without cardiac disease, 12 healthy beagles (healthy group, HG) and 12 dogs with presymptomatic MR (CHIEF B1 / B2, patient group, PG) underwent a six-stage submaximal exercise test (ET) on a motorised treadmill. They trotted at their individual comfort speed for three minutes per stage, first without incline, afterwards increasing the incline by 4% for every subsequent stage. Blood samples were taken at rest and during two 3-minute breaks in the course of the test. Further samples were taken after the completion of the exercise test and again after a 3-hour recovery period. Measured parameters included heart rate, lactate, and the cardiac biomarkers N-terminal pro-B-type natriuretic peptide (NT-proBNP) and cardiac troponin I (cTnI). The test was performed again under the same conditions in the same dogs three weeks after the first trial to evaluate individual repeatability. Cardiac biomarkers increased significantly in both the HG and the PG in the course of the test. The increase was more pronounced in CHIEF B1 / B2 dogs than in the HG. NT-proBNP increased from 435 ± 195 to 523 ± 239 pmol/L (HG) and from 690 to 815 pmol/L (PG). cTnI increased from 0.020 to 0.024 ng/mL (HG) and from 0.06 to 0.08 ng/mL (PG). The present study provides a method to assess exercise-induced changes in cardiac biomarkers under clinical conditions. The increase in NT-proBNP and cTnI is more pronounced in dogs with early-stage MR than in healthy dogs. 
Results indicate that measuring the parameters before and after exercise is adequate and that taking blood samples between the different stages of the ET does not provide additional information. Stress echocardiography was inconclusive. It can be concluded that exercise testing, especially in combination with measuring cardiac biomarkers, could be a helpful diagnostic tool in canine cardiology. PMID:29902265
Shi, Haolun; Yin, Guosheng
2018-02-21
Simon's two-stage design is one of the most commonly used methods in phase II clinical trials with binary endpoints. The design tests the null hypothesis that the response rate is less than an uninteresting level against the alternative hypothesis that the response rate is greater than a desirable target level. From a Bayesian perspective, we compute the posterior probabilities of the null and alternative hypotheses given that a promising result is declared in Simon's design. Our study reveals that, because the frequentist hypothesis testing framework places its focus on the null hypothesis, a potentially efficacious treatment identified by rejecting the null under Simon's design could have less than 10% posterior probability of attaining the desirable target level. Due to the indifference region between the null and alternative, rejecting the null does not necessarily mean that the drug achieves the desirable response level. To resolve this ambiguity, we propose a Bayesian enhancement two-stage (BET) design, which guarantees a high posterior probability of the response rate reaching the target level, while allowing for early termination and sample-size savings in case the drug's response rate falls below the clinically uninteresting level. Moreover, the BET design can be naturally adapted to accommodate survival endpoints. We conduct extensive simulation studies to examine the empirical performance of our design and present two trial examples as applications. © 2018, The International Biometric Society.
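The posterior computation described above can be sketched by Monte Carlo: draw a response rate from a prior, simulate a Simon two-stage trial, and condition on a promising result. The design parameters (r1/n1 = 3/13, r/n = 12/43) and the flat prior below are illustrative assumptions, not the paper's settings; the resulting posterior depends strongly on the prior and design chosen, which is exactly the paper's point.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical Simon two-stage parameters (illustrative only)
n1, r1, n, r = 13, 3, 43, 12     # stage sizes and rejection thresholds
p0, p1 = 0.2, 0.4                # uninteresting / desirable response rates
reps = 200_000

p = rng.uniform(0, 1, reps)                 # flat prior on the response rate
x1 = rng.binomial(n1, p)                    # stage-1 responses
x2 = rng.binomial(n - n1, p)                # stage-2 responses
# Continue past stage 1 if x1 > r1; declare promising if total responses > r.
promising = (x1 > r1) & (x1 + x2 > r)

# Posterior probability that a "promising" drug truly reaches the target,
# i.e. P(p >= p1 | promising) under this flat prior.
post_alt = (p[promising] >= p1).mean()
print(round(post_alt, 2))
```

Replacing the flat prior with one concentrated near p0 (a sceptical prior) drives this posterior down sharply, illustrating how rejecting the null can coexist with a low posterior probability of attaining the target level.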
Variability estimation of urban wastewater biodegradable fractions by respirometry.
Lagarde, Fabienne; Tusseau-Vuillemin, Marie-Hélène; Lessard, Paul; Héduit, Alain; Dutrop, François; Mouchel, Jean-Marie
2005-11-01
This paper presents a methodology for assessing the variability of biodegradable chemical oxygen demand (COD) fractions in urban wastewaters. Thirteen raw wastewater samples from combined and separate sewers feeding the same plant were characterised, and two optimisation procedures were applied in order to evaluate the variability in biodegradable fractions and related kinetic parameters. Through an overall optimisation on all the samples, a unique kinetic parameter set was obtained with a three-substrate model including an adsorption stage. This method required powerful numerical treatment, but mitigated the identifiability problem of the usual sample-to-sample optimisation. The results showed that the fractionation of samples collected in the combined sewer was much more variable (standard deviation of 70% of the mean values) than that of the separate sewer samples, and the slowly biodegradable COD fraction was the largest fraction (45% of the total COD on average). Because these samples were collected under various rain conditions, the standard deviations obtained here for the combined sewer biodegradable fractions can be used as a first estimate of the variability of this type of sewer system.
A Science and Risk-Based Pragmatic Methodology for Blend and Content Uniformity Assessment.
Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Doshi, Chetan
2018-04-01
This paper describes a pragmatic approach that can be applied in assessing powder blend and unit dosage uniformity of solid dose products at Process Design, Process Performance Qualification, and Continued/Ongoing Process Verification stages of the Process Validation lifecycle. The statistically based sampling, testing, and assessment plan was developed due to the withdrawal of the FDA draft guidance for industry "Powder Blends and Finished Dosage Units-Stratified In-Process Dosage Unit Sampling and Assessment." This paper compares the proposed Grouped Area Variance Estimate (GAVE) method with an alternate approach outlining the practicality and statistical rationalization using traditional sampling and analytical methods. The approach is designed to fit solid dose processes assuring high statistical confidence in both powder blend uniformity and dosage unit uniformity during all three stages of the lifecycle complying with ASTM standards as recommended by the US FDA.
Discriminative motif discovery via simulated evolution and random under-sampling.
Song, Tao; Gu, Hong
2014-01-01
Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted increasing attention. However, little attention has been devoted to the data imbalance problem, which is one of the main factors affecting the performance of discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the data preprocessing stage, and at the Hidden Markov Model (HMM) training stage, a random under-sampling method is introduced to address the imbalance between the positive and negative datasets. We show that, in the task of discovering targeting motifs for nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that ignore the data imbalance problem, and recover most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
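The random under-sampling step described above can be sketched in a few lines: the majority (negative) set is randomly trimmed to the size of the minority (positive) set before model training. The sequence names and set sizes below are invented, and this shows only the class-balancing idea, not the authors' full HMM pipeline.

```python
import random

def undersample_negatives(positives, negatives, seed=0):
    """Randomly under-sample the (majority) negative set down to the size of
    the positive set, so that training sees balanced classes."""
    rng = random.Random(seed)          # fixed seed for reproducibility
    k = min(len(positives), len(negatives))
    return positives, rng.sample(negatives, k)

# Hypothetical training sets: 40 positive sequences vs. 400 negatives.
pos = [f"seq_p{i}" for i in range(40)]
neg = [f"seq_n{i}" for i in range(400)]
pos_bal, neg_bal = undersample_negatives(pos, neg)
```

In practice one would repeat the draw over several seeds and average, so that discarded negatives still contribute across runs.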
Improved DESI-MS Performance Using Edge Sampling and a Rotational Sample Stage
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertesz, Vilmos; Van Berkel, Gary J
2008-01-01
The position of the surface to be analyzed relative to the sampling orifice or capillary into the mass spectrometer has been known to dramatically affect the observed signal levels in desorption electrospray ionization mass spectrometry (DESI-MS). In analyses of sample spots on planar surfaces, DESI-MS signal intensities as much as five times greater were routinely observed when the bottom of the sampling capillary was appropriately positioned beneath the surface plane ("edge sampling") compared to when the capillary just touched the surface. To take advantage of the optimum "edge sampling" geometry and to maximize the number of samples that could be analyzed in this configuration, a rotational sample stage was integrated into a typical DESI-MS setup. The rapid quantitative determination of caffeine in two diet sport drinks (Diet Turbo Tea, Speed Stack Grape) spiked with an isotopically labeled internal standard demonstrated the utility of this approach.
Lock-in thermal imaging for the early-stage detection of cutaneous melanoma: a feasibility study.
Bonmarin, Mathias; Le Gal, Frédérique-Anne
2014-04-01
This paper theoretically evaluates lock-in thermal imaging for the early-stage detection of cutaneous melanoma. Lock-in thermal imaging is based on periodic thermal excitation of the specimen under test. The resulting surface temperature oscillations are recorded with an infrared camera and allow the detection of variations in the sample's thermophysical properties beneath the surface. In this paper, the steady-state and transient skin surface temperatures are numerically derived for different stages of development of the melanoma lesion using a two-dimensional axisymmetric multilayer heat-transfer model. The transient skin surface temperature signals are demodulated according to the digital lock-in principle to compute both a phase and an amplitude image of the lesions. The phase image can be used to accurately detect cutaneous melanoma at an early stage of development, while the maximal phase shift can give valuable information about the lesion invasion depth. The ability of lock-in thermal imaging to suppress disturbing subcutaneous thermal signals is demonstrated. The method is compared with previously proposed pulse-based approaches, and the influence of the modulation frequency is further discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
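The digital lock-in demodulation step mentioned above amounts to correlating the sampled per-pixel temperature signal with sine/cosine references at the modulation frequency, yielding the amplitude and phase values from which the two images are built. The sampling rate, modulation frequency and synthetic pixel signal below are illustrative assumptions, not values from the paper.

```python
import math

def lockin_demodulate(signal, fs, f_mod):
    """Demodulate a uniformly sampled signal at the modulation frequency f_mod.

    Returns (amplitude, phase_lag_degrees) of the component at f_mod -- the
    per-pixel quantities used to form the amplitude and phase images."""
    n = len(signal)
    i_sum = sum(s * math.cos(2 * math.pi * f_mod * k / fs)
                for k, s in enumerate(signal))
    q_sum = sum(s * math.sin(2 * math.pi * f_mod * k / fs)
                for k, s in enumerate(signal))
    i_comp, q_comp = 2 * i_sum / n, 2 * q_sum / n   # in-phase / quadrature
    return math.hypot(i_comp, q_comp), math.degrees(math.atan2(q_comp, i_comp))

# Synthetic pixel signal: a 2 K oscillation lagging the excitation by 30 deg,
# sampled at 100 Hz over 10 full periods of a 5 Hz modulation.
fs, f_mod = 100.0, 5.0
sig = [2.0 * math.cos(2 * math.pi * f_mod * k / fs - math.radians(30))
       for k in range(200)]
amp, phase = lockin_demodulate(sig, fs, f_mod)
```

Averaging over an integer number of modulation periods, as here, suppresses components at other frequencies, which is the mechanism behind the rejection of subcutaneous thermal disturbances.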
ERIC Educational Resources Information Center
Center, Yola; And Others
The study used a multiple case study method to investigate the quality of the educational and social experiences of elementary-level and secondary-level children with disabilities currently integrated within the Australian regular school system. This second stage of the study used for its sample 23 children with intellectual disabilities, 18 with…
A Two-Stage Composition Method for Danger-Aware Services Based on Context Similarity
NASA Astrophysics Data System (ADS)
Wang, Junbo; Cheng, Zixue; Jing, Lei; Ota, Kaoru; Kansen, Mizuo
Context-aware systems detect a user's physical and social contexts based on sensor networks and provide services that adapt to the user accordingly. Representing, detecting, and managing contexts are important issues in context-aware systems. Composition of contexts is a useful method for these tasks, since it can detect a context by automatically composing small pieces of information for service discovery. Danger-aware services are a kind of context-aware service that requires describing relations between a user and his/her surrounding objects, and between users. However, existing composition methods, when applied to danger-aware services, show the following shortcomings: (1) they provide no explicit method for representing the composition of multiple users' contexts, and (2) they lack a flexible reasoning mechanism based on similarity of contexts, so they can only provide services that exactly follow predefined context-reasoning rules. Therefore, in this paper, we propose a two-stage composition method based on context similarity to solve the above problems. The first stage composes the useful information to represent the context of a single user. The second stage composes multiple users' contexts to provide services by considering the relations between users. Finally, the danger degree of the detected context is computed using the context similarity between the detected context and a predefined context. Context is dynamically represented based on two-stage composition rules and a Situation-theory-based Ontology, which combines the advantages of Ontology and Situation theory. We implemented the system in an indoor ubiquitous environment and evaluated it through two experiments with human subjects. The experimental results show the method is effective, and the accuracy of danger detection is acceptable for a danger-aware system.
Leadership and management curriculum planning for Iranian general practitioners.
Khosravan, Shahla; Karimi Moonaghi, Hossein; Yazdani, Shahram; Ahmadi, Soleiman; Mansoorian, Mohammad Reza
2015-10-01
Leadership and management are two expected features and competencies for general practitioners (GPs). The purpose of this study was leadership and management curriculum planning for GPs, performed on the basis of Kern's curriculum planning cycle. The study was conducted in 2011-2012 in Iran using an explanatory mixed-methods approach. It began with a qualitative phase comprising two focus group discussions and 28 semi-structured interviews with key informants, to capture their experiences and viewpoints on the necessity of management courses for undergraduate medical students, as well as goals, objectives, and educational strategies according to Kern's curriculum planning cycle. The data were used to develop a questionnaire for a quantitative written survey. Results of these two phases, together with a review of medical curricula in other countries and the management curricula of other medical disciplines in Iran, were used in planning the management and leadership curriculum. In the qualitative phase, purposeful sampling and content analysis with constant comparison based on Strauss and Corbin's method were used; descriptive and analytic tests were applied to the quantitative data using SPSS version 14. The qualitative stage of this research yielded six main categories: the necessity of a management course, features and objectives of a management curriculum, proper educational setting, educational methods and strategies, evaluation method, and feedback. In the quantitative stage, 51.6% of the 126 respondents who completed the questionnaire ranked management courses as highly necessary. Coordination of care and clinical leadership was rated the most important role for GPs, with a mean of 6.2, and team working and group dynamics had the first priority among the principles and basics of management, with a mean of 3.59. Other results are presented in the paper.
Results of this study indicated the need to provide educational programs for GPs; they led to a systematic leadership and clinical management curriculum for the general practice discipline, based on the Kern cycle. Implementation and evaluation of this program are recommended.
NASA Astrophysics Data System (ADS)
Kalabukhov, D. S.; Radko, V. M.; Grigoriev, V. A.
2018-01-01
Ultra-low power turbine drives are used as energy sources in auxiliary power systems, energy units, and terrestrial, marine, air and space transport, within the shaft power range Ntd = 0.01…10 kW. In this paper we propose a new approach to the development of surrogate models for evaluating the integrated efficiency of a multistage ultra-low power impulse turbine with pressure stages. The method is based on existing mathematical models of ultra-low power turbine stage efficiency and mass, and has been used in a method for selecting the rational parameters of a two-stage axial ultra-low power turbine. The article describes the basic features of an algorithm for optimizing the two-stage turbine parameters and evaluating the efficiency criteria. The underlying mathematical models are intended for use in the preliminary design of turbine drives. The optimization method was tested in the preliminary design of an air-starter turbine. Validation was carried out by comparing the results of the optimization calculations with numerical gas-dynamic simulation in the Ansys CFX package. The results indicate sufficient accuracy of the surrogate models for axial two-stage turbine parameter selection.
High Resolution Seamless DOM Generation over the Chang'e-5 Landing Area Using LROC NAC Images
NASA Astrophysics Data System (ADS)
Di, K.; Jia, M.; Xin, X.; Liu, B.; Liu, Z.; Peng, M.; Yue, Z.
2018-04-01
Chang'e-5, China's first sample return lunar mission, will be launched in 2019, and the planned landing area is near Mons Rümker in Oceanus Procellarum. High-resolution and high-precision mapping of the landing area is of great importance for supporting scientific analysis and safe landing. This paper proposes a systematic method for large-area seamless digital orthophoto map (DOM) generation, and presents the mapping result of the Chang'e-5 landing area using over 700 LROC NAC images. The developed method consists of two stages of data processing: stage 1 includes subarea block adjustment with the rational function model (RFM) and seamless subarea DOM generation; stage 2 includes whole-area adjustment through registration of the subarea DOMs with a thin plate spline model and seamless DOM mosaicking. The resultant seamless DOM covers a large area (20° longitude × 4° latitude) and is tied to the widely used reference DEM, SLDEM2015. The RMS errors of the tie points are all around half a pixel in image space, indicating a high internal precision, and the RMS errors of the control points are about one grid cell of SLDEM2015, indicating that the resultant DOM is tied to SLDEM2015 well.
Water Stage Forecasting in Tidal Streams during High Water Using EEMD
NASA Astrophysics Data System (ADS)
Chen, Yen-Chang; Kao, Su-Pai; Su, Pei-Yi
2017-04-01
Many factors may affect water stages in tidal streams: not only ocean waves but also stream flow. During high water, two of the most important factors are flood and tide. However, the hydrological processes in tidal streams during high water are nonlinear and nonstationary, and the conventional methods used for forecasting water stages in tidal streams are very complicated. This explains why accurately forecasting water stages in tidal streams, especially during high water, is always a difficult task. This study makes use of Ensemble Empirical Mode Decomposition (EEMD) to analyze the water stages in tidal streams. One of the advantages of EEMD is that it can be used to analyze nonlinear and nonstationary data. EEMD divides the water stage into several intrinsic mode functions (IMFs) and a residual, while the physical meaning is retained during the process. By comparing each IMF's frequency with the tidal frequency, it is possible to identify whether the IMF is affected by tides. The IMFs are then separated into two groups, affected by tide or not, and the IMFs in each group are summed to form a factor. The water stages in tidal streams are therefore described by only two factors, a tidal factor and a flood factor. Finally, regression analysis is used to establish the relationship between the factors at the gaging stations in the tidal stream. Data from 15 typhoon periods on the Tanshui River, whose downstream reach lies in an estuary area, are used to illustrate the accuracy and reliability of the proposed method. The results show that this simple but reliable method is capable of forecasting water stages in tidal streams.
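The grouping of IMFs into tidal and flood factors can be sketched as follows, assuming the IMFs have already been obtained from an EEMD run. The synthetic IMFs, the M2 tidal-frequency reference, the zero-crossing frequency estimate and the tolerance band are all illustrative assumptions, not values from the study.

```python
import math

M2_FREQ = 1.0 / 12.42   # semidiurnal M2 tide, cycles per hour (assumed reference)

def dominant_freq(imf, dt):
    """Rough dominant frequency of an IMF from its zero-crossing count."""
    crossings = sum(1 for a, b in zip(imf, imf[1:]) if a * b < 0)
    return crossings / (2 * dt * (len(imf) - 1))

def group_imfs(imfs, dt, tol=0.5):
    """Sum IMFs into a tidal factor and a flood factor, by comparing each
    IMF's dominant frequency with the tidal frequency (relative tolerance)."""
    tidal = [0.0] * len(imfs[0])
    flood = [0.0] * len(imfs[0])
    for imf in imfs:
        f = dominant_freq(imf, dt)
        target = tidal if abs(f - M2_FREQ) / M2_FREQ < tol else flood
        for k, v in enumerate(imf):
            target[k] += v
    return tidal, flood

# Synthetic hourly IMFs over 200 h: a 12.42 h tidal mode and a slow flood mode.
imf_tide = [math.sin(2 * math.pi * k / 12.42) for k in range(200)]
imf_flood = [math.sin(2 * math.pi * k / 100.0) for k in range(200)]
tidal, flood = group_imfs([imf_tide, imf_flood], dt=1.0)
```

Each factor would then enter the regression relating upstream and downstream gaging stations, as the abstract describes.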
Prednisolone and acupuncture in Bell's palsy: study protocol for a randomized, controlled trial
2011-01-01
Background There are a variety of treatment options for Bell's palsy. Evidence from randomized controlled trials indicates corticosteroids can be used as a proven therapy for Bell's palsy. Acupuncture is one of the most commonly used methods to treat Bell's palsy in China. Recent studies suggest that staging treatment is more suitable for Bell's palsy, according to different path-stages of this disease. The aim of this study is to compare the effects of prednisolone and staging acupuncture in the recovery of the affected facial nerve, and to verify whether prednisolone in combination with staging acupuncture is more effective than prednisolone alone for Bell's palsy in a large number of patients. Methods/Design In this article, we report the design and protocol of a large sample multi-center randomized controlled trial to treat Bell's palsy with prednisolone and/or acupuncture. In total, 1200 patients aged 18 to 75 years within 72 h of onset of acute, unilateral, peripheral facial palsy will be assessed. There are six treatment groups, with four treated according to different path-stages and two not. These patients are randomly assigned to be in one of the following six treatment groups, i.e. 1) placebo prednisolone group, 2) prednisolone group, 3) placebo prednisolone plus acute stage acupuncture group, 4) prednisolone plus acute stage acupuncture group, 5) placebo prednisolone plus resting stage acupuncture group, 6) prednisolone plus resting stage acupuncture group. The primary outcome is the time to complete recovery of facial function, assessed by Sunnybrook system and House-Brackmann scale. The secondary outcomes include the incidence of ipsilateral pain in the early stage of palsy (and the duration of this pain), the proportion of patients with severe pain, the occurrence of synkinesis, facial spasm or contracture, and the severity of residual facial symptoms during the study period. 
Discussion The result of this trial will assess the efficacy of using prednisolone and staging acupuncture to treat Bell's palsy, and to determine a best combination therapy with prednisolone and acupuncture for treating Bell's palsy. Trial Registration ClinicalTrials.gov: NCT01201642 PMID:21693007
Ianni, Federica; Sardella, Roccaldo; Lisanti, Antonella; Gioiello, Antimo; Cenci Goga, Beniamino Terzo; Lindner, Wolfgang; Natalini, Benedetto
2015-12-10
In two-dimensional HPLC (2D-HPLC) "heart-cut" applications, two columns are connected in series via a switching valve and volume fractions from the "primary" column are re-injected on the "secondary" column. The heart-cut 2D-HPLC system described here was implemented by connecting a reversed-phase (RP) column (first dimension) to a chiral column (second dimension) containing a quinidine-based chiral stationary phase. The system was used to evaluate the change in the enantiomeric excess value of dansylated (Dns) amino acids (AAs) in milk samples from two cows with different "California Mastitis Test" scores: negative for sample 1, positive for sample 2. Apart from the co-elution of Dns-Arg/Dns-Gly and the reduced chemoselectivity for Dns-Leu/Dns-allo-Ile, the optimized achiral RP method distinguished the remaining standard Dns-AAs. Dns-AAs were identified in the chromatograms of the real samples, with Dns-Ala, Dns-Arg, Dns-Asp, Dns-Glu, Dns-Ile, Dns-Leu, Dns-Phe and Dns-Val present in higher concentrations. Except for Dns-Arg, the chiral column enabled the RP enantioseparation of all the other compounds (α and RS values up to 1.65 and 8.63, respectively, for Dns-Phe). In sample 2, the amounts of Dns-d-AAs were rather elevated, in particular for Dns-Ala and Dns-Asp, whereas in sample 1, D-isomers were detected for Dns-Ala, Dns-Glu and Dns-Leu. The proposed 2D-HPLC method could be useful for identifying clinical mastitis that is difficult to diagnose. Moreover, a progressive reduction of D-AA levels with the degree of sub-clinical mastitis could allow mathematical models to be built for diagnosing the early stages of mastitis. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun
2017-11-01
In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage objective optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and a Modified Harmony Search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed, in which the first stage minimizes the total length of all paths and the second stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrates the effectiveness of the proposed approach.
Fontana, Ariel R; Patil, Sangram H; Banerjee, Kaushik; Altamirano, Jorgelina C
2010-04-28
A fast and effective microextraction technique is proposed for preconcentration of 2,4,6-trichloroanisole (2,4,6-TCA) from wine samples prior to gas chromatography tandem mass spectrometric (GC-MS/MS) analysis. The proposed technique is based on ultrasonication (US) to favor the emulsification phenomenon during the extraction stage. Several variables influencing the relative response of the target analyte were studied and optimized. Under optimal experimental conditions, 2,4,6-TCA was quantitatively extracted, achieving enhancement factors (EF) ≥ 400 and limits of detection (LODs) of 0.6-0.7 ng L(-1) with relative standard deviations (RSDs) ≤ 11.3% when a 10 ng L(-1) 2,4,6-TCA standard-wine sample blend was analyzed. The calibration graphs for white and red wine were linear within the range of 5-1000 ng L(-1), and determination coefficients (r(2)) were ≥ 0.9995. Validation of the methodology was carried out by the standard addition method at two concentrations (10 and 50 ng L(-1)), achieving recoveries >80% and indicating satisfactory robustness of the method. The methodology was successfully applied to the determination of 2,4,6-TCA in different wine samples.
Fast Industrial Inspection of Optical Thin Film Using Optical Coherence Tomography
Shirazi, Muhammad Faizan; Park, Kibeom; Wijesinghe, Ruchire Eranga; Jeong, Hyosang; Han, Sangyeob; Kim, Pilun; Jeon, Mansik; Kim, Jeehyun
2016-01-01
An application of spectral domain optical coherence tomography (SD-OCT) was demonstrated for fast industrial inspection of an optical thin film panel. An optical thin film sample similar to a liquid crystal display (LCD) panel was examined. Two identical SD-OCT systems were utilized for parallel scanning of a complete sample in half the time. Dual OCT inspection heads were utilized for transverse (fast) scanning, while a stable linear motorized translational stage was used for lateral (slow) scanning. The cross-sectional and volumetric images of the optical thin film sample were acquired to detect defects in glass and other layers that are difficult to observe using visual inspection methods. The rapid inspection enabled by this setup led to the early detection of product defects on the manufacturing line, resulting in a significant improvement in the quality assurance of industrial products. PMID:27690043
NASA Astrophysics Data System (ADS)
Young, Li-Hao; Li, Chiao-Hsin; Lin, Ming-Yeng; Hwang, Bing-Fang; Hsu, Hui-Tsung; Chen, Yu-Cheng; Jung, Chau-Ren; Chen, Kuan-Chi; Cheng, Dung-Hung; Wang, Ven-Shing; Chiang, Hung-Che; Tsai, Perng-Jy
2016-11-01
To reduce sampling artifacts and to improve time-resolved measurements of inorganic aerosol systems, a recently commercialized semi-continuous In-situ Gas and Aerosol Composition (IGAC) monitoring system was evaluated against a reference annular denuder system (ADS; denuder/two-stage filter pack) at a suburban site over a year, during which the average PM2.5 was 37.0 ± 24.8 μg/m3. A suite of eight ions (SO42-, NO3-, Cl-, NH4+, Na+, K+, Ca2+ and Mg2+) and two gases (SO2 and NH3) were the target species. In comparison to the reference ADS method, the IGAC performed well in measuring the major ions SO42-, NO3- and NH4+, and SO2. For those species, the linear slopes, intercepts and R2 values between the two methods all passed the performance evaluation criteria outlined by earlier similar studies. The performance of the IGAC on Cl-, Na+, K+ and NH3 was marginally acceptable, whereas Ca2+ and Mg2+ could not be properly evaluated due to their low concentrations (<0.2 μg/m3) and hence inadequate sample size. The ionic balance of the hourly IGAC samples averaged very close to unity, as did that of the daily ADS samples, though the former was considerably more variable than the latter. The overall performance of the IGAC has been shown to be comparable to other similar monitors, and its improvements are discussed.
Stewart, Gavin B.; Altman, Douglas G.; Askie, Lisa M.; Duley, Lelia; Simmonds, Mark C.; Stewart, Lesley A.
2012-01-01
Background Individual participant data (IPD) meta-analyses that obtain “raw” data from studies rather than summary data typically adopt a “two-stage” approach to analysis whereby IPD within trials generate summary measures, which are combined using standard meta-analytical methods. Recently, a range of “one-stage” approaches which combine all individual participant data in a single meta-analysis have been suggested as providing a more powerful and flexible approach. However, they are more complex to implement and require statistical support. This study uses a dataset to compare “two-stage” and “one-stage” models of varying complexity, to ascertain whether results obtained from the approaches differ in a clinically meaningful way. Methods and Findings We included data from 24 randomised controlled trials, evaluating antiplatelet agents, for the prevention of pre-eclampsia in pregnancy. We performed two-stage and one-stage IPD meta-analyses to estimate overall treatment effect and to explore potential treatment interactions whereby particular types of women and their babies might benefit differentially from receiving antiplatelets. Two-stage and one-stage approaches gave similar results, showing a benefit of using anti-platelets (Relative risk 0.90, 95% CI 0.84 to 0.97). Neither approach suggested that any particular type of women benefited more or less from antiplatelets. There were no material differences in results between different types of one-stage model. Conclusions For these data, two-stage and one-stage approaches to analysis produce similar results. Although one-stage models offer a flexible environment for exploring model structure and are useful where across study patterns relating to types of participant, intervention and outcome mask similar relationships within trials, the additional insights provided by their usage may not outweigh the costs of statistical support for routine application in syntheses of randomised controlled trials. 
Researchers considering undertaking an IPD meta-analysis should not necessarily be deterred by a perceived need for sophisticated statistical methods when combining information from large randomised trials. PMID:23056232
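The "two-stage" approach compared above reduces, in its simplest fixed-effect form, to per-trial summary estimates followed by inverse-variance pooling. The sketch below uses made-up trial counts, a Wald variance for the log relative risk, and a fixed-effect model; the study itself pooled 24 trials of individual participant data and also fitted one-stage models.

```python
import math

def trial_log_rr(events_t, n_t, events_c, n_c):
    """Stage 1: per-trial log relative risk and its Wald variance."""
    log_rr = math.log((events_t / n_t) / (events_c / n_c))
    var = 1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c
    return log_rr, var

def pool_fixed_effect(stage1):
    """Stage 2: inverse-variance weighted pooling, with a 95% CI on the RR scale."""
    weights = [1.0 / v for _, v in stage1]
    est = sum(w * lr for (lr, _), w in zip(stage1, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return math.exp(est), (math.exp(est - 1.96 * se), math.exp(est + 1.96 * se))

# Illustrative (made-up) counts: (events_treat, n_treat, events_ctrl, n_ctrl).
trials = [(30, 200, 40, 200), (25, 150, 28, 150), (60, 400, 75, 400)]
rr, (lo, hi) = pool_fixed_effect([trial_log_rr(*t) for t in trials])
```

A one-stage analysis would instead fit a single (e.g. binomial regression) model to all participants at once, with trial as a covariate or random effect; the abstract's point is that, for these data, both routes gave similar answers.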
Bansal, Virinder Kumar; Misra, Mahesh C; Rajan, Karthik; Kilambi, Ragini; Kumar, Subodh; Krishna, Asuri; Kumar, Atin; Pandav, Chandrakant S; Subramaniam, Rajeshwari; Arora, M K; Garg, Pramod Kumar
2014-03-01
The ideal method for managing concomitant gallbladder stones and common bile duct (CBD) stones is debatable. The currently preferred method is two-stage endoscopic stone extraction followed by laparoscopic cholecystectomy (LC). This prospective randomized trial compared the success and cost effectiveness of single- and two-stage management of patients with concomitant gallbladder and CBD stones. Consecutive patients with concomitant gallbladder and CBD stones were randomized to either single-stage laparoscopic CBD exploration and cholecystectomy (group 1) or endoscopic retrograde cholangiopancreatography (ERCP) for endoscopic extraction of CBD stones followed by LC (group 2). Success was defined as complete clearance of CBD and cholecystectomy by the intended method. Cost effectiveness was measured using the incremental cost-effectiveness ratio. Intention-to-treat analysis was performed to compare outcomes. From February 2009 to October 2012, 168 patients were randomized: 84 to the single-stage procedure (group 1) and 84 to the two-stage procedure (group 2). Both groups were matched with regard to demographic and clinical parameters. The success rates of laparoscopic CBD exploration and ERCP for clearance of CBD were similar (91.7 vs. 88.1 %). The overall success rate also was comparable: 88.1 % in group 1 and 79.8 % in group 2 (p = 0.20). Direct choledochotomy was performed in 83 of the 84 patients. The mean operative time was significantly longer in group 1 (135.7 ± 36.6 vs. 72.4 ± 27.6 min; p ≤ 0.001), but the overall hospital stay was significantly shorter (4.6 ± 2.4 vs. 5.3 ± 6.2 days; p = 0.03). Group 2 had a significantly greater number of procedures per patient (p < 0.001) and a higher cost (p = 0.002). The two groups did not differ significantly in terms of postoperative wound infection rates or major complications. 
Single- and two-stage management for uncomplicated concomitant gallbladder and CBD stones had similar success and complication rates, but the single-stage strategy was better in terms of shorter hospital stay, need for fewer procedures, and cost effectiveness.
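The incremental cost-effectiveness ratio (ICER) used in the trial's economic comparison has a simple form: the difference in costs divided by the difference in effects between the two strategies. The numbers below are hypothetical, not the trial's data; they only illustrate how a strategy that is both cheaper and more successful "dominates" the alternative, which shows up as a negative ICER.

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of
    effect when moving from the reference strategy to the new one."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# Hypothetical figures: a single-stage strategy with lower cost and a higher
# success proportion than the two-stage reference.
value = icer(cost_new=3000.0, effect_new=0.881, cost_ref=3500.0, effect_ref=0.798)
```

When the new strategy costs more but is also more effective, the ICER is positive and is judged against a willingness-to-pay threshold instead.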
Moustakas, Aristides; Evans, Matthew R
2015-02-28
Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, so data initially collected for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to account for missing individuals and to estimate relocation probabilities. Growth rate, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years, size-dependent at early life stages, and size-independent at later life stages for most species of UK hardwood. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size, and including explanatory covariates.
Lu, Jiwen; Erin Liong, Venice; Zhou, Jie
2017-08-09
In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.
Rich, S. S.; Goodarzi, M. O.; Palmer, N. D.; Langefeld, C. D.; Ziegler, J.; Haffner, S. M.; Bryer-Ash, M.; Norris, J. M.; Taylor, K. D.; Haritunians, T.; Rotter, J. I.; Chen, Y-D. I.; Wagenknecht, L. E.; Bowden, D. W.; Bergman, R. N.
2009-01-01
Aims/Hypothesis The goal of this study was to identify genes and regions in the human genome that are associated with the acute insulin response to glucose (AIRg), an important predictor of type 2 diabetes, in Hispanic-American participants from the Insulin Resistance Atherosclerosis Family Study (IRAS FS). Methods A two-stage genome-wide association scan (GWAS) was performed in IRAS FS Hispanic-American samples. In the first stage, 318K single nucleotide polymorphisms (SNPs) were assessed in 229 Hispanic-American DNA samples (from 34 families) from San Antonio, TX. SNPs with the most significant associations with AIRg were genotyped in the entire set of IRAS FS Hispanic-American samples (n = 1190). In chromosomal regions with evidence of association, additional SNPs were genotyped to capture variation in genes. Results No individual SNP achieved genome-wide levels of significance (P < 5 × 10⁻⁷); however, two regions, chromosomes 6p21 and 20p11, had multiple highly ranked SNPs that were associated with AIRg. Additional genotyping in these regions supported the initial evidence for variants contributing to variation in AIRg. One region resides in a gene desert between PXT1 and KCTD20 on 6p21, while the region on 20p11 has several viable candidate genes (ENTPD6, PYGB, GINS1 and R4-691N24.1). Conclusions/Interpretation A GWAS in Hispanic-American samples identified several candidate genes and loci that may be associated with AIRg. These associations explain a small component of variation in AIRg. The genes identified are involved in phosphorylation and ion transport and provide preliminary evidence that these processes have importance in beta cell response. PMID:19430760
Fiolka, Tom; Dressman, Jennifer
2018-03-01
Various types of two-stage in vitro testing have been used in a number of experimental settings. In addition to its application in quality control and for regulatory purposes, two-stage in vitro testing has also been shown to be a valuable technique to evaluate the supersaturation and precipitation behavior of poorly soluble drugs during drug development. The so-called 'transfer model', which is an example of two-stage testing, has provided valuable information about the in vivo performance of poorly soluble, weakly basic drugs by simulating gastrointestinal drug transit from the stomach into the small intestine with a peristaltic pump. The evolution of the transfer model has resulted in various modifications of the experimental set-up. Concomitantly, various research groups have developed simplified approaches to two-stage testing to investigate the supersaturation and precipitation behavior of weakly basic drugs without the necessity of using a transfer pump. Given the diversity among the various two-stage test methods available today, a more harmonized approach needs to be taken to optimize the use of two-stage testing at different stages of drug development. © 2018 Royal Pharmaceutical Society.
Detection of hepatitis E virus (HEV) through the different stages of pig manure composting plants
García, M; Fernández-Barredo, S; Pérez-Gracia, M T
2014-01-01
Hepatitis E virus (HEV) is an increasing cause of acute hepatitis in industrialized countries. The aim of this study was to evaluate the presence of HEV in pig manure composting plants located in Spain. For this purpose, a total of 594 samples were taken in 54 sampling sessions from the different stages of composting treatment in these plants as follows: slurry reception ponds, anaerobic ponds, aerobic ponds, fermentation zone and composting final products. HEV was detected by nested reverse transcription polymerase chain reaction (RT-nested PCR) in four (80%) of the five plants studied, mainly in the first stages of the process. HEV was not detected in any final product (compost) sample destined to be commercialized as a soil fertilizer, suggesting that composting is a suitable method to eliminate HEV and thus reduce the transmission of HEV from pigs to humans. PMID:24206540
Absolute quantification of DNA methylation using microfluidic chip-based digital PCR.
Wu, Zhenhua; Bai, Yanan; Cheng, Zule; Liu, Fangming; Wang, Ping; Yang, Dawei; Li, Gang; Jin, Qinghui; Mao, Hongju; Zhao, Jianlong
2017-10-15
Hypermethylation of CpG islands in the promoter region of many tumor suppressor genes downregulates their expression and, as a result, promotes tumorigenesis. Therefore, detection of DNA methylation status is a convenient diagnostic tool for cancer detection. Here, we report a novel method for the integrative detection of methylation by microfluidic chip-based digital PCR. This method relies on the methylation-sensitive restriction enzyme HpaII, which cleaves unmethylated DNA strands while keeping methylated ones intact. After HpaII treatment, the DNA methylation level is determined quantitatively by microfluidic chip-based digital PCR, with a lower limit of detection of 0.52%. To validate the applicability of this method, promoter methylation of two tumor suppressor genes (PCDHGB6 and HOXA9) was tested in 10 samples of early-stage lung adenocarcinoma and their adjacent non-tumorous tissues. Results obtained with our method were consistent with those from conventional bisulfite pyrosequencing. Combining high sensitivity and low cost, the microfluidic chip-based digital PCR method might provide a promising alternative for the detection of DNA methylation and early diagnosis of epigenetics-related diseases. Copyright © 2017 Elsevier B.V. All rights reserved.
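Digital PCR readouts like the one above are conventionally converted from positive-partition counts to copy numbers with a Poisson correction before ratios are taken. A minimal sketch; the partition counts and the mock-digested reference aliquot below are hypothetical illustration, not values from the paper:

```python
import math

def dpcr_copies(positive, total):
    """Poisson-corrected mean copies per partition on a digital PCR chip.

    With random partitioning, the fraction of negative partitions is
    exp(-lambda), so lambda = -ln(1 - positive/total).
    """
    if positive >= total:
        raise ValueError("chip saturated; Poisson correction undefined")
    return -math.log(1.0 - positive / total)

def methylation_level(pos_digested, pos_reference, total):
    """Estimated methylated fraction of the template.

    pos_digested:  positive partitions after HpaII digestion (only
                   methylated strands survive digestion).
    pos_reference: positive partitions in an undigested reference aliquot.
    total:         partitions per chip.
    """
    return dpcr_copies(pos_digested, total) / dpcr_copies(pos_reference, total)
```

For example, 100 positives out of 1000 partitions in the digested sample against 500 in the reference gives a methylation level of about 15%.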
Night sleep electroencephalogram power spectral analysis in excessive daytime sleepiness disorders.
Reimão, R
1991-06-01
A group of 53 patients (40 males, 13 females) with a mean age of 49 years, ranging from 30 to 70 years, was evaluated for the following excessive daytime sleepiness (EDS) disorders: obstructive sleep apnea syndrome (B4a), periodic movements in sleep (B5a), affective disorder (B2a), and functional psychiatric non-affective disorder (B2b). We considered all adult patients referred to the Center sequentially, with no distinctions other than these three criteria: (a) EDS was the main complaint; (b) right-handed; (c) not using psychotropic drugs for two weeks prior to the all-night polysomnography. EEG (C3/A1, C4/A2) samples from 2 to 10 minutes of each stage of the first REM cycle were chosen. The data were recorded simultaneously on magnetic tape and then fed into a computer for power spectral analysis. The percentage of power (PP) in each band, calculated relative to the total EEG power, was determined for successive sections of 20.4 s for the following frequency bands: delta, theta, alpha and beta. In the whole EDS patient sample, PP tended to decrease progressively from the slowest to the fastest frequency bands in every sleep stage. PP distribution in the delta range increased progressively from stage 1 to stage 4; stage REM levels were close to stage 2 levels. In all EDS patients, interhemispheric coherence was high in every band and sleep stage. In the B4a patient sample, PP tended to decrease progressively from the slowest to the fastest frequency bands in every sleep stage; PP distribution in the delta range increased progressively from stage 1 to stage 4; stage REM levels were between stage 1 and stage 2 levels. (ABSTRACT TRUNCATED AT 250 WORDS)
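The percentage-of-power computation can be sketched as a band-wise summation over a precomputed power spectrum. The band edges below are the conventional clinical values, assumed rather than quoted from the study:

```python
# Percentage of power (PP) per EEG band, relative to total power across
# bands. Band edges are conventional clinical values (an assumption).
BANDS = {"delta": (0.5, 4.0), "theta": (4.0, 8.0),
         "alpha": (8.0, 13.0), "beta": (13.0, 30.0)}

def band_power_percentages(freqs, powers):
    """freqs, powers: parallel lists describing one power spectrum
    (e.g. of a 20.4-s EEG section). Returns {band: percent of total}."""
    in_band = lambda f, lo, hi: lo <= f < hi
    total = sum(p for f, p in zip(freqs, powers)
                if any(in_band(f, lo, hi) for lo, hi in BANDS.values()))
    return {band: 100.0 * sum(p for f, p in zip(freqs, powers)
                              if in_band(f, lo, hi)) / total
            for band, (lo, hi) in BANDS.items()}
```

Repeating this per sleep stage and averaging over sections reproduces the kind of PP profiles the abstract reports (delta dominating, PP falling toward faster bands).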
Label Information Guided Graph Construction for Semi-Supervised Learning.
Zhuang, Liansheng; Zhou, Zihan; Gao, Shenghua; Yin, Jingwen; Lin, Zhouchen; Ma, Yi
2017-09-01
In the literature, most existing graph-based semi-supervised learning methods only use the label information of observed samples in the label propagation stage, while ignoring such valuable information when learning the graph. In this paper, we argue that it is beneficial to consider the label information in the graph learning stage. Specifically, by enforcing the weight of edges between labeled samples of different classes to be zero, we explicitly incorporate the label information into the state-of-the-art graph learning methods, such as the low-rank representation (LRR), and propose a novel semi-supervised graph learning method called semi-supervised low-rank representation. This results in a convex optimization problem with linear constraints, which can be solved by the linearized alternating direction method. Though we take LRR as an example, our proposed method is in fact very general and can be applied to any self-representation graph learning methods. Experimental results on both synthetic and real data sets demonstrate that the proposed graph learning method can better capture the global geometric structure of the data, and therefore is more effective for semi-supervised learning tasks.
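The label constraint at the heart of the method, forcing edge weights between labeled samples of different classes to zero, can be sketched in isolation. Note that in the paper this constraint is embedded inside the LRR optimization itself rather than applied after the fact as here:

```python
def apply_label_constraints(W, labels):
    """Zero out edges between labelled samples of different classes.

    W:      n x n similarity matrix (list of lists).
    labels: length-n list; a class id for labelled samples, None for
            unlabelled ones.
    Edges touching unlabelled samples, and edges within a class, are kept.
    """
    n = len(W)
    for i in range(n):
        for j in range(n):
            if (labels[i] is not None and labels[j] is not None
                    and labels[i] != labels[j]):
                W[i][j] = 0.0
    return W
```

Label propagation on the constrained graph then cannot leak label mass directly between known members of different classes, which is the intuition the abstract appeals to.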
CTEPP STANDARD OPERATING PROCEDURE FOR DAY CARE CENTER SAMPLE SUBJECTS RECRUITMENT (SOP-1.11)
The CTEPP subject recruitment procedures for the daycare center component are described in the SOP. There are two stages in this phase of CTEPP subject recruitment. The objective of the first stage is to enroll daycare centers for the study. Six target counties in each state ar...
de Cássia Ramos do Egypto Queiroga, Rita; Costa, Roberto Germano; Madruga, Marta Suely; de Medeiros, Ariosvaldo Nunes; Dos Santos Garruti, Deborah; Magnani, Marciane; de Souza, Evandro Leite
2016-04-01
This study evaluated the influence of lactation stage (early, middle, late) and management practices (milking hygiene and buck presence) on the sensory attributes of Saanen goat milk. Goats were randomly divided into four groups with respect to different milking sanitary procedures and the presence/absence of the buck in the barn. Milk samples were analyzed for sensory attributes, including quantitative descriptive analysis (QDA) and acceptance. The milking hygiene practice caused no significant influence on microbiological parameters. Results of QDA revealed that the buck presence increased the characteristic odor of milk at the middle and late lactation stages. The off-odor and off-flavor descriptors showed a distinct response, since a higher intensity of these sensory characteristics was noted in the samples obtained from goats maintained without the buck. Odor and flavor contributed most to characterizing the different samples regardless of the management practice and lactation stage. Odor acceptance was influenced only by the lactation stage, while flavor acceptance was influenced only by the presence of the buck. Odor acceptance correlated negatively with off-odor and off-flavor, suggesting that these two sensory attributes impaired the preference for the aroma of the milk samples. © 2015 Japanese Society of Animal Science.
Delayed versus immediate pushing in second stage of labor.
Kelly, Mary; Johnson, Eileen; Lee, Vickie; Massey, Liz; Purser, Debbie; Ring, Karen; Sanderson, Stephanye; Styles, Juanita; Wood, Deb
2010-01-01
Comparison of two different methods for management of the second stage of labor: immediate pushing at complete cervical dilation of 10 cm and delayed pushing 90 minutes after complete cervical dilation. This study was a randomized clinical trial in a labor and delivery unit of a not-for-profit community hospital. A sample of 44 nulliparous mothers with continuous epidural anesthesia was studied after random assignment to treatment groups. Subjects were managed with either immediate or delayed pushing during the second stage of labor at the time cervical dilation was complete. The primary outcome measure was the length of pushing during the second stage of labor. Secondary outcomes included length of the second stage of labor, maternal fatigue and perineal injuries, and fetal heart rate decelerations. Two-tailed, unpaired Student's t-tests and Chi-square analysis were used for data analysis. The level of significance was set at p < .01 following a Bonferroni correction for multiple t-tests. A total of 44 subjects received the study intervention (n = 28 immediate pushing; n = 16 delayed pushing). The delayed pushing group spent significantly less time pushing than the immediate pushing group (38.9 ± 6.9 vs. 78.7 ± 7.9 minutes, respectively, p = .002). Maternal fatigue scores, perineal injuries, and fetal heart rate decelerations were similar for both groups. Delaying pushing for up to 90 minutes after complete cervical dilation resulted in a significant decrease in the time mothers spent pushing without a significant increase in total time in the second stage of labor. In clinical practice, healthcare providers sometimes resist delaying the onset of pushing after the second stage of labor has begun because of a belief that it will increase labor time. This study's finding of a 51% reduction in pushing time when mothers delay pushing for up to 90 minutes, with no significant increase in overall time for the second stage of labor, disputes that concern.
Beale, David J.; Jones, Oliver A. H.; Karpe, Avinash V.; Dayalan, Saravanan; Oh, Ding Yuan; Kouremenos, Konstantinos A.; Ahmed, Warish; Palombo, Enzo A.
2016-01-01
The application of metabolomics to biological samples has been a key focus in systems biology research, which is aimed at the development of rapid diagnostic methods and the creation of personalized medicine. More recently, there has been a strong focus on applying this approach to non-invasively acquired samples, such as saliva and exhaled breath. The analysis of these biological samples, in conjunction with other sample types and traditional diagnostic tests, has resulted in faster and more reliable characterization of a range of health disorders and diseases. As the sampling process involved in collecting exhaled breath and saliva is non-intrusive as well as comparatively low-cost and uses a series of widely accepted methods, it provides researchers with easy access to the metabolites secreted by the human body. Owing to its accuracy and rapid nature, metabolomic analysis of saliva and breath (known as salivaomics and breathomics, respectively) is a rapidly growing field and has shown potential to be effective in detecting and diagnosing the early stages of numerous diseases and infections in preclinical studies. This review discusses the various collection and analysis methods currently applied to two of the least used non-invasive sample types in metabolomics, specifically their application in salivaomics and breathomics research. Some of the salient research completed in this field to date is also assessed and discussed in order to provide a basis to advocate their use and possible future scientific directions. PMID:28025547
NASA Astrophysics Data System (ADS)
Da Silva, A. C.; Hladil, J.; Chadimová, L.; Slavík, L.; Hilgen, F. J.; Bábek, O.; Dekkers, M. J.
2016-12-01
The Early Devonian geological time scale (base of the Devonian at 418.8 ± 2.9 Myr, Becker et al., 2012) suffers from poor age control, with associated large uncertainties between 2.5 and 4.2 Myr on the stage boundaries. Identifying orbital cycles from sedimentary successions can serve as a very powerful chronometer to test and, where appropriate, improve age models. Here, we focus on the Lochkovian and Pragian, the two lowermost Devonian stages. High-resolution magnetic susceptibility (χin; 5 to 10 cm sampling interval) and gamma ray spectrometry (GRS; 25 to 50 cm sampling interval) records were gathered from two main limestone sections, Požár-CS (118 m, spanning the Lochkov and Praha Formations) and Pod Barrandovem (174 m; Praha Formation), both in the Czech Republic. An additional section (Branžovy, 65 m, Praha Formation) was sampled for GRS (every 50 cm). The χin and GRS records are very similar, indicating that χin variations are driven by variations in the samples' paramagnetic clay mineral content, reflecting changes in detrital input. Therefore, climatic variations are very likely captured in our records. Multiple spectral-analysis and statistical techniques, such as continuous wavelet transform, evolutive harmonic analysis, the multi-taper method and average spectral misfit, were used in concert to reach an optimal astronomical interpretation. The Požár-CS section shows distinctly varying sediment accumulation rates. The Lochkovian (essentially equivalent to the Lochkov Formation (Fm.)) is interpreted to include a total of nineteen 405 kyr eccentricity cycles, constraining its duration to 7.7 ± 2.8 Myr. The Praha Fm. includes fourteen 405 kyr eccentricity cycles in the three sampled sections, while the Pragian Stage only includes about four 405 kyr eccentricity cycles, thus exhibiting durations of 5.7 ± 0.6 Myr and 1.7 ± 0.7 Myr respectively. Because the Lochkov Fm. contains an interval with very low sediment accumulation rate and because the Praha Fm. 
was cross-validated in three different sections, the uncertainty in the duration of the Lochkov Fm. and the Lochkovian is larger than that of the Praha Fm. and Pragian. The new floating time scales for the Lochkovian and Pragian stages have an unprecedented precision, with reduction in the uncertainty by a factor of 1.7 for the Lochkovian and of ∼6 for the Pragian. Furthermore, longer orbital modulation cycles are also identified with periodicities of ∼1000 kyr and 2000-2500 kyr.
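The stage durations quoted above follow directly from counting long-eccentricity cycles at 405 kyr each; a quick arithmetic check of that conversion:

```python
# Duration implied by counting long (405 kyr) eccentricity cycles,
# as in the text: duration [Myr] = n_cycles * 0.405.
def duration_myr(n_cycles, cycle_kyr=405):
    return n_cycles * cycle_kyr / 1000.0

lochkovian = duration_myr(19)  # 7.695 Myr, reported as 7.7 +/- 2.8 Myr
praha_fm   = duration_myr(14)  # 5.670 Myr, reported as 5.7 +/- 0.6 Myr
pragian    = duration_myr(4)   # 1.620 Myr, reported as ~1.7 +/- 0.7 Myr
                               # (from "about four" cycles)
```

The quoted uncertainties come from the cycle-counting and tuning analysis itself, not from this multiplication.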
Lestini, Giulia; Dumont, Cyrielle; Mentré, France
2015-01-01
Purpose In this study we aimed to evaluate adaptive designs (ADs) by clinical trial simulation for a pharmacokinetic-pharmacodynamic model in oncology and to compare them with one-stage designs, i.e., when no adaptation is performed, using wrong prior parameters. Methods We evaluated two one-stage designs, ξ0 and ξ*, optimised for prior and true population parameters, Ψ0 and Ψ*, and several ADs (two-, three- and five-stage). All designs had 50 patients. For ADs, the first cohort design was ξ0. The next cohort design was optimised using prior information updated from the previous cohort. Optimal design was based on the determinant of the Fisher information matrix using PFIM. Design evaluation was performed by clinical trial simulations using data simulated from Ψ*. Results Estimation results of two-stage ADs and ξ* were close and much better than those obtained with ξ0. The balanced two-stage AD performed better than two-stage ADs with different cohort sizes. Three- and five-stage ADs were better than two-stage ADs with a small first cohort, but not better than the balanced two-stage design. Conclusions Two-stage ADs are useful when prior parameters are unreliable. In the case of a small first cohort, more adaptations are needed, but these designs are complex to implement. PMID:26123680
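D-optimal design, choosing sampling times that maximize the determinant of the Fisher information matrix, can be illustrated with a toy linear model where the information matrix is proportional to X'X. This is a deliberately simple stand-in for what PFIM does for nonlinear mixed-effects PK/PD models, not the paper's actual computation:

```python
from itertools import combinations

def det2(m):
    """Determinant of a 2x2 matrix given as nested lists."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def d_optimal_times(candidates, k):
    """Pick k sampling times maximising det(X'X) for y = a + b*t.

    For this linear model the Fisher information matrix is proportional
    to X'X with design rows (1, t), so D-optimality reduces to an
    exhaustive search over candidate time subsets.
    """
    best, best_det = None, -1.0
    for times in combinations(candidates, k):
        n = len(times)
        s = sum(times)
        ss = sum(t * t for t in times)
        d = det2([[n, s], [s, ss]])
        if d > best_det:
            best, best_det = times, d
    return best
```

With two samples the criterion here reduces to (t1 - t2)^2, so the optimum spreads the samples as far apart as the candidate grid allows, mirroring the general intuition that informative designs spread observations.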
VNIR hyperspectral background characterization methods in adverse weather conditions
NASA Astrophysics Data System (ADS)
Romano, João M.; Rosario, Dalton; Roth, Luz
2009-05-01
Hyperspectral technology is currently being used by the military to detect regions of interest where potential targets may be located. Weather variability, however, may affect the ability of an algorithm to discriminate possible targets from background clutter. Nonetheless, different background characterization approaches may facilitate the ability of an algorithm to discriminate potential targets over a variety of weather conditions. In a previous paper, we introduced a new autonomous, target-size-invariant background characterization process, Autonomous Background Characterization (ABC), also known as the Parallel Random Sampling (PRS) method, which features a random sampling stage; a parallel process to mitigate the inclusion by chance of target samples into clutter background classes during random sampling; and a fusion of results at the end. In this paper, we demonstrate how different background characterization approaches are able to improve the performance of algorithms over a variety of challenging weather conditions. Using the Mahalanobis distance as the standard algorithm for this study, we compare the performance of different characterization methods, such as global information, two-stage global information, and our proposed method, ABC, using data collected under a variety of adverse weather conditions. For this study, we used ARDEC's Hyperspectral VNIR Adverse Weather data collection, comprised of heavy, light, and transitional fog, light and heavy rain, and low-light conditions.
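The Mahalanobis-distance detector used as the baseline scores each pixel by its distance from the background statistics produced by a characterization method. A minimal two-band sketch with an explicit 2×2 inverse; real VNIR cubes have many more bands, and the background mean and covariance would come from the characterization stage:

```python
def mahalanobis2(x, mean, cov):
    """Mahalanobis distance of a 2-band pixel x from background statistics.

    mean: background mean vector (length 2).
    cov:  2x2 background covariance matrix (list of lists).
    Uses the closed-form 2x2 inverse; a real detector would use a
    full-rank solve over all spectral bands.
    """
    d = [x[0] - mean[0], x[1] - mean[1]]
    det = cov[0][0] * cov[1][1] - cov[0][1] * cov[1][0]
    inv = [[cov[1][1] / det, -cov[0][1] / det],
           [-cov[1][0] / det, cov[0][0] / det]]
    q = (d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
         + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))
    return q ** 0.5
```

Pixels whose distance exceeds a threshold are flagged as potential targets; better background characterization tightens the background statistics and so sharpens this separation.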
Chiang, Wen-Jiuh; Chen, Chihchia; Teng, Chiachien; Gu, Jiangjun
2008-03-01
A great deal of progress has been made on information ethics. What remains insufficient is comparison across countries. This study closely examined such differences using a cross-cultural comparison method. To determine the ethics cognitions and behaviors of students, a comprehensive survey was distributed. The questionnaire used Mason's four essential factors in information ethics: Privacy, Accuracy, Property and Accessibility (PAPA). The sample comprised junior high school students from Kaohsiung, Taiwan, and Nanjing, China, in 2006, obtained through two stages of random sampling conducted via an Internet website. Students could read the online questionnaire in the computer laboratory and then send immediate feedback to the website server. The results showed a divergence in information-ethics cognition and behavior between Kaohsiung and Nanjing school children. Effects of background, and the correlation between cognition and behavior, are compared across the two regions.
Sample extraction is one of the most important steps in arsenic speciation analysis of solid dietary samples. One of the problem areas in this analysis is the partial extraction of arsenicals from seafood samples. The partial extraction allows the toxicity of the extracted arse...
Werner, S.L.; Johnson, S.M.
1994-01-01
As part of its primary responsibility concerning water as a national resource, the U.S. Geological Survey collects and analyzes samples of ground water and surface water to determine water quality. This report describes the method used since June 1987 to determine selected total-recoverable carbamate pesticides present in water samples. High-performance liquid chromatography is used to separate N-methyl carbamates, N-methyl carbamoyloximes, and an N-phenyl carbamate which have been extracted from water and concentrated in dichloromethane. Analytes, surrogate compounds, and reference compounds are eluted from the analytical column within 25 minutes. Two modes of analyte detection are used: (1) a photodiode-array detector measures and records ultraviolet-absorbance profiles, and (2) a fluorescence detector measures and records fluorescence from an analyte derivative produced when analyte hydrolysis is combined with chemical derivatization. Analytes are identified and confirmed in a three-stage process by use of chromatographic retention time, ultraviolet (UV) spectral comparison, and derivatization/fluorescence detection. Quantitative results are based on the integration of single-wavelength UV-absorbance chromatograms and on comparison with calibration curves derived from external analyte standards that are run with samples as part of an instrumental analytical sequence. Estimated method detection limits vary for each analyte, depending on the sample matrix conditions, and range from 0.5 microgram per liter to as low as 0.01 microgram per liter. Reporting levels for all analytes have been set at 0.5 microgram per liter for this method. Corrections on the basis of percentage recoveries of analytes spiked into distilled water are not applied to values calculated for analyte concentration in samples. These values for analyte concentrations instead indicate the quantities recovered by the method from a particular sample matrix.
August, Gerald J; Piehler, Timothy F; Bloomquist, Michael L
2016-01-01
The development of adaptive treatment strategies (ATS) represents the next step in innovating conduct problems prevention programs within a juvenile diversion context. Toward this goal, we present the theoretical rationale, associated methods, and anticipated challenges for a feasibility pilot study in preparation for implementing a full-scale SMART (i.e., sequential, multiple assignment, randomized trial) for conduct problems prevention. The role of a SMART design in constructing ATS is presented. The SMART feasibility pilot study includes a sample of 100 youth (13-17 years of age) identified by law enforcement as early stage offenders and referred for precourt juvenile diversion programming. Prior data on the sample population detail a high level of ethnic diversity and approximately equal representations of both genders. Within the SMART, youth and their families are first randomly assigned to one of two different brief-type evidence-based prevention programs, featuring parent-focused behavioral management or youth-focused strengths-building components. Youth who do not respond sufficiently to brief first-stage programming will be randomly assigned a second time to either extended parent- or youth-focused second-stage programming. Measures of proximal intervention response and measures of potential candidate tailoring variables for developing ATS within this sample are detailed. Results of the described pilot study will include information regarding feasibility and acceptability of the SMART design. This information will be used to refine a subsequent full-scale SMART. The use of a SMART to develop ATS for prevention will increase the efficiency and effectiveness of prevention programming for youth with developing conduct problems.
Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey
2009-01-27
The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Participants were 99 conscripted soldiers whose educational level was senior high school or lower. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off points of CLR. The optimal single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Compared with two-stage positive screening, two-stage window screening increased the area under the curve and the positive predictive value. Moreover, the cost of the two-stage window screening decreased by 59%. The two-stage window screening is thus more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and suggest the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future.
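One plausible reading of the two-cut-off "window" screening can be sketched as a decision rule. The direction of the cut-offs (higher CLR taken to mean better performance) and the exact dispositions outside the window are assumptions for illustration, not details taken from the paper:

```python
def window_screen(clr, wais_r_fiq=None, low=49, high=66):
    """Two-stage 'window' screening sketch.

    Assumed rule (hypothetical): CLR below `low` screens positive for ID
    outright; CLR at or above `high` screens negative outright; scores in
    the [low, high) window are referred to stage 2, where the WAIS-R
    full-scale IQ decides using the paper's ID criterion (FIQ <= 84).
    """
    if clr < low:
        return "positive"
    if clr >= high:
        return "negative"
    if wais_r_fiq is None:
        return "refer"               # stage-2 WAIS-R still needed
    return "positive" if wais_r_fiq <= 84 else "negative"
```

The cost saving the abstract reports comes from only the window cases needing the expensive full WAIS-R.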
NASA Astrophysics Data System (ADS)
Koettig, T.; Maciocha, W.; Bermudez, S.; Rysti, J.; Tavares, S.; Cacherat, F.; Bremer, J.
2017-02-01
In the framework of the luminosity upgrade of the LHC, high-field magnets are under development. Magnetic flux densities of up to 13 T require the use of Nb3Sn superconducting coils. Quench protection becomes challenging due to the high stored energy density and the low stabilizer fraction. The thermal conductivity and diffusivity of the combination of insulating layers and Nb3Sn-based cables are important thermodynamic input parameters for quench protection systems and superfluid helium cooling studies. A two-stage cryocooler-based test stand is used to measure the thermal conductance of the coil sample in two different heat flow directions with respect to the coil package geometry. Variable base temperatures of the experimental platform at the cryocooler allow for a steady-state heat flux method up to 100 K. The heat is applied at wedge-style copper interfaces of the Rutherford cables. The respective temperature difference represents the absolute value of thermal conductance of the sample arrangement. We report on the measurement methodology applied to this kind of non-uniform sample composition and on the evaluation of the resin composite materials used.
Marín-Sáez, Jesús; Romero-González, Roberto; Garrido Frenich, Antonia
2017-10-06
Tropane alkaloids are a wide group of substances that comprises more than 200 compounds, occurring especially in the Solanaceae family. The main aim of this study is the development of a method for the analysis of the principal tropane alkaloids, such as atropine, scopolamine, anisodamine, tropane, tropine, littorine, homatropine, apoatropine, aposcopolamine, scopoline, tropinone, physoperuvine, pseudotropine and cuscohygrine, in cereals and related matrices. To that end, a simple solid-liquid extraction was optimized and a liquid chromatographic method coupled to a single-stage Exactive-Orbitrap was developed. The method was validated, obtaining recoveries in the range of 60-109% (except for some compounds in soy), precision values (expressed as relative standard deviation) lower than 20%, and detection and quantification limits equal to or lower than 2 and 3 μg/kg, respectively. Finally, the method was applied to the analysis of different types of samples, such as buckwheat, linseed, soy and millet, obtaining positives for anisodamine, scopolamine, atropine, littorine and tropinone in a millet flour sample above the quantification limits, whereas atropine and scopolamine were detected in a buckwheat sample below the quantification limit. Samples contaminated with Solanaceae seeds (Datura stramonium and Brugmansia arborea) were also analysed, detecting concentrations up to 693 μg/kg (scopolamine) for samples contaminated with Brugmansia seeds and 1847 μg/kg (atropine) when samples were contaminated with D. stramonium seeds. Copyright © 2017 Elsevier B.V. All rights reserved.
Toward cost-efficient sampling methods
NASA Astrophysics Data System (ADS)
Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie
2015-09-01
The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small number of vertices with high node degree can capture most of the structural information of a complex network. The two proposed sampling methods are efficient in sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulated networks, namely a scale-free network, a random network and a small-world network, and also in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in recovering true network structure characteristics as reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
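One plausible variant of a degree-biased snowball sample always expands the highest-degree frontier node first. The paper's exact expansion rule is not given in the abstract, so the rule below is an illustrative assumption rather than the authors' algorithm:

```python
import heapq

def degree_biased_snowball(adj, seed, k):
    """Sample k nodes, always expanding the highest-degree frontier node.

    adj:  {node: set of neighbours} adjacency structure.
    seed: starting node; k: number of nodes to sample.
    A max-heap (via negated degrees) keeps the frontier ordered so that
    high-degree nodes enter the sample early, even at low sampling rates.
    """
    sampled = []
    frontier = [(-len(adj[seed]), seed)]
    seen = {seed}
    while frontier and len(sampled) < k:
        _, node = heapq.heappop(frontier)   # highest degree first
        sampled.append(node)
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                heapq.heappush(frontier, (-len(adj[nb]), nb))
    return sampled
```

On a star graph, for instance, the hub is pulled into the sample immediately after the seed, which is exactly the bias toward structurally informative nodes that the paper argues for.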
Effects of earthworm casts and zeolite on the two-stage composting of green waste.
Zhang, Lu; Sun, Xiangyang
2015-05-01
Because it helps protect the environment and encourages economic development, composting has become a viable method for organic waste disposal. The objective of this study was to investigate the effects of earthworm casts (EWCs) (at 0.0%, 0.30%, and 0.60%) and zeolite (clinoptilolite, CL) (at 0%, 15%, and 25%) on the two-stage composting of green waste. The combination of EWCs and CL improved the conditions of the composting process and the quality of the compost products in terms of the thermophilic phase, humification, nitrification, microbial numbers and enzyme activities, the degradation of cellulose and hemicellulose, and physico-chemical characteristics and nutrient contents of final composts. The compost matured in only 21 days with the optimized two-stage composting method rather than in the 90-270 days required for traditional composting. The optimal two-stage composting and the best quality compost were obtained with 0.30% EWCs and 25% CL. Copyright © 2015 Elsevier Ltd. All rights reserved.
Chen, Xiaoqin; Li, Ying; Zheng, Hui; Hu, Kaming; Zhang, Hongxing; Zhao, Ling; Li, Yan; Liu, Lian; Mang, Lingling; Yu, Shuyuan
2009-07-01
Acupuncture is one of the most commonly used treatments for Bell's palsy in China, and a variety of acupuncture treatment options exist in clinical practice. Because Bell's palsy progresses through three path-stages (acute, resting, and restoration), whether acupuncture is effective in each path-stage, and which acupuncture treatment is best, are major issues in acupuncture clinical trials on Bell's palsy. In this article, we report the design and protocol of a large-sample multi-center randomized controlled trial of acupuncture for Bell's palsy. There are five acupuncture groups, four organized according to path-stage and one that is not. In total, 900 patients with Bell's palsy are enrolled in this study. These patients are randomly assigned to one of five groups: 1) staging acupuncture, 2) staging acupuncture and moxibustion, 3) staging electro-acupuncture, 4) staging acupuncture along the yangming musculature, or 5) a non-staging acupuncture control group. Outcomes are compared among the five groups using the House-Brackmann scale (Global Score and Regional Score), the Facial Disability Index scale, the Classification Scale of Facial Paralysis, and the WHOQOL-BREF scale, measured before randomization (baseline phase) and after randomization. The results of this trial will assess the efficacy of staging acupuncture and moxibustion for treating Bell's palsy and identify the best of these five acupuncture treatments.
Teaching learning methods of an entrepreneurship curriculum.
Esmi, Keramat; Marzoughi, Rahmatallah; Torkzadeh, Jafar
2015-10-01
One of the most significant elements of entrepreneurship curriculum design is the set of teaching-learning methods, which play a key role in studies and research on such a curriculum. Teaching methods, i.e., systematic, organized, and logical ways of delivering lessons, should be consistent with entrepreneurship goals and contents and should be developed according to the learners' needs. The current study therefore aimed to introduce appropriate, modern, and effective methods of teaching entrepreneurship and to validate them. This is a sequential exploratory mixed-methods study conducted in two stages: a) developing teaching methods for an entrepreneurship curriculum, and b) validating the developed framework. Data were collected through triangulation (study of documents, investigation of the theoretical basics and the literature, and semi-structured interviews with key experts). Since the literature on this topic is very rich and the views of the key experts are vast, directed and summative content analysis was used. In the second stage, the qualitative credibility of the research findings was established using qualitative validation criteria (credibility, confirmability, and transferability) and various techniques, and a reliability test was used to ensure that the qualitative part was reliable. Quantitative validation of the developed framework was conducted using exploratory and confirmatory factor analysis and Cronbach's alpha. The data were gathered by distributing a three-aspect questionnaire (direct-presentation, interactive, and practical-operational teaching methods) with 29 items among 90 curriculum scholars. The target population was selected by means of purposive sampling and a representative sample.
Results obtained from exploratory factor analysis showed that a three-factor structure is appropriate for describing the elements of the teaching-learning methods of an entrepreneurship curriculum. The Kaiser-Meyer-Olkin measure of sampling adequacy equaled 0.72, and Bartlett's test of homogeneity of variances was significant at the 0.0001 level. Except for the internship element, all items had factor loadings higher than 0.3. The results of confirmatory factor analysis confirmed the model's appropriateness, and the criteria for qualitative accreditation were acceptable. The developed model can help instructors select an appropriate method of entrepreneurship teaching and verify that the teaching is on the right path. Moreover, the model is comprehensive, includes all the effective teaching methods in entrepreneurship education, and is based on the qualities, conditions, and requirements of higher education institutions in the Iranian cultural environment.
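The internal-consistency check mentioned above can be reproduced with a minimal Cronbach's alpha computation. This is only an illustrative sketch; the item scores used below are made up, not the study's questionnaire data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha from a list of items, each a list of respondent
    scores: alpha = k/(k-1) * (1 - sum(item variances) / var(total))."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # total score per respondent across all items
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))
```

Perfectly correlated items yield alpha = 1; weakly or negatively related items pull alpha well below the conventional 0.7 acceptability threshold.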
Electrolytic cell-free 57Co deposition for emission Mössbauer spectroscopy
NASA Astrophysics Data System (ADS)
Zyabkin, Dmitry V.; Procházka, Vít; Miglierini, Marcel; Mašláň, Miroslav
2018-05-01
We have developed a simple, inexpensive, and efficient electrochemical method for preparing samples for emission Mössbauer spectroscopy (EMS) and Mössbauer sources. The proposed electrolytic deposition procedure does not require any special setup, not even an electrolytic cell: it uses only an electrode with a droplet of electrolyte on its surface and a second electrode immersed in the droplet. Its performance is demonstrated with two examples, a metallic glass and a Cu stripe. We present a detailed description of the deposition procedure and the resulting emission Mössbauer spectra for both samples. For the Cu stripe, we performed EMS measurements at the different stages of heat treatment required for producing Mössbauer sources with a copper matrix.
Helmel, Michaela; Marchetti-Deschmann, Martina; Raus, Martin; Posch, Andreas E; Herwig, Christoph; Šebela, Marek; Allmaier, Günter
2015-02-01
Penicillin production during fermentation with industrial strains of Penicillium chrysogenum has been a continually discussed research topic since the accidental discovery of the antibiotic. Intact cell mass spectrometry (ICMS) can serve as a fast, novel tool for monitoring fermentation progress during penicillin V production in nearly real time. The method is already used for characterizing microorganisms and differentiating fungal strains; applying ICMS to samples harvested directly from a fermenter is therefore a promising way to obtain fast information about the progress of fungal growth. After optimizing the ICMS method for penicillin V fermentation broth samples, the obtained ICMS data were evaluated by hierarchical cluster analysis or by an in-house software solution written especially for ICMS data comparison. Growth stages of a batch and a fed-batch fermentation of Penicillium chrysogenum were differentiated by one of these statistical approaches. The use of two matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) instruments from different vendors, operated in the linear positive-ion mode, demonstrated the universal applicability of the developed ICMS method. This ICMS method, developed especially for fermentation broth samples, lays the foundation for a fast and easy-to-use way of monitoring the fermentation progress of P. chrysogenum. Copyright © 2014 Elsevier Inc. All rights reserved.
Lo Presti, Rossella; Barca, Emanuele; Passarella, Giuseppe
2010-01-01
Environmental time series are often affected by missing data, and when dealing with such data statistically, the gaps must be filled by estimating the missing values. A large number of statistical techniques are available for this purpose, ranging from very simple methods, such as substituting the sample mean, to very sophisticated ones, such as multiple imputation. A new methodology for missing data estimation is proposed that seeks to merge the obvious advantages of the simplest techniques (e.g., their ease of implementation) with the strength of the newest ones. The proposed method applies two consecutive stages: once a monitoring station has been found to be affected by missing data, the "most similar" monitoring stations are identified among neighbouring stations on the basis of a suitable similarity coefficient; in the second stage, a regressive method is applied to estimate the missing data. In this paper, four different regressive methods are applied and compared in order to determine which is the most reliable for filling in the gaps, using rainfall data series measured in the Candelaro River Basin in southern Italy.
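The two-stage scheme described above (similarity-based station selection, then regression on the selected station) can be sketched as follows. This is a hedged illustration: Pearson correlation as the similarity coefficient and simple linear regression stand in for the paper's four regressive methods, and all names are assumptions.

```python
def fill_gaps(target, neighbours):
    """Two-stage gap filling: pick the neighbour series most correlated
    with `target` over jointly observed periods, then fit a linear
    regression on that overlap to estimate the missing values.
    Series are equal-length lists with None marking gaps."""
    def corr(a, b):
        # Pearson correlation over jointly observed entries
        pairs = [(x, y) for x, y in zip(a, b) if x is not None and y is not None]
        n = len(pairs)
        mx = sum(x for x, _ in pairs) / n
        my = sum(y for _, y in pairs) / n
        sxy = sum((x - mx) * (y - my) for x, y in pairs)
        sx = sum((x - mx) ** 2 for x, _ in pairs) ** 0.5
        sy = sum((y - my) ** 2 for _, y in pairs) ** 0.5
        return sxy / (sx * sy)

    # stage 1: most similar neighbouring station
    best = max(neighbours, key=lambda nb: corr(target, nb))

    # stage 2: least-squares line target ~ best on the overlap
    pairs = [(x, y) for x, y in zip(best, target) if x is not None and y is not None]
    mx = sum(x for x, _ in pairs) / len(pairs)
    my = sum(y for _, y in pairs) / len(pairs)
    slope = (sum((x - mx) * (y - my) for x, y in pairs)
             / sum((x - mx) ** 2 for x, _ in pairs))
    intercept = my - slope * mx
    return [slope * x + intercept if t is None and x is not None else t
            for t, x in zip(target, best)]
```

Swapping the closing regression for any of the four regressive methods compared in the paper leaves stage 1 unchanged, which is the modularity the two-stage design is meant to provide.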
An uncertainty analysis of the flood-stage upstream from a bridge.
Sowiński, M
2006-01-01
The paper begins with the formulation of the problem in the form of a general performance function. Next, the Latin hypercube sampling (LHS) technique, a modified version of the Monte Carlo method, is briefly described. The core uncertainty analysis of the flood stage upstream from a bridge starts with a description of the hydraulic model. The model concept is based on the HEC-RAS model, developed for subcritical flow under a bridge without piers, in which the energy equation is applied. The next section characterizes the basic variables, including a specification of their statistics (means and variances). The problem of correlated variables is then discussed, and assumptions concerning correlation among the basic variables are formulated. The analysis of results is based on LHS ranking lists obtained from the computer package UNCSAM. Results for two examples are given: one for independent and the other for correlated variables.
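The LHS technique mentioned above can be illustrated with a minimal stratified sampler on the unit hypercube; mapping the strata onto the basic variables' actual distributions, and inducing correlation between them, is omitted. A sketch, not the UNCSAM implementation:

```python
import random

def latin_hypercube(n, dims, rng=None):
    """Latin hypercube sample of n points in [0,1)^dims: each axis is
    cut into n equal strata, and every stratum on every axis is hit
    exactly once, unlike plain Monte Carlo sampling."""
    rng = rng or random.Random()
    cols = []
    for _ in range(dims):
        # one uniformly placed point inside each of the n strata,
        # in independently shuffled order per axis
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(p + rng.random()) / n for p in perm])
    return list(zip(*cols))  # n points, each of length dims
```

Because every marginal stratum is sampled exactly once, LHS covers each input's range evenly with far fewer runs than crude Monte Carlo, which is why it suits expensive hydraulic-model evaluations like the HEC-RAS-based one used here.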