Sample records for efficient two-stage approach

  1. Study on a cascade pulse tube cooler with energy recovery: new method for approaching Carnot

    NASA Astrophysics Data System (ADS)

    Wang, L. Y.; Wu, M.; Zhu, J. K.; Jin, Z. Y.; Sun, X.; Gan, Z. H.

    2015-12-01

    A pulse tube cryocooler (PTC) cannot achieve Carnot efficiency because the expansion work must be dissipated at the warm end of the pulse tube. Recovering this dissipated work is key to improving PTC efficiency. A cascade PTC consists of two or more PTC stages connected by well-designed long transmission tubes, with each stage driven by the work recovered from the preceding stage. It is shown that the more stages the cascade has, the closer its efficiency approaches the Carnot efficiency. A two-stage cascade pulse tube cooler, consisting of a primary stage and a secondary stage working at 233 K, was designed, fabricated and tested in our lab. Experimental results show that its efficiency is improved by 33% compared with a single-stage PTC.

  2. Experimental study of wood downdraft gasification for an improved producer gas quality through an innovative two-stage air and premixed air/gas supply approach.

    PubMed

    Jaojaruek, Kitipong; Jarungthammachote, Sompop; Gratuito, Maria Kathrina B; Wongsuwan, Hataitep; Homhual, Suwan

    2011-04-01

    This study conducted experiments on three downdraft gasification approaches: single-stage, conventional two-stage, and an innovative two-stage air and premixed air/gas supply approach. The innovative two-stage approach has two nozzle locations: one for air supply at the combustion zone and the other at the pyrolysis zone for supplying the premixed gas (air and producer gas). The producer gas is partially bypassed, mixed with air, and supplied to burn at the pyrolysis zone. The results show that the producer gas quality of the innovative two-stage approach improved compared with the conventional two-stage approach. The higher heating value (HHV) increased from 5.4 to 6.5 MJ/Nm³. Tar content in the producer gas was reduced to less than 45 mg/Nm³. With this approach, the gas can be fed directly to an internal combustion engine. Furthermore, the gasification thermal efficiency also improved by approximately 14%. The approach thus gave a double benefit of better gas quality and energy savings. Copyright © 2010 Elsevier Ltd. All rights reserved.

  3. Assessing efficiency and effectiveness of Malaysian Islamic banks: A two stage DEA analysis

    NASA Astrophysics Data System (ADS)

    Kamarudin, Norbaizura; Ismail, Wan Rosmanira; Mohd, Muhammad Azri

    2014-06-01

    Islamic banks in Malaysia are indispensable players in the financial industry, given the growing need for syariah-compliant systems. In the banking industry, most recent studies have focused only on operational efficiency and rarely on operational effectiveness. Since the production process of the banking industry can be described as a two-stage process, two-stage Data Envelopment Analysis (DEA) can be applied to measure bank performance. This study was designed to measure the overall performance, in terms of efficiency and effectiveness, of Islamic banks in Malaysia using a two-stage DEA approach. This paper presents the analysis of a DEA model that separates efficiency and effectiveness in order to evaluate the performance of ten selected Islamic banks in Malaysia for the financial year ended 2011. The analysis shows that the average efficiency score exceeds the average effectiveness score; thus Malaysian Islamic banks were more efficient than effective. Furthermore, none of the banks exhibited best practice in both stages, indicating that a bank with better efficiency does not necessarily have better effectiveness.

  4. Determinants of School Efficiency: The Case of Primary Schools in the State of Geneva, Switzerland

    ERIC Educational Resources Information Center

    Huguenin, Jean-Marc

    2015-01-01

    Purpose: The purpose of this paper is to measure school technical efficiency and to identify the determinants of primary school performance. Design/Methodology/Approach: A two-stage data envelopment analysis (DEA) of school efficiency is conducted. At the first stage, DEA is employed to calculate an individual efficiency score for each school. At…

  5. Did There Exist Two Stages of Franklin Bobbitt's Curriculum Theory?

    ERIC Educational Resources Information Center

    Liu, Xing

    2017-01-01

    Franklin Bobbitt is the founder of modern curriculum theory. It is generally held that Bobbitt's theory went through two stages: the first focused on social efficiency with a mechanical and behavioral approach, and the second took a more progressive approach concerned with the lived experience of pupils. A close reading of his so-called…

  6. Using extreme phenotype sampling to identify the rare causal variants of quantitative traits in association studies.

    PubMed

    Li, Dalin; Lewinger, Juan Pablo; Gauderman, William J; Murcray, Cassandra Elizabeth; Conti, David

    2011-12-01

    Variants identified in recent genome-wide association studies based on the common-disease common-variant hypothesis are far from fully explaining the heritability of complex traits. Rare variants may, in part, explain some of the missing heritability. Here, we explored the advantage of extreme phenotype sampling in rare-variant analysis and refined this design framework for future large-scale association studies on quantitative traits. We first proposed a power calculation approach for a likelihood-based analysis method. We then used this approach to demonstrate the potential advantages of extreme phenotype sampling for rare variants. Next, we discussed how this design can influence future sequencing-based association studies from a cost-efficiency perspective (with the phenotyping cost included). Moreover, we discussed the potential of a two-stage design with the extreme sample as the first stage and the remaining nonextreme subjects as the second stage. We demonstrated that this two-stage design is a cost-efficient alternative to the one-stage cross-sectional design or the traditional two-stage design. We then discussed analysis strategies for this extreme two-stage design and proposed a corresponding design optimization procedure. To address practical concerns, for example, measurement error or phenotypic heterogeneity at the very extremes, we examined an approach in which individuals with very extreme phenotypes are discarded. We demonstrated that even with a substantial proportion of these extreme individuals discarded, extreme-based sampling can still be more efficient. Finally, we expanded the current analysis and design framework to accommodate the CMC approach, in which multiple rare variants in the same gene region are analyzed jointly. © 2011 Wiley Periodicals, Inc.
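
    The core idea of extreme phenotype sampling can be sketched in a few lines: restrict the association test to individuals in the phenotype tails. The sketch below is an illustrative simplification (a plain correlation test on the retained extremes), not the authors' likelihood-based method; the quantile cutoff `q` and the simulated effect sizes in the usage note are assumptions.

```python
import numpy as np
from scipy import stats

def extreme_sample_test(genotype, phenotype, q=0.25):
    """Illustrative extreme phenotype sampling: keep only individuals in
    the lower and upper q-quantiles of the phenotype, then run a simple
    correlation test on the retained extremes.

    Returns (p_value, number_of_individuals_retained).
    """
    lo, hi = np.quantile(phenotype, [q, 1.0 - q])
    keep = (phenotype <= lo) | (phenotype >= hi)   # phenotype tails only
    r, p = stats.pearsonr(genotype[keep], phenotype[keep])
    return p, int(keep.sum())
```

    Simulating a rare variant (assumed minor allele frequency 0.05, effect size 2) shows that the variant remains detectable even though only about half the cohort enters the test, which is the cost-efficiency argument made above.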

  7. Using Extreme Phenotype Sampling to Identify the Rare Causal Variants of Quantitative Traits in Association Studies

    PubMed Central

    Li, Dalin; Lewinger, Juan Pablo; Gauderman, William J.; Murcray, Cassandra Elizabeth; Conti, David

    2014-01-01

    Variants identified in recent genome-wide association studies based on the common-disease common-variant hypothesis are far from fully explaining the heritability of complex traits. Rare variants may, in part, explain some of the missing heritability. Here, we explored the advantage of extreme phenotype sampling in rare-variant analysis and refined this design framework for future large-scale association studies on quantitative traits. We first proposed a power calculation approach for a likelihood-based analysis method. We then used this approach to demonstrate the potential advantages of extreme phenotype sampling for rare variants. Next, we discussed how this design can influence future sequencing-based association studies from a cost-efficiency perspective (with the phenotyping cost included). Moreover, we discussed the potential of a two-stage design with the extreme sample as the first stage and the remaining nonextreme subjects as the second stage. We demonstrated that this two-stage design is a cost-efficient alternative to the one-stage cross-sectional design or the traditional two-stage design. We then discussed analysis strategies for this extreme two-stage design and proposed a corresponding design optimization procedure. To address practical concerns, for example, measurement error or phenotypic heterogeneity at the very extremes, we examined an approach in which individuals with very extreme phenotypes are discarded. We demonstrated that even with a substantial proportion of these extreme individuals discarded, extreme-based sampling can still be more efficient. Finally, we expanded the current analysis and design framework to accommodate the CMC approach, in which multiple rare variants in the same gene region are analyzed jointly. PMID:21922541

  8. Enhanced energy conversion efficiency from high strength synthetic organic wastewater by sequential dark fermentative hydrogen production and algal lipid accumulation.

    PubMed

    Ren, Hong-Yu; Liu, Bing-Feng; Kong, Fanying; Zhao, Lei; Xing, Defeng; Ren, Nan-Qi

    2014-04-01

    A two-stage process of sequential dark fermentative hydrogen production and microalgal cultivation was applied to enhance the energy conversion efficiency from high strength synthetic organic wastewater. The ethanol fermentation bacterium Ethanoligenens harbinense B49 was used as the hydrogen producer, and the energy conversion efficiency and chemical oxygen demand (COD) removal efficiency reached 18.6% and 28.3% in dark fermentation. Acetate was the main soluble product in the dark fermentative effluent, which was further utilized by the microalga Scenedesmus sp. R-16. The final algal biomass concentration reached 1.98 g L⁻¹, and the algal biomass was rich in lipid (40.9%) and low in protein (23.3%) and carbohydrate (11.9%). Compared with the single dark fermentation stage, the energy conversion efficiency and COD removal efficiency of the two-stage system increased remarkably, by 101% and 131%, respectively. This research provides a new approach for efficient energy production and wastewater treatment using a two-stage process combining dark fermentation and algal cultivation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. A two-stage DEA approach for environmental efficiency measurement.

    PubMed

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

    The slacks-based measure (SBM) model under constant returns to scale has achieved good results in addressing undesirable outputs, such as waste water and waste gas, when measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out a systematic study of the SBM model considering undesirable outputs, and further extends the SBM model from the perspective of network analysis. The new model can not only evaluate efficiency considering undesirable outputs, but also treat desirable and undesirable outputs separately. The latter advantage resolves the "dependence" problem of outputs, namely that desirable outputs cannot be increased without producing undesirable outputs. The illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a deeper analysis of how to improve the environmental efficiency of decision making units.
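
    As a rough illustration of DEA-style efficiency scoring with undesirable outputs, the sketch below solves an input-oriented CCR model with `scipy`, treating undesirable outputs like inputs. This is a common textbook simplification, not the paper's SBM or two-stage network model; the function name and data layout are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_undesirable(X, Y, B, k):
    """Input-oriented CCR efficiency score for DMU k.

    X: (n, m) inputs, Y: (n, s) desirable outputs, B: (n, u) undesirable
    outputs treated as inputs (a simplification; SBM models differ).
    Returns theta in (0, 1], where 1 means efficient.
    """
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimise theta over [theta, lambda]
    A, b = [], []
    for i in range(X.shape[1]):                 # sum_j lam_j x_ji <= theta * x_ki
        A.append(np.r_[-X[k, i], X[:, i]]); b.append(0.0)
    for u in range(B.shape[1]):                 # undesirable outputs like inputs
        A.append(np.r_[-B[k, u], B[:, u]]); b.append(0.0)
    for r in range(Y.shape[1]):                 # sum_j lam_j y_jr >= y_kr
        A.append(np.r_[0.0, -Y[:, r]]); b.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]
```

    With two DMUs producing the same desirable output, the one using twice the input and emitting twice the undesirable output scores 0.5 against the efficient peer.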

  10. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    PubMed

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two-stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two-stage computations, then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates variations in the two-stage computation and in the formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
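
    The coarse/fine two-stage structure is concrete in the Parareal Algorithm mentioned above. Below is a minimal Parareal sketch for a scalar ODE, assuming forward-Euler coarse and fine propagators (one illustrative choice); the paper's a posteriori error analysis is not reproduced here.

```python
import numpy as np

def parareal(f, y0, t0, t1, n_slices, n_fine, n_iter):
    """Minimal Parareal sketch for y' = f(t, y), scalar y."""
    T = np.linspace(t0, t1, n_slices + 1)
    dT = T[1] - T[0]

    def coarse(t, y):                     # one Euler step per time slice
        return y + dT * f(t, y)

    def fine(t, y):                       # n_fine Euler sub-steps per slice
        h = dT / n_fine
        for i in range(n_fine):
            y = y + h * f(t + i * h, y)
        return y

    # Stage 1: cheap serial coarse sweep gives the initial guess.
    U = np.empty(n_slices + 1)
    U[0] = y0
    for n in range(n_slices):
        U[n + 1] = coarse(T[n], U[n])

    # Stage 2: Parareal correction U_{n+1} = G(U_n^new) + F(U_n^old) - G(U_n^old).
    for _ in range(n_iter):
        Fold = np.array([fine(T[n], U[n]) for n in range(n_slices)])    # parallelizable
        Gold = np.array([coarse(T[n], U[n]) for n in range(n_slices)])
        for n in range(n_slices):
            U[n + 1] = coarse(T[n], U[n]) + Fold[n] - Gold[n]
    return U
```

    For the contractive test problem y' = -y, a handful of iterations reproduces the serial fine solution at the final time while the expensive fine sweeps run concurrently across slices.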

  11. Combining evidence from multiple electronic health care databases: performances of one-stage and two-stage meta-analysis in matched case-control studies.

    PubMed

    La Gamba, Fabiola; Corrao, Giovanni; Romio, Silvana; Sturkenboom, Miriam; Trifirò, Gianluca; Schink, Tania; de Ridder, Maria

    2017-10-01

    Clustering of patients in databases is usually ignored in one-stage meta-analysis of multi-database studies using matched case-control data. The aim of this study was to compare bias and efficiency of such a one-stage meta-analysis with a two-stage meta-analysis. First, we compared the approaches by generating matched case-control data under 5 simulated scenarios, built by varying: (1) the exposure-outcome association; (2) its variability among databases; (3) the confounding strength of one covariate on this association; (4) its variability; and (5) the (heterogeneous) confounding strength of two covariates. Second, we made the same comparison using empirical data from the ARITMO project, a multiple database study investigating the risk of ventricular arrhythmia following the use of medications with arrhythmogenic potential. In our study, we specifically investigated the effect of current use of promethazine. Bias increased for one-stage meta-analysis with increasing (1) between-database variance of exposure effect and (2) heterogeneous confounding generated by two covariates. The efficiency of one-stage meta-analysis was slightly lower than that of two-stage meta-analysis for the majority of investigated scenarios. Based on ARITMO data, there were no evident differences between one-stage (OR = 1.50, CI = [1.08; 2.08]) and two-stage (OR = 1.55, CI = [1.12; 2.16]) approaches. When the effect of interest is heterogeneous, a one-stage meta-analysis ignoring clustering gives biased estimates. Two-stage meta-analysis generates estimates at least as accurate and precise as one-stage meta-analysis. However, in a study using small databases and rare exposures and/or outcomes, a correct one-stage meta-analysis becomes essential. Copyright © 2017 John Wiley & Sons, Ltd.
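
    The second stage of a two-stage meta-analysis is simply a pooling of per-database estimates. A minimal sketch, assuming fixed-effect inverse-variance pooling of log odds ratios (the study also considers between-database heterogeneity, which this sketch ignores):

```python
import numpy as np

def two_stage_pool(log_or, se):
    """Fixed-effect inverse-variance pooling of per-database log odds
    ratios (stage two of a two-stage meta-analysis).

    Returns (pooled_OR, CI_lower, CI_upper) on the odds-ratio scale.
    """
    log_or = np.asarray(log_or, float)
    se = np.asarray(se, float)
    w = 1.0 / se**2                                # inverse-variance weights
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    return np.exp(pooled), np.exp(lo), np.exp(hi)
```

    Each database contributes its own estimate and standard error, so clustering of patients within databases is respected by construction, which is the point of contrast with the naive one-stage analysis above.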

  12. Hierarchical screening for multiple mental disorders.

    PubMed

    Batterham, Philip J; Calear, Alison L; Sunderland, Matthew; Carragher, Natacha; Christensen, Helen; Mackinnon, Andrew J

    2013-10-01

    There is a need for brief, accurate screening when assessing multiple mental disorders. Two-stage hierarchical screening, consisting of brief pre-screening followed by a battery of disorder-specific scales for those who meet diagnostic criteria, may increase the efficiency of screening without sacrificing precision. This study tested whether two-stage hierarchical screening could be more efficient than administering multiple separate tests. Two Australian adult samples (N=1990) with high rates of psychopathology were recruited using Facebook advertising to examine four methods of hierarchical screening for four mental disorders: major depressive disorder, generalised anxiety disorder, panic disorder and social phobia. Using K6 scores to determine whether full screening was required did not increase screening efficiency. However, pre-screening based on two decision tree approaches or item gating led to considerable reductions in the mean number of items presented per disorder screened, with estimated item reductions of up to 54%. The sensitivity of these hierarchical methods approached 100% relative to the full screening battery. Further testing of the hierarchical screening approach based on clinical criteria and in other samples is warranted. The results demonstrate that a two-phase hierarchical approach to screening multiple mental disorders leads to considerable efficiency gains without reducing accuracy. Screening programs should take advantage of pre-screeners based on gating items or decision trees to reduce the burden on respondents. © 2013 Elsevier B.V. All rights reserved.
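
    The item-gating idea can be sketched as a two-stage function: administer a short gate and only present the full disorder scale when the gate is positive. This is illustrative logic only, not the paper's exact K6 or decision-tree rules; the item names and threshold are assumptions.

```python
def gated_screen(responses, gate_items, full_items, gate_threshold):
    """Two-stage gated screening sketch.

    responses: dict mapping item name -> numeric response.
    Returns (full_scale_score_or_None, number_of_items_presented).
    """
    gate_score = sum(responses[i] for i in gate_items)
    items_used = len(gate_items)
    if gate_score < gate_threshold:           # screened out at stage 1
        return None, items_used
    full_score = sum(responses[i] for i in full_items)
    return full_score, items_used + len(full_items)
```

    Respondents who fail the gate answer only the gate items, which is where the reported reduction in mean items presented per disorder comes from.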

  13. A Bayesian Approach for Evaluation of Determinants of Health System Efficiency Using Stochastic Frontier Analysis and Beta Regression.

    PubMed

    Şenel, Talat; Cengiz, Mehmet Ali

    2016-01-01

    Public expenditures on health are one of the most important issues facing governments, and these increasing expenditures are putting pressure on public budgets. Health policy makers have therefore focused on the performance of their health systems, and many countries have introduced reforms to improve that performance. This study investigates the most important determinants of healthcare efficiency for OECD countries using a second-stage approach with Bayesian Stochastic Frontier Analysis (BSFA). There are two steps in this study. First, we measure the healthcare efficiency of 29 OECD countries by BSFA using data from the OECD Health Database. At the second stage, we examine the relationships between healthcare efficiency and the characteristics of healthcare systems across OECD countries using Bayesian beta regression.

  14. A Two-Stage Approach for Medical Supplies Intermodal Transportation in Large-Scale Disaster Responses

    PubMed Central

    Ruan, Junhu; Wang, Xuping; Shi, Yan

    2014-01-01

    We present a two-stage approach for the "helicopters and vehicles" intermodal transportation of medical supplies in large-scale disaster responses. In the first stage, a fuzzy-based method and its heuristic algorithm are developed to select the locations of temporary distribution centers (TDCs) and assign medical aid points (MAPs) to each TDC. In the second stage, an integer-programming model is developed to determine the delivery routes. Numerical experiments verified the effectiveness of the approach and yielded several findings: (i) more TDCs often increase the efficiency and utility of medical supplies; (ii) it is not always better for vehicles to load more medical supplies in emergency responses; (iii) the greater the difference between the traveling speeds of helicopters and vehicles, the more advantageous intermodal transportation is. PMID:25350005

  15. Prevention and control of blood stream infection using the balanced scorecard approach.

    PubMed

    Rohsiswatmo, Rinawati; Rafika, Sarah; Marsubrin, Putri M T

    2014-07-01

    Objective: to obtain a formulation of an effective and efficient strategy to overcome blood stream infection (BSI). Methods: an operational research design with qualitative and quantitative approaches. The study was divided into two stages. Stage I was operational research with a problem-solving approach using qualitative and quantitative methods. Stage II was performed using a quantitative method, in the form of an interventional study on the implementation of the strategy established in stage I. The effective and efficient strategy for the prevention and control of infection in the neonatal unit of Cipto Mangunkusumo (CM) Hospital was established using the Balanced Scorecard (BSC) approach, which involved several related processes. Results: the BSC strategy proved effective and efficient, substantially reducing BSI from 52.31‰ to 1.36‰ in neonates with birth weight (BW) 1000-1499 g (p=0.025), and from 29.96‰ to 1.66‰ in BW 1500-1999 g (p=0.05). Gram-negative bacteria still predominated as the main cause of BSI in the CMH neonatal unit. So far, the sources of the microorganisms were thought to be the environment of the treatment unit (the tap water filter and the humidifying water in the incubator). Significant reductions were also found in the mortality rate of neonates weighing 1000-1499 g at birth, length of stay and hospitalization costs, along with improved customer satisfaction. Conclusion: effective and efficient infection prevention and control using the BSC approach could significantly reduce the rate of BSI. This approach may be applied to adult patients in the intensive care unit with a wide range of adjustments.

  16. Efficiently Identifying Significant Associations in Genome-wide Association Studies

    PubMed Central

    Eskin, Eleazar

    2013-01-01

    Over the past several years, genome-wide association studies (GWAS) have implicated hundreds of genes in common disease. More recently, the GWAS approach has been utilized to identify regions of the genome that harbor variation affecting gene expression, or expression quantitative trait loci (eQTLs). Unlike GWAS applied to clinical traits, where only a handful of phenotypes are analyzed per study, in eQTL studies tens of thousands of gene expression levels are measured and the GWAS approach is applied to each of them. This leads to computing billions of statistical tests and requires substantial computational resources, particularly when applying novel statistical methods such as mixed models. We introduce a novel two-stage testing procedure that identifies all of the significant associations more efficiently than testing all the single nucleotide polymorphisms (SNPs). In the first stage, a small number of informative SNPs, or proxies, across the genome are tested. Based on their observed associations, our approach locates the regions that may contain significant SNPs and only tests additional SNPs from those regions. We show through simulations and analysis of real GWAS datasets that the proposed two-stage procedure increases the computational speed by a factor of 10. Additionally, efficient implementation of our software increases the computational speed relative to state-of-the-art testing approaches by a factor of 75. PMID:24033261
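
    The two-stage procedure described above can be sketched as follows: test proxy SNPs first, then test only SNPs near proxies that pass a relaxed threshold. This is an illustrative reconstruction, assuming a simple correlation test and a fixed window; the paper's actual proxy-selection and thresholding rules are more sophisticated.

```python
import numpy as np
from scipy import stats

def two_stage_scan(genotypes, phenotype, proxy_idx, window, t1):
    """Two-stage association scan sketch.

    Stage 1 tests only the proxy SNPs in proxy_idx; stage 2 tests all SNPs
    within `window` positions of any proxy whose stage-1 p-value is below
    the relaxed threshold t1. Returns {snp_index: p_value} for tested SNPs.
    """
    n_snps = genotypes.shape[1]

    def pval(j):
        r, p = stats.pearsonr(genotypes[:, j], phenotype)
        return p

    pvals = {}
    for j in proxy_idx:                       # stage 1: proxies only
        pvals[j] = pval(j)
    for j in proxy_idx:                       # stage 2: follow up hit regions
        if pvals[j] < t1:
            for k in range(max(0, j - window), min(n_snps, j + window + 1)):
                if k not in pvals:
                    pvals[k] = pval(k)
    return pvals
```

    In the usage below, a variant perfectly tagged by one proxy triggers follow-up tests in its neighbourhood while most of the genome is never tested, which is the source of the speedup.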

  17. A Two-Stage Estimation Method for Random Coefficient Differential Equation Models with Application to Longitudinal HIV Dynamic Data.

    PubMed

    Fang, Yun; Wu, Hulin; Zhu, Li-Xing

    2011-07-01

    We propose a two-stage estimation method for random coefficient ordinary differential equation (ODE) models. A maximum pseudo-likelihood estimator (MPLE) is derived based on a mixed-effects modeling approach, and its asymptotic properties for population parameters are established. The proposed method does not require repeatedly solving ODEs and is computationally efficient, although at the price of some loss of estimation efficiency. However, the method offers an alternative when the exact likelihood approach fails due to model complexity and a high-dimensional parameter space, and it can also serve as a way to obtain starting estimates for more accurate estimation methods. In addition, the proposed method does not need initial values of the state variables to be specified and preserves all the advantages of the mixed-effects modeling approach. The finite sample properties of the proposed estimator are studied via Monte Carlo simulations, and the methodology is also illustrated with an application to an AIDS clinical data set.

  18. Optimizing Parameters of Axial Pressure-Compounded Ultra-Low Power Impulse Turbines at Preliminary Design

    NASA Astrophysics Data System (ADS)

    Kalabukhov, D. S.; Radko, V. M.; Grigoriev, V. A.

    2018-01-01

    Ultra-low power turbine drives are used as energy sources in auxiliary power systems, energy units, and terrestrial, marine, air and space transport within the shaft power range N_td = 0.01…10 kW. In this paper we propose a new approach to the development of surrogate models for evaluating the integrated efficiency of a multistage ultra-low power impulse turbine with pressure stages. The method is based on existing mathematical models of ultra-low power turbine stage efficiency and mass, and has been applied in a method for selecting the rational parameters of a two-stage axial ultra-low power turbine. The article describes the basic features of an algorithm for optimizing two-stage turbine parameters and evaluating efficiency criteria. The underlying mathematical models are intended for use at the preliminary design stage of a turbine drive. The optimization method was tested in the preliminary design of an air starter turbine. Validation was carried out by comparing the results of the optimization calculations with numerical gas-dynamic simulation in the Ansys CFX package. The results indicate sufficient accuracy of the surrogate models used for axial two-stage turbine parameter selection.

  19. Efficient design of gain-flattened multi-pump Raman fiber amplifiers using least squares support vector regression

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Qiu, Xiaojie; Yin, Cunyi; Jiang, Hao

    2018-02-01

    An efficient method to design a broadband gain-flattened Raman fiber amplifier with multiple pumps is proposed based on least squares support vector regression (LS-SVR). A multi-input multi-output LS-SVR model is introduced to replace the complicated solving process of the nonlinear coupled Raman amplification equations. The proposed approach contains two stages: an offline training stage and an online optimization stage. During the offline stage, the LS-SVR model is trained. Owing to the good generalization capability of LS-SVR, the net gain spectrum can then be obtained directly and accurately for any combination of pump wavelengths and powers input to the trained model. During the online stage, we incorporate the LS-SVR model into a particle swarm optimization algorithm to find the optimal pump configuration. The design results demonstrate that the proposed method greatly shortens the computation time and enhances the efficiency of pump parameter optimization for Raman fiber amplifier design.
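
    The offline/online split can be illustrated with a small surrogate-plus-search sketch. Kernel ridge regression stands in for LS-SVR (the two are closely related), and plain random search stands in for particle swarm optimization; the toy objective, kernel width, and regularization value are all assumptions.

```python
import numpy as np

def train_krr(X, y, gamma=10.0, lam=1e-6):
    """Offline stage: fit an RBF kernel ridge regressor as a surrogate
    for an expensive simulator. Returns a callable surrogate(Z)."""
    K = np.exp(-gamma * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

    def surrogate(Z):
        Kz = np.exp(-gamma * ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1))
        return Kz @ alpha

    return surrogate

def online_optimize(surrogate, lo, hi, n_cand=2000, seed=0):
    """Online stage: minimize the cheap surrogate over the box [lo, hi]
    by random search (PSO would play this role in the paper)."""
    rng = np.random.default_rng(seed)
    cand = rng.uniform(lo, hi, size=(n_cand, len(lo)))
    return cand[np.argmin(surrogate(cand))]
```

    Because every candidate is evaluated against the surrogate rather than the full coupled equations, the online search is cheap, which is the computational saving the abstract describes.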

  20. Core compressor exit stage study. Volume 2: Data and performance report for the baseline configuration

    NASA Technical Reports Server (NTRS)

    Wisler, D. C.

    1980-01-01

    The objective of the program is to develop rear stage blading designs that have lower losses in their endwall boundary layer regions. The overall technical approach in this efficiency improvement program utilized General Electric's Low Speed Research Compressor as the principal investigative tool. Tests were conducted in two ways: using four identical stages of blading so that test data would be obtained in a true multistage environment and using a single stage of blading so that comparison with the multistage test results could be made.

  1. Does integration of HIV and sexual and reproductive health services improve technical efficiency in Kenya and Swaziland? An application of a two-stage semi parametric approach incorporating quality measures

    PubMed Central

    Obure, Carol Dayo; Jacobs, Rowena; Guinness, Lorna; Mayhew, Susannah; Vassall, Anna

    2016-01-01

    Theoretically, integration of vertically organized services is seen as an important approach to improving the efficiency of health service delivery. However, there is a dearth of evidence on the effect of integration on the technical efficiency of health service delivery. Furthermore, where technical efficiency has been assessed, there have been few attempts to incorporate quality measures within efficiency measurement models, particularly in sub-Saharan African settings. This paper investigates the technical efficiency and the determinants of technical efficiency of integrated HIV and sexual and reproductive health (SRH) services using data collected from 40 health facilities in Kenya and Swaziland for 2008/2009 and 2010/2011. Incorporating a measure of quality, we estimate the technical efficiency of health facilities and explore the effect of integration and other environmental factors on technical efficiency using a two-stage semi-parametric double bootstrap approach. The empirical results reveal a high degree of inefficiency in the health facilities studied. The mean bias-corrected technical efficiency scores taking quality into consideration varied between 22% and 65%, depending on the data envelopment analysis (DEA) model specification. The number of additional HIV services in the maternal and child health unit, public ownership and facility type have a positive and significant effect on technical efficiency. However, the number of additional HIV and STI services provided in the same clinical room, the proportion of clinical staff to overall staff, the proportion of HIV services provided, and rural location had a negative and significant effect on technical efficiency. The low estimates of technical efficiency and the mixed effects of the measures of integration on efficiency challenge the notion that integration of HIV and SRH services may substantially improve the technical efficiency of health facilities. The analysis of quality and efficiency as separate dimensions of performance suggests that efficiency may be achieved without sacrificing quality. PMID:26803655

  2. Does integration of HIV and sexual and reproductive health services improve technical efficiency in Kenya and Swaziland? An application of a two-stage semi parametric approach incorporating quality measures.

    PubMed

    Obure, Carol Dayo; Jacobs, Rowena; Guinness, Lorna; Mayhew, Susannah; Vassall, Anna

    2016-02-01

    Theoretically, integration of vertically organized services is seen as an important approach to improving the efficiency of health service delivery. However, there is a dearth of evidence on the effect of integration on the technical efficiency of health service delivery. Furthermore, where technical efficiency has been assessed, there have been few attempts to incorporate quality measures within efficiency measurement models, particularly in sub-Saharan African settings. This paper investigates the technical efficiency and the determinants of technical efficiency of integrated HIV and sexual and reproductive health (SRH) services using data collected from 40 health facilities in Kenya and Swaziland for 2008/2009 and 2010/2011. Incorporating a measure of quality, we estimate the technical efficiency of health facilities and explore the effect of integration and other environmental factors on technical efficiency using a two-stage semi-parametric double bootstrap approach. The empirical results reveal a high degree of inefficiency in the health facilities studied. The mean bias-corrected technical efficiency scores taking quality into consideration varied between 22% and 65%, depending on the data envelopment analysis (DEA) model specification. The number of additional HIV services in the maternal and child health unit, public ownership and facility type have a positive and significant effect on technical efficiency. However, the number of additional HIV and STI services provided in the same clinical room, the proportion of clinical staff to overall staff, the proportion of HIV services provided, and rural location had a negative and significant effect on technical efficiency. The low estimates of technical efficiency and the mixed effects of the measures of integration on efficiency challenge the notion that integration of HIV and SRH services may substantially improve the technical efficiency of health facilities. The analysis of quality and efficiency as separate dimensions of performance suggests that efficiency may be achieved without sacrificing quality. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  3. An efficient dictionary learning algorithm and its application to 3-D medical image denoising.

    PubMed

    Li, Shutao; Fang, Leyuan; Yin, Haitao

    2012-02-01

    In this paper, we propose an efficient dictionary learning algorithm for sparse representation of given data and suggest a way to apply this algorithm to 3-D medical image denoising. Our learning approach is composed of two main parts: sparse coding and dictionary updating. In the sparse coding stage, an efficient algorithm named multiple clusters pursuit (MCP) is proposed. The MCP first applies a dictionary structuring strategy to cluster atoms with high coherence together, and then employs a multiple-selection strategy to select several competitive atoms at each iteration. These two strategies greatly reduce the computational complexity of the MCP and help it obtain better sparse solutions. In the dictionary updating stage, an alternating optimization that efficiently approximates the singular value decomposition is introduced. Furthermore, in the 3-D medical image denoising application, a joint 3-D operation is proposed that exploits the learning capability of the presented algorithm to simultaneously capture the correlations within each slice and across nearby slices, thereby obtaining better denoising results. The experiments on both synthetically generated data and real 3-D medical images demonstrate that the proposed approach has superior performance compared to some well-known methods. © 2011 IEEE
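    The MCP algorithm itself is not specified in the abstract; as a point of reference, the sparse coding stage it accelerates is conventionally done with a greedy pursuit such as orthogonal matching pursuit (OMP), sketched here:

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedy k-sparse coding of x over dictionary D.

    D: (dim, n_atoms) with unit-norm columns; returns a sparse coefficient vector.
    """
    residual = x.astype(float).copy()
    support, coef = [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        # re-fit all selected atoms jointly by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef
    z = np.zeros(D.shape[1])
    z[support] = coef
    return z
```

MCP's clustering and multiple-selection strategies amortize the per-iteration atom search that dominates this baseline.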

  4. Impedance-matched Marx generators

    NASA Astrophysics Data System (ADS)

    Stygar, W. A.; LeChien, K. R.; Mazarakis, M. G.; Savage, M. E.; Stoltzfus, B. S.; Austin, K. N.; Breden, E. W.; Cuneo, M. E.; Hutsel, B. T.; Lewis, S. A.; McKee, G. R.; Moore, J. K.; Mulville, T. D.; Muron, D. J.; Reisman, D. B.; Sceiford, M. E.; Wisher, M. L.

    2017-04-01

    We have conceived a new class of prime-power sources for pulsed-power accelerators: impedance-matched Marx generators (IMGs). The fundamental building block of an IMG is a brick, which consists of two capacitors connected electrically in series with a single switch. An IMG comprises a single stage or several stages distributed axially and connected in series. Each stage is powered by a single brick or several bricks distributed azimuthally within the stage and connected in parallel. The stages of a multistage IMG drive an impedance-matched coaxial transmission line with a conical center conductor. When the stages are triggered sequentially to launch a coherent traveling wave along the coaxial line, the IMG achieves electromagnetic-power amplification by triggered emission of radiation. Hence a multistage IMG is a pulsed-power analogue of a laser. To illustrate the IMG approach to prime power, we have developed conceptual designs of two ten-stage IMGs with LC time constants on the order of 100 ns. One design includes 20 bricks per stage, and delivers a peak electrical power of 1.05 TW to a matched-impedance 1.22-Ω load. The design generates 113 kV per stage and has a maximum energy efficiency of 89%. The other design includes a single brick per stage, delivers 68 GW to a matched-impedance 19-Ω load, generates 113 kV per stage, and has a maximum energy efficiency of 90%. For a given electrical-power-output time history, an IMG is less expensive and slightly more efficient than a linear transformer driver, since an IMG does not use ferromagnetic cores.
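    The quoted figures are mutually consistent under the matched-load relation P = V²/Z, with V the erected voltage (stage count × stage voltage). A quick check (the ~67 GW result is close to the quoted 68 GW; the small gap is presumably rounding in the abstract):

```python
# Sanity check of the quoted IMG figures: ten 113-kV stages launching a
# coherent wave give an erected voltage V = 10 * 113 kV, and the peak power
# delivered to a matched load of impedance Z is P = V**2 / Z.
stages, v_stage = 10, 113e3
v = stages * v_stage                 # 1.13 MV erected voltage

p_20_brick = v**2 / 1.22             # ~1.05 TW into the 1.22-ohm load
p_1_brick = v**2 / 19.0              # ~67 GW into the 19-ohm load
print(f"{p_20_brick / 1e12:.2f} TW, {p_1_brick / 1e9:.0f} GW")
```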

  5. Multidisciplinary Multiobjective Optimal Design for Turbomachinery Using Evolutionary Algorithm

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This report summarizes Dr. Lian's efforts toward developing a robust and efficient tool for multidisciplinary and multi-objective optimal design for turbomachinery using evolutionary algorithms. This work consisted of two stages. In the first stage (July 2003 to June 2004), Dr. Lian focused on building essential capabilities required for the project. More specifically, he worked on two subjects: an enhanced genetic algorithm (GA) and an integrated optimization system with a GA and a surrogate model. In the second stage (July 2004 to February 2005), Dr. Lian formulated aerodynamic optimization and structural optimization into a multi-objective optimization problem and performed multidisciplinary and multi-objective optimizations on a transonic compressor blade based on the proposed model. His numerical results showed that the proposed approach can effectively reduce the blade weight and increase the stage pressure ratio in an efficient manner. In addition, the new design was structurally safer than the original design. Five conference papers and three journal papers were published on this topic by Dr. Lian.

  6. A practical model for the train-set utilization: The case of Beijing-Tianjin passenger dedicated line in China

    PubMed Central

    Li, Xiaomeng; Yang, Zhuo

    2017-01-01

    As a sustainable transportation mode, high-speed railway (HSR) has become an efficient way to meet huge travel demand. However, due to high acquisition and maintenance costs, it is impossible to build enough infrastructure and purchase enough train-sets. Great efforts are required to improve the transport capability of HSR. The utilization efficiency of train-sets (the carrying tools of HSR) is one of the most important factors in the transport capacity of HSR. In order to enhance the utilization efficiency of the train-sets, this paper proposes a train-set circulation optimization model that minimizes the total connection time. An innovative two-stage approach, comprising segment generation and segment combination, was designed to solve this model. In order to verify the feasibility of the proposed approach, an experiment was carried out on the Beijing-Tianjin passenger dedicated line, fulfilling a 174-trip train diagram. The model results showed that, compared with the traditional Ant Colony Algorithm (ACA), the utilization efficiency of train-sets can be increased from 43.4% (ACA) to 46.9% (two-stage), and one train-set can be saved while fulfilling the same transportation tasks. The approach proposed in the study is faster and more stable than the traditional ones; using it, HSR staff can draw up the train-set circulation plan more quickly, and the utilization efficiency of the HSR system is also improved. PMID:28489933
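    The two-stage heuristic itself is not given in the abstract, but the connection-time objective it minimizes can be illustrated with a naive greedy first-fit chaining of trips into train-set circulations (trip times and the turnaround buffer below are invented for illustration):

```python
def build_circulations(trips, turnaround=10):
    """Greedy first-fit chaining of trips into train-set circulations.

    trips: list of (departure, arrival) times in minutes.
    Returns (chains of trip indices, total connection time); each chain is one
    train-set, and connection time is the idle time between consecutive trips.
    """
    pending = sorted(range(len(trips)), key=lambda i: trips[i][0])
    chains, total_conn = [], 0
    while pending:
        chain = [pending.pop(0)]
        while True:
            ready = trips[chain[-1]][1] + turnaround   # earliest feasible next departure
            nxt = next((i for i in pending if trips[i][0] >= ready), None)
            if nxt is None:
                break
            total_conn += trips[nxt][0] - trips[chain[-1]][1]
            pending.remove(nxt)
            chain.append(nxt)
        chains.append(chain)
    return chains, total_conn
```

For four trips (0-60, 80-140, 90-150, 160-220) with a 10-minute turnaround, this yields two circulations with 40 minutes of total connection time; the paper's segment-combination stage searches for assignments that push that total lower.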

  7. A high power, continuous-wave, single-frequency fiber amplifier at 1091 nm and frequency doubling to 545.5 nm

    NASA Astrophysics Data System (ADS)

    Stappel, M.; Steinborn, R.; Kolbe, D.; Walz, J.

    2013-07-01

    We present a high power single-frequency ytterbium fiber amplifier system with an output power of 30 W at 1091 nm. The amplifier system consists of two stages: a preamplifier stage in which amplified spontaneous emission is efficiently suppressed (>40 dB), and a high power amplifier with an efficiency of 52%. Two different approaches to frequency doubling are compared. We achieve 8.6 W at 545.5 nm by single-pass frequency doubling in a MgO-doped periodically poled stoichiometric LiTaO3 crystal and up to 19.3 W at 545.5 nm by frequency doubling with a lithium-triborate crystal in an external enhancement cavity.

  8. Core compressor exit stage study. Volume 4: Data and performance report for the best stage configuration

    NASA Technical Reports Server (NTRS)

    Wisler, D. C.

    1981-01-01

    The core compressor exit stage study program develops rear stage blading designs that have lower losses in their endwall boundary layer regions. The test data and performance results for the best stage configuration, consisting of Rotor-B running with Stator-B, are described. The technical approach in this efficiency improvement program utilizes a low speed research compressor. Tests were conducted in two ways: (1) using four identical stages of blading to obtain test data in a true multistage environment, and (2) using a single stage of blading for comparison with the multistage test results. The effects of increased rotor tip clearances and circumferential groove casing treatment are evaluated.

  9. Parallax-Robust Surveillance Video Stitching

    PubMed Central

    He, Botao; Yu, Shaohua

    2015-01-01

    This paper presents a parallax-robust video stitching technique for temporally synchronized surveillance video. An efficient two-stage video stitching procedure is proposed in this paper to build wide Field-of-View (FOV) videos for surveillance applications. In the stitching model calculation stage, we develop a layered warping algorithm to align the background scenes; the algorithm is location-dependent and proves more robust to parallax than traditional global projective warping methods. In the selective seam updating stage, we propose a change-detection-based optimal seam selection approach to avert ghosting and artifacts caused by moving foregrounds. Experimental results demonstrate that our procedure can efficiently stitch multi-view videos into a wide-FOV video output without ghosting and noticeable seams. PMID:26712756
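    The layered, location-dependent warping is not reproduced here; for reference, the global projective warping that it improves upon maps every point through a single 3×3 homography. A minimal sketch (the translation-only matrix in the usage example is invented):

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an (N, 2) array of image points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide
```

A location-dependent scheme instead applies different local warps in different image regions, which is what makes it tolerant of parallax.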

  10. A Computationally Efficient Method for Polyphonic Pitch Estimation

    NASA Astrophysics Data System (ADS)

    Zhou, Ruohua; Reiss, Joshua D.; Mattavelli, Marco; Zoia, Giorgio

    2009-12-01

    This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of notes played on commonly used musical instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
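    The RTFI itself is beyond an abstract-length sketch, but the harmonic-grouping step can be illustrated on a toy magnitude spectrum: each pitch candidate's energy is the sum of spectral energy at its harmonic multiples, and the preliminary estimate is a peak in the resulting pitch energy spectrum (all numbers here are invented):

```python
import numpy as np

def pitch_energy(spectrum, freqs, candidates, n_harm=5):
    """Harmonic grouping: for each candidate f0, sum the spectral energy
    at the bins nearest its first n_harm harmonics."""
    out = []
    for f0 in candidates:
        e = 0.0
        for h in range(1, n_harm + 1):
            idx = int(np.argmin(np.abs(freqs - h * f0)))
            e += spectrum[idx]
        out.append(e)
    return np.array(out)
```

With energy concentrated at 100, 200 and 300 Hz, the 100 Hz candidate collects all three harmonics and wins the peak-pick, while 80 and 120 Hz collect none.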

  11. Energy efficient engine sector combustor rig test program

    NASA Technical Reports Server (NTRS)

    Dubiel, D. J.; Greene, W.; Sundt, C. V.; Tanrikut, S.; Zeisser, M. H.

    1981-01-01

    Under the NASA-sponsored Energy Efficient Engine program, Pratt & Whitney Aircraft has successfully completed a comprehensive combustor rig test using a 90-degree sector of an advanced two-stage combustor with a segmented liner. Initial testing utilized a combustor with a conventional louvered liner and demonstrated that the Energy Efficient Engine two-stage combustor configuration is a viable system for controlling exhaust emissions, with the capability to meet all aerothermal performance goals. Goals for both carbon monoxide and unburned hydrocarbons were surpassed and the goal for oxides of nitrogen was closely approached. In another series of tests, an advanced segmented liner configuration with a unique counter-parallel FINWALL cooling system was evaluated at engine sea level takeoff pressure and temperature levels. These tests verified the structural integrity of this liner design. Overall, the results from the program have provided a high level of confidence to proceed with the scheduled Combustor Component Rig Test Program.

  12. A two-stage ultrafiltration and nanofiltration process for recycling dairy wastewater.

    PubMed

    Luo, Jianquan; Ding, Luhui; Qi, Benkun; Jaffrin, Michel Y; Wan, Yinhua

    2011-08-01

    A two-stage ultrafiltration and nanofiltration (UF/NF) process for the treatment of model dairy wastewater was investigated to recycle nutrients and water from the wastewater. Ultracel PLGC and NF270 membranes were found to be the most suitable for this purpose. In the first stage, protein and lipid were concentrated by the Ultracel PLGC UF membrane and could be used for algae cultivation to produce biodiesel and biofuel; the permeate from UF was then concentrated by the NF270 membrane in the second stage to obtain lactose in the retentate and reusable water in the permeate, while the NF retentate could be recycled for anaerobic digestion to produce biogas. With this approach, most of the dairy wastewater could be recycled to produce reusable water and substrates for bioenergy production. Compared with the single NF process, this two-stage UF/NF process had a higher efficiency and less membrane fouling. Copyright © 2011 Elsevier Ltd. All rights reserved.

  13. A hybrid, coupled approach for modeling charged fluids from the nano to the mesoscale

    DOE PAGES

    Cheung, James; Frischknecht, Amalie L.; Perego, Mauro; ...

    2017-07-20

    Here, we develop and demonstrate a new, hybrid simulation approach for charged fluids, which combines the accuracy of the nonlocal, classical density functional theory (cDFT) with the efficiency of the Poisson–Nernst–Planck (PNP) equations. The approach is motivated by the fact that the more accurate description of the physics in the cDFT model is required only near the charged surfaces, while away from these regions the PNP equations provide an acceptable representation of the ionic system. We formulate the hybrid approach in two stages. The first stage defines a coupled hybrid model in which the PNP and cDFT equations act independently on two overlapping domains, subject to suitable interface coupling conditions. At the second stage we apply the principles of the alternating Schwarz method to the hybrid model by using the interface conditions to define the appropriate boundary conditions and volume constraints exchanged between the PNP and the cDFT subdomains. Numerical examples with two representative examples of ionic systems demonstrate the numerical properties of the method and its potential to reduce the computational cost of a full cDFT calculation, while retaining the accuracy of the latter near the charged surfaces.

  14. A hybrid, coupled approach for modeling charged fluids from the nano to the mesoscale

    NASA Astrophysics Data System (ADS)

    Cheung, James; Frischknecht, Amalie L.; Perego, Mauro; Bochev, Pavel

    2017-11-01

    We develop and demonstrate a new, hybrid simulation approach for charged fluids, which combines the accuracy of the nonlocal, classical density functional theory (cDFT) with the efficiency of the Poisson-Nernst-Planck (PNP) equations. The approach is motivated by the fact that the more accurate description of the physics in the cDFT model is required only near the charged surfaces, while away from these regions the PNP equations provide an acceptable representation of the ionic system. We formulate the hybrid approach in two stages. The first stage defines a coupled hybrid model in which the PNP and cDFT equations act independently on two overlapping domains, subject to suitable interface coupling conditions. At the second stage we apply the principles of the alternating Schwarz method to the hybrid model by using the interface conditions to define the appropriate boundary conditions and volume constraints exchanged between the PNP and the cDFT subdomains. Numerical examples with two representative examples of ionic systems demonstrate the numerical properties of the method and its potential to reduce the computational cost of a full cDFT calculation, while retaining the accuracy of the latter near the charged surfaces.
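    The alternating Schwarz idea can be seen on a toy 1-D Laplace problem standing in for the PNP/cDFT pair: two overlapping subdomains are solved in turn, exchanging interface values, until the iterates agree with the global solution u(x) = x. (The subdomain layout and boundary data below are invented for illustration; each subdomain solve of u'' = 0 is exact linear interpolation.)

```python
# Alternating Schwarz on u'' = 0 over [0, 1], u(0) = 0, u(1) = 1, with
# overlapping subdomains A = [0, 0.6] and B = [0.4, 1].  A is solved with the
# interface value at 0.6 supplied by B, then B with the value at 0.4 supplied
# by A; the fixed point is the global solution u(x) = x.
def schwarz(iters=20):
    b = 0.0                                   # initial guess for u(0.6)
    for _ in range(iters):
        a = (0.4 / 0.6) * b                   # solve on A: u(0)=0, u(0.6)=b -> u(0.4)
        b = a + ((0.6 - 0.4) / (1.0 - 0.4)) * (1.0 - a)   # solve on B -> u(0.6)
    return a, b

a, b = schwarz()   # converges geometrically to (0.4, 0.6)
```

The hybrid method of the paper plays the same game with the interface conditions carrying boundary data and volume constraints between the PNP and cDFT subdomains.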

  15. A hybrid, coupled approach for modeling charged fluids from the nano to the mesoscale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, James; Frischknecht, Amalie L.; Perego, Mauro

    Here, we develop and demonstrate a new, hybrid simulation approach for charged fluids, which combines the accuracy of the nonlocal, classical density functional theory (cDFT) with the efficiency of the Poisson–Nernst–Planck (PNP) equations. The approach is motivated by the fact that the more accurate description of the physics in the cDFT model is required only near the charged surfaces, while away from these regions the PNP equations provide an acceptable representation of the ionic system. We formulate the hybrid approach in two stages. The first stage defines a coupled hybrid model in which the PNP and cDFT equations act independently on two overlapping domains, subject to suitable interface coupling conditions. At the second stage we apply the principles of the alternating Schwarz method to the hybrid model by using the interface conditions to define the appropriate boundary conditions and volume constraints exchanged between the PNP and the cDFT subdomains. Numerical examples with two representative examples of ionic systems demonstrate the numerical properties of the method and its potential to reduce the computational cost of a full cDFT calculation, while retaining the accuracy of the latter near the charged surfaces.

  16. Nonlinear electro-optic tuning of plasmonic nano-filter

    NASA Astrophysics Data System (ADS)

    Kotb, Rehab; Ismail, Yehea; Swillam, Mohamed A.

    2015-03-01

    Efficient, easy and accurate tuning techniques for a plasmonic nano-filter are investigated. The proposed filter supports both blue and red shifts in the resonance wavelength. By varying the refractive index by a very small amount (on the order of 10^-3), the resonance wavelength can be controlled efficiently. Using a Pockels material, electrical tuning of the filter response is demonstrated. In addition, the behavior of the proposed filter can be controlled optically using a Kerr material. A new approach of multi-stage electro-optic control is introduced. By cascading two stages, filling the first stage with Pockels material and the second stage with Kerr material, the output response of the second stage can be controlled by controlling the output response of the first stage electrically. Due to the sharp response of the proposed filter, a 60 nm shift in the resonance wavelength per 10 V is achieved. This nano-filter has a compact size, low loss, sharp response and wide tuning range, which are highly desirable in many biological and sensing applications.

  17. A multi-stage drop-the-losers design for multi-arm clinical trials.

    PubMed

    Wason, James; Stallard, Nigel; Bowden, Jack; Jennison, Christopher

    2017-02-01

    Multi-arm multi-stage trials can improve the efficiency of the drug development process when multiple new treatments are available for testing. A group-sequential approach can be used to design multi-arm multi-stage trials, using an extension to Dunnett's multiple-testing procedure. The actual sample size used in such a trial is a random variable with high variability. This can cause problems when applying for funding, as the cost will also be highly variable. This motivates a type of design that provides the efficiency advantages of a group-sequential multi-arm multi-stage design, but has a fixed sample size. One such design is the two-stage drop-the-losers design, in which a number of experimental treatments, and a control treatment, are assessed at a prescheduled interim analysis. The best-performing experimental treatment and the control treatment then continue to a second stage. In this paper, we discuss extending this design to more than two stages, which is shown to considerably reduce the sample size required. We also compare the resulting sample size requirements to the sample size distribution of analogous group-sequential multi-arm multi-stage designs. The sample size required for a multi-stage drop-the-losers design is usually higher than, but close to, the median sample size of a group-sequential multi-arm multi-stage trial. In many practical scenarios, the disadvantage of a slight loss in average efficiency would be overcome by the huge advantage of a fixed sample size. We assess the impact of delay between recruitment and assessment as well as unknown variance on the drop-the-losers designs.
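    A minimal simulation conveys the fixed-sample-size property: dropping the worst remaining arm at each interim makes the total n deterministic, unlike a group-sequential design. (This sketch omits the concurrent control arm and any formal final hypothesis test; the arm effects and stage sizes are invented.)

```python
import random

def drop_the_losers(effects, n_per_stage, seed=0):
    """Simulate a multi-stage drop-the-losers trial: at each interim the
    worst-performing remaining arm (by stage mean) is dropped.

    Returns (index of the surviving arm, total sample size used).  The total
    sample size depends only on the number of arms and n_per_stage, never on
    the observed data, which is the design's selling point over
    group-sequential alternatives."""
    rng = random.Random(seed)
    arms, total_n = list(range(len(effects))), 0
    while len(arms) > 1:
        means = {}
        for a in arms:
            obs = [rng.gauss(effects[a], 1.0) for _ in range(n_per_stage)]
            total_n += n_per_stage
            means[a] = sum(obs) / n_per_stage
        arms.remove(min(arms, key=means.get))
    return arms[0], total_n
```

With four arms and 20 patients per arm per stage, the trial always uses (4 + 3 + 2) × 20 = 180 patients, whatever the data.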

  18. Cost and efficiency of disaster waste disposal: A case study of the Great East Japan Earthquake.

    PubMed

    Sasao, Toshiaki

    2016-12-01

    This paper analyzes the cost and efficiency of waste disposal associated with the Great East Japan Earthquake. The following two analyses were performed: (1) a parametric approach, an ordinary least squares (OLS) regression, to estimate the factors that affect disposal costs; and (2) a non-parametric approach, a two-stage data envelopment analysis (DEA), to analyze the efficiency of each municipality and identify the best-performing disaster waste management. Our results indicate that a higher recycling rate of disaster waste and a larger amount of tsunami sediment decrease the average disposal costs. Our results also indicate that area-wide management increases the average cost. In addition, the efficiency scores were observed to vary widely by municipality, and more temporary incinerators and secondary waste stocks improve the efficiency scores. However, it is likely that the radioactive contamination from the Fukushima Daiichi nuclear power station influenced the results. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Measurement of Low Carbon Economy Efficiency with a Three-Stage Data Envelopment Analysis: A Comparison of the Largest Twenty CO2 Emitting Countries

    PubMed Central

    Liu, Xiang; Liu, Jia

    2016-01-01

    This paper employs a three-stage approach to estimate low carbon economy efficiency in the largest twenty CO2 emitting countries from 2000 to 2012. The approach includes the following three stages: (1) use of a data envelopment analysis (DEA) model with undesirable output to estimate the low carbon economy efficiency and calculate the input and output slacks; (2) use of a stochastic frontier approach to eliminate the impacts of external environment variables on these slacks; (3) re-estimation of the efficiency with adjusted inputs and outputs to reflect the capacity of the government to develop a low carbon economy. The results indicate that low carbon economy efficiency in these countries worsened over the study period. The efficiency scores in the third stage are higher than those in the first stage. Moreover, in general, low carbon economy efficiency in Annex I countries of the United Nations Framework Convention on Climate Change (UNFCCC) is better than that in Non-Annex I countries. However, the gap in average efficiency scores between Annex I and Non-Annex I countries in the first stage is smaller than that in the third stage. This implies that the external environment variables have a greater influence on Non-Annex I countries than on Annex I countries. These external environment variables should be taken into account in transnational negotiations over the responsibility for promoting CO2 reductions. Most importantly, the developed countries (mostly in Annex I) should help the developing countries (mostly in Non-Annex I) to reduce carbon emissions by opening or expanding trade, for example by encouraging the import and export of energy-saving technology and by sharing emission-reduction technology. PMID:27834890

  20. Measurement of Low Carbon Economy Efficiency with a Three-Stage Data Envelopment Analysis: A Comparison of the Largest Twenty CO₂ Emitting Countries.

    PubMed

    Liu, Xiang; Liu, Jia

    2016-11-09

    This paper employs a three-stage approach to estimate low carbon economy efficiency in the largest twenty CO₂ emitting countries from 2000 to 2012. The approach includes the following three stages: (1) use of a data envelopment analysis (DEA) model with undesirable output to estimate the low carbon economy efficiency and calculate the input and output slacks; (2) use of a stochastic frontier approach to eliminate the impacts of external environment variables on these slacks; (3) re-estimation of the efficiency with adjusted inputs and outputs to reflect the capacity of the government to develop a low carbon economy. The results indicate that low carbon economy efficiency in these countries worsened over the study period. The efficiency scores in the third stage are higher than those in the first stage. Moreover, in general, low carbon economy efficiency in Annex I countries of the United Nations Framework Convention on Climate Change (UNFCCC) is better than that in Non-Annex I countries. However, the gap in average efficiency scores between Annex I and Non-Annex I countries in the first stage is smaller than that in the third stage. This implies that the external environment variables have a greater influence on Non-Annex I countries than on Annex I countries. These external environment variables should be taken into account in transnational negotiations over the responsibility for promoting CO₂ reductions. Most importantly, the developed countries (mostly in Annex I) should help the developing countries (mostly in Non-Annex I) to reduce carbon emissions by opening or expanding trade, for example by encouraging the import and export of energy-saving technology and by sharing emission-reduction technology.

  1. Highly loaded multi-stage fan drive turbine - performance of initial seven configurations

    NASA Technical Reports Server (NTRS)

    Wolfmeyer, G. W.; Thomas, M. W.

    1974-01-01

    Experimental results of a three-stage highly loaded fan drive turbine test program are presented. A plain blade turbine, a tandem blade turbine, and a tangentially leaned stator turbine were designed for the same velocity diagram and flowpath. Seven combinations of blade rows were tested to evaluate stage performance and the effects of the tandem blading and the leaned stator. The plain blade turbine's design-point total-to-total efficiency was 0.886. The turbine with the stage-three leaned stator had the same efficiency with an improved exit swirl profile and increased hub reaction. Two-stage group tests showed that the two-stage turbine with the tandem stage-two stator had an efficiency of 0.880, compared to 0.868 for the plain blade two-stage turbine.

  2. A note on the efficiencies of sampling strategies in two-stage Bayesian regional fine mapping of a quantitative trait.

    PubMed

    Chen, Zhijian; Craiu, Radu V; Bull, Shelley B

    2014-11-01

    In focused studies designed to follow up associations detected in a genome-wide association study (GWAS), investigators can proceed to fine-map a genomic region by targeted sequencing or dense genotyping of all variants in the region, aiming to identify a functional sequence variant. For the analysis of a quantitative trait, we consider a Bayesian approach to fine-mapping study design that incorporates stratification according to a promising GWAS tag SNP in the same region. Improved cost-efficiency can be achieved when the fine-mapping phase incorporates a two-stage design, with identification of a smaller set of more promising variants in a subsample taken in stage 1, followed by their evaluation in an independent stage 2 subsample. To avoid the potential negative impact of genetic model misspecification on inference we incorporate genetic model selection based on posterior probabilities for each competing model. Our simulation study shows that, compared to simple random sampling that ignores genetic information from GWAS, tag-SNP-based stratified sample allocation methods reduce the number of variants continuing to stage 2 and are more likely to promote the functional sequence variant into confirmation studies. © 2014 WILEY PERIODICALS, INC.

  3. Inventory slack routing application in emergency logistics and relief distributions.

    PubMed

    Yang, Xianfeng; Hao, Wei; Lu, Yang

    2018-01-01

    Various natural and manmade disasters during the last decades have highlighted the need to further improve governmental preparedness for emergency events, and a relief supply distribution problem called the Inventory Slack Routing Problem (ISRP) has received increasing attention. In an ISRP, inventory slack is defined as the duration between the relief arrival time and the estimated inventory stock-out time. Hence, a larger inventory slack grants more response time in the face of factors (e.g., traffic congestion) that may lead to delivery lateness. In this study, the relief distribution problem is formulated as an optimization model that maximizes the minimum slack among all dispensing sites. To efficiently solve this problem, we propose a two-stage approach to tackle the vehicle routing and relief allocation sub-problems. By analyzing the inter-relations between these two sub-problems, a new objective function considering both delivery durations and dispensing rates of demand sites is applied in the first stage to design the vehicle routes. A hierarchical routing approach and a sweep approach are also proposed in this stage. Given the vehicle routing plan, the relief allocation can be easily solved in the second stage. A numerical experiment with a comparison against the multi-vehicle Traveling Salesman Problem (TSP) demonstrates the need for the ISRP and the capability of the proposed solution approaches.
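    Setting the routing aside, the maximin idea behind the slack objective can be sketched as a water-filling allocation: given each site's current stock and consumption rate (both invented below, and a simplification of the paper's slack definition, which also involves delivery times), distribute a fixed relief quantity so that the earliest stock-out time is as late as possible:

```python
def maximin_stockout(stock, rate, total, iters=60):
    """Allocate `total` relief units across sites to maximize the minimum
    stock-out time t_i = (stock_i + x_i) / rate_i, by bisection on the common
    target time t (a water-filling allocation)."""
    need = lambda t: sum(max(0.0, t * r - s) for s, r in zip(stock, rate))
    lo, hi = 0.0, (sum(stock) + total) / sum(rate)   # t* is at most this
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if need(mid) <= total else (lo, mid)
    alloc = [max(0.0, lo * r - s) for s, r in zip(stock, rate)]
    return lo, alloc
```

With one empty site and one site holding 10 units, both consuming 1 unit per hour, 10 relief units all go to the empty site, equalizing both stock-out times at 10 hours.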

  4. Inventory slack routing application in emergency logistics and relief distributions

    PubMed Central

    Yang, Xianfeng; Lu, Yang

    2018-01-01

    Various natural and manmade disasters during the last decades have highlighted the need to further improve governmental preparedness for emergency events, and a relief supply distribution problem called the Inventory Slack Routing Problem (ISRP) has received increasing attention. In an ISRP, inventory slack is defined as the duration between the relief arrival time and the estimated inventory stock-out time. Hence, a larger inventory slack grants more response time in the face of factors (e.g., traffic congestion) that may lead to delivery lateness. In this study, the relief distribution problem is formulated as an optimization model that maximizes the minimum slack among all dispensing sites. To efficiently solve this problem, we propose a two-stage approach to tackle the vehicle routing and relief allocation sub-problems. By analyzing the inter-relations between these two sub-problems, a new objective function considering both delivery durations and dispensing rates of demand sites is applied in the first stage to design the vehicle routes. A hierarchical routing approach and a sweep approach are also proposed in this stage. Given the vehicle routing plan, the relief allocation can be easily solved in the second stage. A numerical experiment with a comparison against the multi-vehicle Traveling Salesman Problem (TSP) demonstrates the need for the ISRP and the capability of the proposed solution approaches. PMID:29902196

  5. Optimization of storage tank locations in an urban stormwater drainage system using a two-stage approach.

    PubMed

    Wang, Mingming; Sun, Yuanxiang; Sweetapple, Chris

    2017-12-15

    Storage is important for flood mitigation and non-point source pollution control. However, seeking a cost-effective design scheme for storage tanks is complex. This paper presents a two-stage optimization framework to find an optimal scheme for storage tanks using the storm water management model (SWMM). The objectives are to minimize flooding, total suspended solids (TSS) load and storage cost. The framework includes two modules: (i) the analytical module, which evaluates and ranks the flooding nodes with the analytic hierarchy process (AHP) using two indicators (flood depth and flood duration), and then obtains a preliminary scheme by calculating two efficiency indicators (flood reduction efficiency and TSS reduction efficiency); (ii) the iteration module, which obtains an optimal scheme using a generalized pattern search (GPS) method based on the preliminary scheme generated by the analytical module. The proposed approach was applied to a catchment in CZ city, China, to test its capability in choosing design alternatives. Different rainfall scenarios are considered to test its robustness. The results demonstrate that the optimization framework is feasible, and the optimization converges quickly when started from the preliminary scheme. The optimized scheme is better than the preliminary scheme for reducing runoff and pollutant loads at a given storage cost. The multi-objective optimization framework presented in this paper may be useful in finding the best scheme for storage tanks or low impact development (LID) controls. Copyright © 2017 Elsevier Ltd. All rights reserved.
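    In the paper, each candidate tank scheme is scored by SWMM simulations; as a stand-in, a minimal coordinate pattern search (a simplified GPS: poll ± step along each axis, halve the step after a failed poll) on an analytic objective shows the mechanics the iteration module relies on:

```python
def pattern_search(f, x0, step=1.0, tol=1e-4):
    """Minimal generalized-pattern-search sketch: poll +/- step along each
    coordinate, accept any improvement, and halve the step whenever a full
    poll fails to improve the incumbent."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                cand = list(x)
                cand[i] += d
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if not improved:
            step /= 2.0
    return x, fx
```

In the paper's setting, f would invoke an SWMM run on the candidate tank sizes and return the weighted flooding/TSS/cost objective, and x0 would be the analytical module's preliminary scheme; a full GPS implementation also needs a positive spanning poll set and formal step-update rules, omitted here.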

  6. Regression analysis on the variation in efficiency frontiers for prevention stage of HIV/AIDS.

    PubMed

    Kamae, Maki S; Kamae, Isao; Cohen, Joshua T; Neumann, Peter J

    2011-01-01

    To investigate how the cost effectiveness of preventing HIV/AIDS varies across possible efficiency frontiers (EFs) by taking into account potentially relevant external factors, such as prevention stage, and how the EFs can be characterized using regression analysis given the uncertainty of the QALY-cost estimates. We reviewed cost-effectiveness estimates for the prevention and treatment of HIV/AIDS published from 2002 to 2007 and catalogued in the Tufts Medical Center Cost-Effectiveness Analysis (CEA) Registry. We constructed EF curves by plotting QALYs against costs, following methods used by the Institute for Quality and Efficiency in Health Care (IQWiG) in Germany. We stratified the QALY-cost ratios by prevention stage, country of study, and payer perspective, and estimated EF equations using log and square-root models. A total of 53 QALY-cost ratios were identified for HIV/AIDS in the Tufts CEA Registry. Plotted ratios stratified by prevention stage were visually grouped into a cluster consisting of primary/secondary prevention measures and a cluster consisting of tertiary measures. Correlation coefficients for each cluster were statistically significant. For each cluster, we derived two EF equations, one based on the log model and one based on the square-root model. Our findings indicate that stratification of HIV/AIDS interventions by prevention stage can yield distinct EFs, and that the correlation and regression analyses are useful for parametrically characterizing EF equations. Our study has certain limitations, such as the small number of included articles and the potential for study populations to be non-representative of countries of interest. Nonetheless, our approach could help develop a deeper appreciation of cost effectiveness beyond the deterministic approach developed by IQWiG.
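
    Fitting the two EF functional forms reduces to ordinary least squares on a transformed cost axis. The sketch below uses invented QALY-cost pairs, since the Registry data are not reproduced here.

```python
import numpy as np

# Hypothetical QALY-cost pairs for one cluster (illustrative only).
cost = np.array([1e3, 5e3, 1e4, 5e4, 1e5])
qaly = np.array([0.5, 1.2, 1.5, 2.2, 2.5])

def fit_log(cost, qaly):
    """Efficiency-frontier equation QALY = a + b * ln(cost)."""
    A = np.column_stack([np.ones_like(cost), np.log(cost)])
    (a, b), *_ = np.linalg.lstsq(A, qaly, rcond=None)
    return a, b

def fit_sqrt(cost, qaly):
    """Efficiency-frontier equation QALY = a + b * sqrt(cost)."""
    A = np.column_stack([np.ones_like(cost), np.sqrt(cost)])
    (a, b), *_ = np.linalg.lstsq(A, qaly, rcond=None)
    return a, b
```

    Comparing residuals of the two fits per cluster is one simple way to choose between the functional forms.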

  7. Design and overall performance of four highly loaded, high speed inlet stages for an advanced high-pressure-ratio core compressor

    NASA Technical Reports Server (NTRS)

    Reid, L.; Moore, R. D.

    1978-01-01

    The detailed design and overall performance of four inlet stages for an advanced core compressor are presented. These four stages represent two levels of design total pressure ratio (1.82 and 2.05), two levels of rotor aspect ratio (1.19 and 1.63), and two levels of stator aspect ratio (1.26 and 1.78). The individual stages were tested over the stable operating flow range at 70, 90, and 100 percent of design speed. The performance of the low aspect ratio configurations was substantially better than that of the high aspect ratio configurations. The two low aspect ratio configurations achieved peak rotor efficiencies of 0.876 and 0.872 and corresponding stage efficiencies of 0.845 and 0.840. The high aspect ratio configurations achieved peak rotor efficiencies of 0.851 and 0.849 and corresponding stage efficiencies of 0.821 and 0.831.

  8. An adaptive Gaussian process-based method for efficient Bayesian experimental design in groundwater contaminant source identification problems: ADAPTIVE GAUSSIAN PROCESS-BASED INVERSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Zeng, Lingzao

    Surrogate models are commonly used in Bayesian approaches such as Markov chain Monte Carlo (MCMC) to avoid repetitive CPU-demanding model evaluations. However, the approximation error of a surrogate may lead to biased estimations of the posterior distribution. This bias can be corrected by constructing a very accurate surrogate or by implementing MCMC in a two-stage manner. Since two-stage MCMC requires extra original model evaluations, the computational cost is still high. If the measurement information is incorporated, a locally accurate approximation of the original model can be adaptively constructed at low computational cost. Based on this idea, we propose a Gaussian process (GP) surrogate-based Bayesian experimental design and parameter estimation approach for groundwater contaminant source identification problems. A major advantage of the GP surrogate is that it provides a convenient estimate of the approximation error, which can be incorporated in the Bayesian formula to avoid over-confident estimation of the posterior distribution. The proposed approach is tested with a numerical case study. Without sacrificing estimation accuracy, the new approach achieves a speed-up of about 200 times compared to our previous work using two-stage MCMC.
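
    The core idea, adding the surrogate's own error estimate to the likelihood variance, can be sketched with a tiny hand-rolled RBF-kernel GP (illustrative only, not the authors' code; hyperparameters are fixed rather than learned):

```python
import numpy as np

def gp_fit_predict(X, y, Xs, length=1.0, sigma_f=1.0, noise=1e-6):
    """Tiny RBF-kernel GP regression returning predictive mean and variance."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks, Kss = k(X, Xs), k(Xs, Xs)
    alpha = np.linalg.solve(K, y)
    mean = Ks.T @ alpha
    var = np.diag(Kss - Ks.T @ np.linalg.solve(K, Ks))
    return mean, np.maximum(var, 0.0)

def log_likelihood(obs, mean, gp_var, obs_sigma):
    """Gaussian log-likelihood with the surrogate's approximation error added
    to the measurement variance, so poorly approximated regions do not
    produce over-confident posteriors."""
    total_var = obs_sigma ** 2 + gp_var
    return -0.5 * np.sum((obs - mean) ** 2 / total_var
                         + np.log(2 * np.pi * total_var))
```

    Far from the training designs the predictive variance approaches the prior variance, automatically flattening the likelihood there.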

  9. Efficient video-equipped fire detection approach for automatic fire alarm systems

    NASA Astrophysics Data System (ADS)

    Kang, Myeongsu; Tung, Truong Xuan; Kim, Jong-Myon

    2013-01-01

    This paper proposes an efficient four-stage approach that automatically detects fire using video capabilities. In the first stage, an approximate median method is used to detect video frame regions involving motion. In the second stage, a fuzzy c-means-based clustering algorithm is employed to extract candidate fire regions from all of the movement-containing regions. In the third stage, a gray-level co-occurrence matrix is used to extract texture parameters by tracking red-colored objects in the candidate regions. These texture features are subsequently used as inputs to a back-propagation neural network to distinguish between fire and non-fire. Experimental results indicate that the proposed four-stage approach outperforms other fire detection algorithms in terms of consistently increasing the accuracy of fire detection in both indoor and outdoor test videos.
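
    The first stage's approximate median method is a one-step-per-frame update. The sketch below, with an invented threshold value, shows the background update and a simple motion mask derived from it:

```python
import numpy as np

def update_background(background, frame, step=1.0):
    """Approximate median background update: nudge each background pixel one
    step toward the current frame, which converges to the per-pixel
    temporal median of the video."""
    return background + step * np.sign(frame - background)

def motion_mask(background, frame, threshold=25):
    """Pixels differing from the background by more than a threshold are
    flagged as motion candidates (threshold is illustrative)."""
    return np.abs(frame.astype(float) - background) > threshold
```

    The update costs one comparison and one addition per pixel, which is why it suits the always-on first stage of the pipeline.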

  10. Determinants of efficiency in the provision of municipal street-cleaning and refuse collection services

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benito-Lopez, Bernardino, E-mail: benitobl@um.es; Rocio Moreno-Enguix, Maria del, E-mail: mrmoreno@um.es; Solana-Ibanez, Jose, E-mail: jsolana@um.es

    Effective waste management systems can make critical contributions to public health, environmental sustainability and economic development. The challenge affects every person and institution in society, and measures cannot be undertaken without data collection and a quantitative analysis approach. In this paper, the two-stage double bootstrap procedure of Simar and Wilson (2007) is used to estimate the efficiency determinants of Spanish local entities in the provision of public street-cleaning and refuse collection services. The purpose is to identify factors that influence efficiency. The final sample comprised 1072 municipalities. In the first stage, robust efficiency estimates are obtained with Data Envelopment Analysis (DEA). We apply the second stage, based on a truncated regression, to estimate the effect of a group of environmental factors on the DEA estimates. The results show the existence of a significant relation between efficiency and all the variables analysed (per capita income, urban population density, the comparative index of the importance of tourism and that of the whole economic activity). We have also considered the influence of a dummy categorical variable - the political sign of the governing party - on the efficient provision of the services under study. The results from the methodology proposed show that municipalities governed by progressive parties are more efficient.

  11. Determinants of efficiency in the provision of municipal street-cleaning and refuse collection services.

    PubMed

    Benito-López, Bernardino; Moreno-Enguix, María del Rocio; Solana-Ibañez, José

    2011-06-01

    Effective waste management systems can make critical contributions to public health, environmental sustainability and economic development. The challenge affects every person and institution in society, and measures cannot be undertaken without data collection and a quantitative analysis approach. In this paper, the two-stage double bootstrap procedure of Simar and Wilson (2007) is used to estimate the efficiency determinants of Spanish local entities in the provision of public street-cleaning and refuse collection services. The purpose is to identify factors that influence efficiency. The final sample comprised 1072 municipalities. In the first stage, robust efficiency estimates are obtained with Data Envelopment Analysis (DEA). We apply the second stage, based on a truncated-regression, to estimate the effect of a group of environmental factors on DEA estimates. The results show the existence of a significant relation between efficiency and all the variables analysed (per capita income, urban population density, the comparative index of the importance of tourism and that of the whole economic activity). We have also considered the influence of a dummy categorical variable - the political sign of the governing party - on the efficient provision of the services under study. The results from the methodology proposed show that municipalities governed by progressive parties are more efficient. Copyright © 2011 Elsevier Ltd. All rights reserved.
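
    The first-stage DEA step can be illustrated with a small input-oriented CCR model solved as a linear program. This is a generic textbook formulation, not the paper's exact model, and the Simar-Wilson bootstrap and truncated regression are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores.
    X is (n_dmus, n_inputs), Y is (n_dmus, n_outputs).  For each DMU o:
        min theta  s.t.  X^T lam <= theta * x_o,  Y^T lam >= y_o,  lam >= 0
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # decision variables z = [theta, lam_1 .. lam_n]
        c = np.r_[1.0, np.zeros(n)]
        # inputs:  sum_j lam_j x_ji - theta * x_oi <= 0
        A_in = np.c_[-X[o][:, None], X.T]
        # outputs: y_or - sum_j lam_j y_jr <= 0
        A_out = np.c_[np.zeros((s, 1)), -Y.T]
        b = np.r_[np.zeros(m), -Y[o]]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)
```

    A score of 1 marks an efficient municipality; scores below 1 measure how far inputs could be contracted at the observed output level.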

  12. Efficient Fingercode Classification

    NASA Astrophysics Data System (ADS)

    Sun, Hong-Wei; Law, Kwok-Yan; Gollmann, Dieter; Chung, Siu-Leung; Li, Jian-Bin; Sun, Jia-Guang

    In this paper, we present an efficient fingerprint classification algorithm which is an essential component in many critical security application systems, e.g., systems in the e-government and e-finance domains. Fingerprint identification is one of the most important security requirements in homeland security systems such as personnel screening and anti-money laundering. The problem of fingerprint identification involves searching (matching) the fingerprint of a person against the fingerprints of all registered persons. To enhance performance and reliability, a common approach is to reduce the search space by first classifying the fingerprints and then performing the search in the respective class. Jain et al. proposed a fingerprint classification algorithm based on a two-stage classifier, which uses a K-nearest neighbor classifier in its first stage. Their algorithm is based on the fingercode representation, an encoding of fingerprints that has been demonstrated to be an effective fingerprint biometric scheme because of its ability to capture both local and global details in a fingerprint image. We enhance this approach by improving the efficiency of the K-nearest neighbor classifier for fingercode-based fingerprint classification. Our research first investigates the various fast search algorithms in vector quantization (VQ) and their potential application in fingerprint classification, and then proposes two efficient algorithms based on the pyramid-based search algorithms in VQ. Experimental results on DB1 of FVC 2004 demonstrate that our algorithms can outperform the full search algorithm and the original pyramid-based search algorithms in terms of computational efficiency without sacrificing accuracy.
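
    The flavor of fast VQ-style search is early abandonment of the distance computation. The sketch below contrasts a partial-distance search with a full search; it is a simpler relative of the pyramid-based algorithms the paper proposes, shown only to illustrate the principle.

```python
import numpy as np

def nearest_full(db, query):
    """Full search: squared distance to every stored fingercode."""
    d = ((db - query) ** 2).sum(axis=1)
    return int(np.argmin(d)), float(d.min())

def nearest_partial(db, query):
    """Partial-distance search: accumulate the squared distance dimension by
    dimension and abandon a candidate as soon as the running sum exceeds
    the best distance found so far."""
    best_i, best_d = 0, float("inf")
    for i, vec in enumerate(db):
        acc = 0.0
        for a, b in zip(vec, query):
            acc += (a - b) ** 2
            if acc >= best_d:          # early termination: cannot win
                break
        else:
            best_i, best_d = i, acc
    return best_i, best_d
```

    Both searches return the same nearest neighbor; the partial-distance version simply skips most of the arithmetic on losing candidates.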

  13. A combined MOIP-MCDA approach to building and screening atmospheric pollution control strategies in urban regions.

    PubMed

    Mavrotas, George; Ziomas, Ioannis C; Diakouaki, Danae

    2006-07-01

    This article presents a methodological approach for the formulation of control strategies capable of reducing atmospheric pollution at the standards set by European legislation. The approach was implemented in the greater area of Thessaloniki and was part of a project aiming at the compliance with air quality standards in five major cities in Greece. The methodological approach comprises two stages: in the first stage, the availability of several measures contributing to a certain extent to reducing atmospheric pollution indicates a combinatorial problem and favors the use of Integer Programming. More specifically, Multiple Objective Integer Programming is used in order to generate alternative efficient combinations of the available policy measures on the basis of two conflicting objectives: public expenditure minimization and social acceptance maximization. In the second stage, these combinations of control measures (i.e., the control strategies) are then comparatively evaluated with respect to a wider set of criteria, using tools from Multiple Criteria Decision Analysis, namely, the well-known PROMETHEE method. The whole procedure is based on the active involvement of local and central authorities in order to incorporate their concerns and preferences, as well as to secure the adoption and implementation of the resulting solution.
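
    For a handful of measures, the first-stage MOIP step can be mimicked by brute-force enumeration of Pareto-efficient combinations under an emission-reduction constraint. All measure names and values below are invented for illustration; the study used a proper Multiple Objective Integer Programming model.

```python
from itertools import combinations

# Hypothetical measures: (name, cost, social acceptance, emission reduction).
MEASURES = [
    ("bus-fleet renewal",   30, 8, 12),
    ("industrial filters",  50, 5, 25),
    ("traffic restriction", 10, 2, 15),
    ("fuel switching",      40, 6, 20),
]

def efficient_combinations(measures, required_reduction):
    """Enumerate feasible measure combinations and keep those that are
    Pareto-efficient for (minimize cost, maximize acceptance).  Brute force
    is only viable for small measure sets."""
    feasible = []
    for r in range(1, len(measures) + 1):
        for combo in combinations(measures, r):
            if sum(m[3] for m in combo) >= required_reduction:
                cost = sum(m[1] for m in combo)
                accept = sum(m[2] for m in combo)
                feasible.append((cost, accept, tuple(m[0] for m in combo)))
    # keep combinations not dominated in both objectives by another one
    efficient = [f for f in feasible
                 if not any(g[0] <= f[0] and g[1] >= f[1] and g != f
                            for g in feasible)]
    return sorted(efficient)
```

    The surviving strategies are exactly the candidates that would be passed on to the second-stage MCDA (PROMETHEE) evaluation.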

  14. A Combined MOIP-MCDA Approach to Building and Screening Atmospheric Pollution Control Strategies in Urban Regions

    NASA Astrophysics Data System (ADS)

    Mavrotas, George; Ziomas, Ioannis C.; Diakouaki, Danae

    2006-07-01

    This article presents a methodological approach for the formulation of control strategies capable of reducing atmospheric pollution at the standards set by European legislation. The approach was implemented in the greater area of Thessaloniki and was part of a project aiming at the compliance with air quality standards in five major cities in Greece. The methodological approach comprises two stages: in the first stage, the availability of several measures contributing to a certain extent to reducing atmospheric pollution indicates a combinatorial problem and favors the use of Integer Programming. More specifically, Multiple Objective Integer Programming is used in order to generate alternative efficient combinations of the available policy measures on the basis of two conflicting objectives: public expenditure minimization and social acceptance maximization. In the second stage, these combinations of control measures (i.e., the control strategies) are then comparatively evaluated with respect to a wider set of criteria, using tools from Multiple Criteria Decision Analysis, namely, the well-known PROMETHEE method. The whole procedure is based on the active involvement of local and central authorities in order to incorporate their concerns and preferences, as well as to secure the adoption and implementation of the resulting solution.

  15. A feasibility study of stateful automaton packet inspection for streaming application detection systems

    NASA Astrophysics Data System (ADS)

    Tseng, Kuo-Kun; Lo, Jiao; Liu, Yiming; Chang, Shih-Hao; Merabti, Madjid; Ng, Felix, C. K.; Wu, C. H.

    2017-10-01

    The rapid development of the internet has brought huge benefits and social impacts; however, internet security has also become a great problem for users, since traditional approaches to packet classification cannot achieve satisfactory detection performance due to their low accuracy and efficiency. In this paper, a new stateful packet inspection method is introduced, which can be embedded in the network gateway and used by a streaming application detection system. This new detection method leverages the inexact automaton approach, using part of the header field and part of the application layer data of a packet. Based on this approach, an advanced detection system is proposed for streaming applications. The workflow of the system involves two stages: the training stage and the detection stage. In the training stage, the system initially captures characteristic patterns from a set of application packet flows. After this training is completed, the detection stage allows the user to detect the target application by capturing new application flows. This new detection approach is also evaluated using experimental analysis; the results of this analysis show that this new approach not only simplifies the management of the state detection system, but also improves the accuracy of data flow detection, making it feasible for real-world network applications.

  16. The video watermarking container: efficient real-time transaction watermarking

    NASA Astrophysics Data System (ADS)

    Wolf, Patrick; Hauer, Enrico; Steinebach, Martin

    2008-02-01

    When transaction watermarking is used to secure sales in online shops by embedding transaction-specific watermarks, the major challenge is embedding efficiency: maximum speed with minimal workload. This is true for all types of media, but video transaction watermarking presents a double challenge. Video files are not only larger than, for example, music files of the same playback time; video watermarking algorithms also have a higher complexity than algorithms for other types of media. Therefore, online shops that want to protect their videos by transaction watermarking are faced with the problem that their servers must work harder and longer for every sold medium in comparison to audio sales. In the past, many algorithms responded to this challenge by reducing their complexity, but this usually results in a loss of either robustness or transparency. This paper presents a different approach. The container technology separates watermark embedding into two stages: a preparation stage and a finalization stage. In the preparation stage, the video is divided into embedding segments; for each segment, one copy marked with "0" and another marked with "1" is created. This stage is computationally expensive but only needs to be done once. In the finalization stage, the watermarked video is assembled from the embedding segments according to the watermark message. This stage is very fast and involves no complex computations. It thus allows efficient creation of individually watermarked video files.
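
    The two stages can be sketched in a few lines: precompute both marked copies of every segment once, then assemble each sold copy by cheap selection. The embed function below is a hypothetical stand-in for a real watermark embedder.

```python
def prepare(video_segments, embed):
    """Preparation stage: for every segment, precompute one copy carrying
    bit 0 and one carrying bit 1 (the expensive step, done once)."""
    return [{bit: embed(seg, bit) for bit in (0, 1)} for seg in video_segments]

def finalize(container, message_bits):
    """Finalization stage: assemble the transaction-specific video by picking
    the pre-marked copy for each segment -- cheap concatenation, with no
    embedding computation per sale."""
    return b"".join(container[i][bit] for i, bit in enumerate(message_bits))
```

    Per sale, the server only selects and concatenates segments, which is why the container scales to many simultaneous transactions.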

  17. An unsupervised two-stage clustering approach for forest structure classification based on X-band InSAR data - A case study in complex temperate forest stands

    NASA Astrophysics Data System (ADS)

    Abdullahi, Sahra; Schardt, Mathias; Pretzsch, Hans

    2017-05-01

    Forest structure at stand level plays a key role in sustainable forest management, since the biodiversity, productivity, growth and stability of the forest can be positively influenced by managing its structural diversity. In contrast to field-based measurements, remote sensing techniques offer a cost-efficient opportunity to collect area-wide information about forest stand structure with high spatial and temporal resolution. Interferometric Synthetic Aperture Radar (InSAR) in particular, which facilitates worldwide acquisition of 3D information independent of weather conditions and illumination, is convenient for capturing forest stand structure. This study proposes an unsupervised two-stage clustering approach for forest structure classification based on height information derived from interferometric X-band SAR data, applied to complex temperate forest stands of the Traunstein forest (southern Germany). In particular, a four-dimensional input data set composed of first-order height statistics was non-linearly projected onto a two-dimensional Self-Organizing Map, spatially ordered according to similarity (based on the Euclidean distance) in the first stage, and classified using the k-means algorithm in the second stage. The study demonstrated that X-band InSAR data exhibit considerable capabilities for forest structure classification. Moreover, the unsupervised classification approach achieved meaningful and reasonable results by comparison with aerial imagery and LiDAR data.

  18. The effects of health information technology adoption and hospital-physician integration on hospital efficiency.

    PubMed

    Cho, Na-Eun; Chang, Jongwha; Atems, Bebonchu

    2014-11-01

    To determine the impact of health information technology (HIT) adoption and hospital-physician integration on hospital efficiency. Using 2010 data from the American Hospital Association's (AHA) annual survey, the AHA IT survey, supplemented by the CMS Case Mix Index, and the US Census Bureau's small area income and poverty estimates, we examined how the adoption of HIT and employment of physicians affected hospital efficiency and whether they were substitutes or complements. The sample included 2173 hospitals. We employed a 2-stage approach. In the first stage, data envelopment analysis was used to estimate technical efficiency of hospitals. In the second stage, we used instrumental variable approaches, notably 2-stage least squares and the generalized method of moments, to examine the effects of IT adoption and integration on hospital efficiency. We found that HIT adoption and hospital-physician integration, when considered separately, each have statistically significant positive impacts on hospital efficiency. Also, we found that hospitals that adopted HIT with employed physicians will achieve less efficiency compared with hospitals that adopted HIT without employed physicians. Although HIT adoption and hospital-physician integration both seem to be key parts of improving hospital efficiency when one or the other is utilized individually, they can hurt hospital efficiency when utilized together.

  19. Interactive two-stage stochastic fuzzy programming for water resources management.

    PubMed

    Wang, S; Huang, G H

    2011-08-01

    In this study, an interactive two-stage stochastic fuzzy programming (ITSFP) approach has been developed through incorporating an interactive fuzzy resolution (IFR) method within an inexact two-stage stochastic programming (ITSP) framework. ITSFP can not only tackle dual uncertainties presented as fuzzy boundary intervals that exist in the objective function and the left- and right-hand sides of constraints, but also permit in-depth analyses of various policy scenarios that are associated with different levels of economic penalties when the promised policy targets are violated. A management problem in terms of water resources allocation has been studied to illustrate applicability of the proposed approach. The results indicate that a set of solutions under different feasibility degrees has been generated for planning the water resources allocation. They can help the decision makers (DMs) to conduct in-depth analyses of tradeoffs between economic efficiency and constraint-violation risk, as well as enable them to identify, in an interactive way, a desired compromise between satisfaction degree of the goal and feasibility of the constraints (i.e., risk of constraint violation). Copyright © 2011 Elsevier Ltd. All rights reserved.
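
    A drastically simplified relative of the two-stage model (no fuzzy boundary intervals and a single crisp scenario set) can be written as a small linear program: commit to an allocation target before the random water supply is known, then penalize scenario shortages. All numbers below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def two_stage_allocation(benefit, penalty, supplies, probs, x_max):
    """Toy two-stage stochastic water-allocation LP.  Choose a promised
    allocation x before the random supply w_s is known; scenario shortage
    y_s >= x - w_s is penalized:
        max  benefit * x - sum_s p_s * penalty * y_s
    """
    S = len(supplies)
    # variables z = [x, y_1 .. y_S]; linprog minimizes, so negate the benefit
    c = np.r_[-benefit, penalty * np.asarray(probs)]
    # shortage constraints: x - y_s <= w_s
    A = np.c_[np.ones(S), -np.eye(S)]
    res = linprog(c, A_ub=A, b_ub=supplies,
                  bounds=[(0, x_max)] + [(0, None)] * S)
    return res.x[0], -res.fun
```

    With a penalty larger than the benefit, the optimal target stops rising once the expected marginal shortage penalty outweighs the marginal benefit, the basic trade-off the ITSFP scenarios explore.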

  20. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques developed in the field of computational chemistry to problems involving large and flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the next local minimum, is reported. Its efficiency as a metaheuristic approach has also been compared with Gradient Tabu Search and with the Gravitational Search, Cuckoo Search, and Back Tracking Search algorithms for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems, namely finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.
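
    A compact sketch of plain gravitational search, the baseline that the GGS variant augments with analytical gradient descents, is given below. Masses come from fitness, agents attract each other, and the gravitational constant decays over time; parameter choices here are illustrative, not the article's.

```python
import numpy as np

def gsa(f, dim=2, n_agents=30, iters=300, lo=-5.0, hi=5.0, g0=100.0, seed=0):
    """Minimal gravitational search algorithm (GSA) for minimizing f."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_agents, dim))
    V = np.zeros_like(X)
    best_x, best_f = None, np.inf
    for t in range(iters):
        fit = np.array([f(x) for x in X])
        i = int(np.argmin(fit))
        if fit[i] < best_f:
            best_f, best_x = fit[i], X[i].copy()
        worst, best = fit.max(), fit.min()
        m = (worst - fit) / (worst - best + 1e-12)   # better fitness -> bigger mass
        M = m / (m.sum() + 1e-12)
        G = g0 * np.exp(-20.0 * t / iters)           # decaying gravitational constant
        acc = np.zeros_like(X)
        for j in range(n_agents):
            diff = X - X[j]
            dist = np.linalg.norm(diff, axis=1) + 1e-12
            acc[j] = (G * rng.random(n_agents) * M / dist) @ diff
        V = rng.random(X.shape) * V + acc
        X = np.clip(X + V, lo, hi)
    return best_x, best_f
```

    The GGS idea is to interleave such global moves with analytical-gradient descents to the nearest local minimum, combining exploration with fast local refinement.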

  1. NGAS Multi-Stage Coaxial High Efficiency Cooler (HEC)

    NASA Astrophysics Data System (ADS)

    Nguyen, T.; Toma, G.; Jaco, C.; Raab, J.

    2010-04-01

    This paper presents the performance data of the single and two-stage High Efficiency Cooler (HEC) tested with coaxial cold heads. The single stage coaxial cold head has been optimized to operate at temperatures of 40 K and above. The two-stage parallel cold head configuration has been optimized to operate at 30 K and above and provides a long-life, low mass and efficient two-stage version of the Northrop Grumman Aerospace Systems (NGAS) flight qualified single stage HEC cooler. The HEC pulse tube cryocoolers are the latest generation of flight coolers with heritage to the 12 Northrop Grumman Aerospace Systems (NGAS) coolers currently on orbit with 2 operating for more than 11.5 years. This paper presents the performance data of the one and two-stage versions of this cooler under a wide range of heat rejection temperature, cold head temperature and input power.

  2. An efficient two-stage approach for image-based FSI analysis of atherosclerotic arteries

    PubMed Central

    Rayz, Vitaliy L.; Mofrad, Mohammad R. K.; Saloner, David

    2010-01-01

    Patient-specific biomechanical modeling of atherosclerotic arteries has the potential to aid clinicians in characterizing lesions and determining optimal treatment plans. To attain high levels of accuracy, recent models use medical imaging data to determine plaque component boundaries in three dimensions, and fluid–structure interaction is used to capture mechanical loading of the diseased vessel. As the plaque components and vessel wall are often highly complex in shape, constructing a suitable structured computational mesh is very challenging and can require a great deal of time. Models based on unstructured computational meshes require relatively less time to construct and are capable of accurately representing plaque components in three dimensions. These models unfortunately require additional computational resources and computing time for accurate and meaningful results. A two-stage modeling strategy based on unstructured computational meshes is proposed to achieve a reasonable balance between meshing difficulty and computational resource and time demand. In this method, a coarse-grained simulation of the full arterial domain is used to guide and constrain a fine-scale simulation of a smaller region of interest within the full domain. Results for a patient-specific carotid bifurcation model demonstrate that the two-stage approach can afford a large savings in both time for mesh generation and time and resources needed for computation. The effects of solid and fluid domain truncation were explored, and were shown to minimally affect accuracy of the stress fields predicted with the two-stage approach. PMID:19756798

  3. The Artificial Neural Networks Based on Scalarization Method for a Class of Bilevel Biobjective Programming Problem

    PubMed Central

    Chen, Zhong; Liu, June; Li, Xiong

    2017-01-01

    A two-stage artificial neural network (ANN) based on scalarization method is proposed for bilevel biobjective programming problem (BLBOP). The induced set of the BLBOP is firstly expressed as the set of minimal solutions of a biobjective optimization problem by using scalar approach, and then the whole efficient set of the BLBOP is derived by the proposed two-stage ANN for exploring the induced set. In order to illustrate the proposed method, seven numerical examples are tested and compared with results in the classical literature. Finally, a practical problem is solved by the proposed algorithm. PMID:29312446

  4. Preliminary Design Optimization For A Supersonic Turbine For Rocket Propulsion

    NASA Technical Reports Server (NTRS)

    Papila, Nilay; Shyy, Wei; Griffin, Lisa; Huber, Frank; Tran, Ken; McConnaughey, Helen (Technical Monitor)

    2000-01-01

    In this study, we present a method for optimizing, at the preliminary design level, a supersonic turbine for rocket propulsion system applications. Single-, two- and three-stage turbines are considered, with the number of design variables increasing from 6 to 11 and then to 15, in accordance with the number of stages. Due to its global nature and flexibility in handling different types of information, the response surface methodology (RSM) is applied in the present study. A major goal of the present optimization effort is to balance the desire to maximize aerodynamic performance against that of minimizing weight. To ascertain the required predictive capability of the RSM, a two-level domain refinement approach has been adopted. The accuracy of the predicted optimal design points based on this strategy is shown to be satisfactory. Our investigation indicates that the efficiency rises quickly from one stage to two stages but that the increase is much less pronounced with three stages. A one-stage turbine performs poorly under the engine balance boundary condition: a portion of the fluid kinetic energy is lost at the turbine discharge of the one-stage design due to the high stage pressure ratio and the high energy content, mostly hydrogen, of the working fluid. Regarding the optimization technique, issues related to the design of experiments (DOE) have also been investigated. It is demonstrated that the criteria for selecting the database have a significant impact on the efficiency and effectiveness of the construction of the response surface.
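
    The RSM core, fitting a full quadratic surface to sampled designs by least squares, can be sketched as follows. This is the generic construction; the study's actual surrogates, variable counts and DOE criteria are more elaborate.

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Fit a full quadratic response surface
        y ~ b0 + sum_i b_i x_i + sum_{i<=j} b_ij x_i x_j
    by least squares and return a prediction function."""
    X = np.asarray(X, float)
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict(x):
        x = np.asarray(x, float)
        feats = [1.0] + list(x) + [x[i] * x[j]
                                   for i in range(d) for j in range(i, d)]
        return float(np.dot(coef, feats))

    return predict
```

    Once fitted, the cheap surrogate stands in for expensive aerodynamic evaluations during the efficiency-versus-weight trade studies.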

  5. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity, including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. In fact, the compression method used is decomposed into two stages: geometric encoding and topological encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, the geometric encoding is performed on these two components. The obtained coarse mesh is marked using a robust mesh watermarking scheme. This insertion into the coarse mesh yields high robustness to several attacks. Finally, the topological encoding is applied to the marked coarse mesh to obtain the compressed mesh. The combination of compression and watermarking permits detection of the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred with the minimum size. The experiments and evaluations show that the proposed approach achieves efficient results in terms of compression gain, invisibility and robustness of the signature against many attacks.

  6. Structure design of and experimental research on a two-stage laval foam breaker for foam fluid recycling.

    PubMed

    Wang, Jin-song; Cao, Pin-lu; Yin, Kun

    2015-07-01

    Environmentally friendly, economical and efficient antifoaming technology is the basis for achieving foam drilling fluid recycling. The present study designed a novel two-stage Laval mechanical foam breaker that primarily uses the vacuum generated by the Coanda effect and the Laval principle to break foam. Numerical simulation results showed that the magnitude and distribution of the negative pressure in the two-stage Laval foam breaker were larger than those in a normal foam breaker. Experimental results showed that the foam-breaking efficiency of the two-stage Laval foam breaker was higher than that of the normal foam breaker as the gas-to-liquid ratio and liquid flow rate were varied. The foam-breaking efficiency of the normal foam breaker decreased rapidly with increasing foam stability, whereas that of the two-stage Laval foam breaker remained unchanged. The foam base fluid can be recycled using the two-stage Laval foam breaker, which would sharply reduce foam drilling costs and the waste disposal that adversely affects the environment.

  7. A hierarchical approach for online temporal lobe seizure detection in long-term intracranial EEG recordings

    NASA Astrophysics Data System (ADS)

    Liang, Sheng-Fu; Chen, Yi-Chun; Wang, Yu-Lin; Chen, Pin-Tzu; Yang, Chia-Hsiang; Chiueh, Herming

    2013-08-01

    Objective. Around 1% of the world's population is affected by epilepsy, and nearly 25% of patients cannot be treated effectively by available therapies. The presence of closed-loop seizure-triggered stimulation provides a promising solution for these patients. Realization of fast, accurate, and energy-efficient seizure detection is the key to such implants. In this study, we propose a two-stage on-line seizure detection algorithm with low energy consumption for temporal lobe epilepsy (TLE). Approach. Multi-channel signals are processed through independent component analysis and the most representative independent component (IC) is automatically selected to eliminate artifacts. Seizure-like intracranial electroencephalogram (iEEG) segments are detected quickly in the first stage of the proposed method and these seizures are confirmed in the second stage. The conditional activation of the second-stage signal processing reduces the computational effort, and hence energy, since most of the non-seizure events are filtered out in the first stage. Main results. Long-term iEEG recordings of 11 patients who suffered from TLE were analyzed via leave-one-out cross validation. The proposed method has a detection accuracy of 95.24%, a false alarm rate of 0.09/h, and an average detection delay time of 9.2 s. For the six patients with mesial TLE, a detection accuracy of 100.0%, a false alarm rate of 0.06/h, and an average detection delay time of 4.8 s can be achieved. The hierarchical approach provides a 90% energy reduction, yielding effective and energy-efficient implementation for real-time epileptic seizure detection. Significance. An on-line seizure detection method that can be applied to monitor continuous iEEG signals of patients who suffered from TLE was developed. An IC selection strategy to automatically determine the most seizure-related IC for seizure detection was also proposed. The system has advantages of (1) high detection accuracy, (2) low false alarm, (3) short detection latency, and (4) energy-efficient design for hardware implementation.

  8. A Two-Stage Kalman Filter Approach for Robust and Real-Time Power System State Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jinghe; Welch, Greg; Bishop, Gary

    2014-04-01

    As electricity demand continues to grow and renewable energy increases its penetration in the power grid, real-time state estimation becomes essential for system monitoring and control. Recent developments in phasor technology make it possible with high-speed time-synchronized data provided by Phasor Measurement Units (PMUs). In this paper we present a two-stage Kalman filter approach to estimate the static state of voltage magnitudes and phase angles, as well as the dynamic state of generator rotor angles and speeds. Kalman filters achieve optimal performance only when the system noise characteristics have known statistical properties (zero-mean, Gaussian, and spectrally white). However, in practice the process and measurement noise models are usually difficult to obtain. Thus we have developed the Adaptive Kalman Filter with Inflatable Noise Variances (AKF with InNoVa), an algorithm that can efficiently identify and reduce the impact of incorrect system modeling and/or erroneous measurements. In stage one, we estimate the static state from raw PMU measurements using the AKF with InNoVa; then in stage two, the estimated static state is fed into an extended Kalman filter to estimate the dynamic state. Simulations demonstrate its robustness to sudden changes of system dynamics and erroneous measurements.
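    The variance-inflation idea can be sketched in one dimension (an illustrative toy, not the paper's AKF-with-InNoVa implementation; the random-walk model, the gate threshold, and all parameter values are assumptions):

```python
import numpy as np

def akf_inflate(zs, q=1e-4, r=0.01, gate=9.0):
    """Scalar Kalman filter that inflates the measurement-noise
    variance R whenever the normalized innovation is implausibly
    large, down-weighting suspect measurements (toy sketch)."""
    x, p = zs[0], 1.0            # state estimate and its variance
    out = []
    for z in zs[1:]:
        p += q                   # predict (random-walk process model)
        nu = z - x               # innovation
        s = p + r
        if nu * nu / s > gate:   # gated: inflate R so the innovation
            r_eff = nu * nu / gate - p  # becomes just plausible
        else:
            r_eff = r
        k = p / (p + r_eff)      # Kalman gain with (possibly) inflated R
        x += k * nu
        p *= (1 - k)
        out.append(x)
    return np.array(out)

true = np.ones(200)
zs = true + 0.1 * np.random.default_rng(0).normal(size=200)
zs[100] += 5.0                   # one gross measurement error
est = akf_inflate(zs)            # estimate barely reacts to the outlier
```

The same gating idea carries over to the multivariate PMU setting, where the innovation covariance replaces the scalar s.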

  9. Factors limiting device efficiency in organic photovoltaics.

    PubMed

    Janssen, René A J; Nelson, Jenny

    2013-04-04

    The power conversion efficiency of the most efficient organic photovoltaic (OPV) cells has recently increased to over 10%. To enable further increases, the factors limiting the device efficiency in OPV must be identified. In this review, the operational mechanism of OPV cells is explained and the detailed balance limit to photovoltaic energy conversion, as developed by Shockley and Queisser, is outlined. The various approaches that have been developed to estimate the maximum practically achievable efficiency in OPV are then discussed, based on empirical knowledge of organic semiconductor materials. Subsequently, thermodynamic and kinetic approaches are described that adapt the detailed balance theory to incorporate some of the fundamentally different processes in organic solar cells, which originate from using a combination of two complementary (donor and acceptor) organic semiconductors. The more empirical formulations of the efficiency limits provide estimates of 10-12%, but the more fundamental descriptions suggest limits of 20-24% to be reachable in single junctions, similar to the highest efficiencies obtained for crystalline silicon p-n junction solar cells. Closing this gap sets the stage for future materials research and development of OPV. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Performance of two-stage fan having low-aspect-ratio first-stage rotor blading

    NASA Technical Reports Server (NTRS)

    Urasek, D. C.; Gorrell, W. T.; Cunnan, W. S.

    1979-01-01

    The NASA two-stage fan was tested with a low-aspect-ratio first-stage rotor having no midspan dampers. At design speed the fan achieved an adiabatic design efficiency of 0.846, and peak efficiencies for the first stage and rotor of 0.870 and 0.906, respectively. Peak efficiency occurred very close to the stall line. In an attempt to improve stall margin, the fan was retested with circumferentially grooved casing treatment and with a series of stator blade resets. Results showed no improvement in stall margin with casing treatment, but stall margin increased to 8 percent with stator blade resets.

  11. Effect of tip clearance on performance of small axial hydraulic turbine

    NASA Technical Reports Server (NTRS)

    Boynton, J. L.; Rohlik, H. E.

    1976-01-01

    The first two stages of a six-stage liquid oxygen turbine were tested in water. One- and two-stage performance was determined for one shrouded and two unshrouded blade end configurations over ranges of clearance and blade-jet speed ratio. First-stage, two-stage, and second-stage efficiencies are included, as well as the effect of clearance on mass flow for two-stage operation.

  12. Managing for efficiency in health care: the case of Greek public hospitals.

    PubMed

    Mitropoulos, Panagiotis; Mitropoulos, Ioannis; Sissouras, Aris

    2013-12-01

    This paper evaluates the efficiency of public hospitals with two alternative conceptual models. One model targets resource usage directly to assess production efficiency, while the other model incorporates financial results to assess economic efficiency. Performance analysis of these models was conducted in two stages. In stage one, we utilized data envelopment analysis to obtain the efficiency score of each hospital, while in stage two we took into account the influence of the operational environment on efficiency by regressing those scores on explanatory variables that concern the performance of hospital services. We applied these methods to evaluate 96 general hospitals in the Greek national health system. The results indicate that, although the average efficiency scores in both models have remained relatively stable compared to past assessments, internal changes in hospital performances do exist. This study provides a clear framework for policy implications to increase the overall efficiency of general hospitals.
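    The first-stage efficiency score can be illustrated with a standard input-oriented CCR DEA linear program (a generic sketch with toy data; neither the hospital dataset nor the exact DEA variant of the paper is reproduced):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR DEA score of unit o (envelopment form).
    X: (m inputs x n units), Y: (s outputs x n units).
    Decision variables: [theta, lambda_1..lambda_n]; minimize theta
    subject to X@lam <= theta*x_o and Y@lam >= y_o, lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.block([[-X[:, [o]], X],          # X@lam - theta*x_o <= 0
                     [np.zeros((s, 1)), -Y]])  # -Y@lam <= -y_o
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return float(res.x[0])

# toy example: 2 inputs (beds, staff), 1 output (cases), 3 hospitals
X = np.array([[10.0, 20.0, 30.0],
              [5.0,  8.0, 20.0]])
Y = np.array([[100.0, 200.0, 200.0]])
scores = [ccr_efficiency(X, Y, o) for o in range(3)]
```

In a second stage, the resulting scores would be regressed on environmental variables, as the abstract describes.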

  13. A more rational approach to new-product development.

    PubMed

    Bonabeau, Eric; Bodick, Neil; Armstrong, Robert W

    2008-03-01

    Companies often treat new-product development as a monolithic process, but it can be more rationally divided into two parts: an early stage that focuses on evaluating prospects and eliminating bad bets, and a late stage that maximizes the remaining candidates' market potential. Recognizing the value of this approach, Eli Lilly designed and piloted Chorus, an autonomous unit dedicated solely to the early stage. This article demonstrates how segmenting development in this way can speed it up and make it more cost-effective. Two classes of decision-making errors can impede NPD, the authors say. First, managers often ignore evidence challenging their assumptions that projects will succeed. As a result, many projects go forward despite multiple red flags; some even reach the market, only to fail dramatically after their introduction. Second, companies sometimes terminate projects prematurely because people fail to conduct the right experiments to reveal products' potential. Most companies promote both kinds of errors by focusing disproportionately on late-stage development; they lack the early, truth-seeking functions that would head such errors off. In segmented NPD, however, the early-stage organization maintains loyalty to the experiment rather than the product, whereas the late-stage organization pursues commercial success. Chorus has significantly improved NPD efficiency and productivity at Lilly. Although the unit absorbs just one-tenth of Lilly's investment in early-stage development, it delivers a substantially greater fraction of the molecules slated for late Phase II trials--at almost twice the speed and less than a third of the cost of the standard process, sometimes shaving as much as two years off the usual development time.

  14. Ventricular beat classifier using fractal number clustering.

    PubMed

    Bakardjian, H

    1992-09-01

    A two-stage ventricular beat 'associative' classification procedure is described. The first stage separates typical beats from extrasystoles on the basis of area and polarity rules. At the second stage, the extrasystoles are classified in self-organised cluster formations of adjacent shape parameter values. This approach avoids the use of threshold values for discrimination between ectopic beats of different shapes, which could be critical in borderline cases. A pattern shape feature conventionally called a 'fractal number', in combination with a polarity attribute, was found to be a good criterion for waveform evaluation. An additional advantage of this pattern classification method is its good computational efficiency, which affords the opportunity to implement it in real-time systems.

  15. Mathematical models for optimization of the centrifugal stage of a refrigerating compressor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuzhdin, A.S.

    1987-09-01

    The authors describe a general approach to the creation of mathematical models of energy and head losses in the flow part of the centrifugal compressor. The mathematical model of the pressure head and efficiency of a two-section stage proposed in this paper is meant for determining its characteristics for the assigned geometric dimensions and for optimizing by variance calculations. Characteristic points on the plot of velocity distribution over the margin of the vanes of the impeller and the diffuser of the centrifugal stage with a combined diffuser are presented. To assess the reliability of the mathematical model, the authors compared some calculated data with experimental data.

  16. A Multi-Stage Reverse Logistics Network Problem by Using Hybrid Priority-Based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu

    Today the remanufacturing problem is one of the most important problems regarding the environmental aspects of the recovery of used products and materials. Therefore, reverse logistics is gaining importance and shows great potential for winning consumers in a more competitive context in the future. This paper considers the multi-stage reverse Logistics Network Problem (m-rLNP), minimizing the total cost, which involves the reverse logistics shipping cost and the fixed cost of opening the disassembly centers and processing centers. In this study, we first formulate the m-rLNP model as a three-stage logistics network model. For solving this problem, we then propose a Genetic Algorithm (GA) with a priority-based encoding method consisting of two stages, and introduce a new crossover operator called Weight Mapping Crossover (WMX). Additionally, a heuristic approach is applied in the third stage to ship materials from processing centers to manufacturers. Finally, numerical experiments with various scales of m-rLNP models demonstrate the effectiveness and efficiency of our approach in comparison with recent research.

  17. Stochastic Multi-Commodity Facility Location Based on a New Scenario Generation Technique

    NASA Astrophysics Data System (ADS)

    Mahootchi, M.; Fattahi, M.; Khakbazan, E.

    2011-11-01

    This paper extends two models for the stochastic multi-commodity facility location problem. The problem is formulated as a two-stage stochastic program. As a main point of this study, a new algorithm is applied to efficiently generate scenarios for uncertain correlated customer demands. This algorithm uses Latin Hypercube Sampling (LHS) and a scenario reduction approach. The relation between customer satisfaction level and cost is considered in model I. A risk measure using Conditional Value-at-Risk (CVaR) is embedded into optimization model II. Here, the structure of the network contains three facility layers: plants, distribution centers, and retailers. The first-stage decisions are the number, locations, and capacities of distribution centers. The second-stage decisions are the production quantities and the volumes of transportation between plants and customers.
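    A minimal sketch of LHS scenario generation for correlated demands (illustrative only; the paper's actual algorithm and its scenario-reduction step are not reproduced, and normal marginals are an assumption):

```python
import numpy as np
from scipy.stats import norm

def lhs_correlated_normal(n_scen, mean, cov, seed=0):
    """Latin Hypercube scenarios for correlated normal demands:
    one stratified uniform draw per scenario in each dimension,
    mapped to standard-normal scores, then correlated via the
    Cholesky factor of the target covariance (toy sketch)."""
    rng = np.random.default_rng(seed)
    d = len(mean)
    # random permutation of strata per dimension, jittered within strata
    u = (np.argsort(rng.random((n_scen, d)), axis=0)
         + rng.random((n_scen, d))) / n_scen
    z = norm.ppf(u)                    # approximately independent N(0,1)
    L = np.linalg.cholesky(cov)
    return mean + z @ L.T              # correlated demand scenarios

mean = np.array([100.0, 50.0])
cov = np.array([[400.0, 120.0],
                [120.0, 100.0]])
scen = lhs_correlated_normal(2000, mean, cov)
```

Stratifying each marginal makes the scenario set cover the demand space more evenly than plain Monte Carlo for the same number of scenarios; imposing correlation afterwards slightly perturbs the stratification, which is the usual trade-off of this construction.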

  18. Two-axis tracking using translation stages for a lens-to-channel waveguide solar concentrator.

    PubMed

    Liu, Yuxiao; Huang, Ran; Madsen, Christi K

    2014-10-20

    A two-axis tracking scheme designed for <250x concentration, realized by a single-axis mechanical tracker and a translation stage, is discussed. The translation stage is used to adjust positions for seasonal sun movement. It provides two-dimensional (x-y) tracking instead of horizontal (x-only) movement. This tracking method is compatible with planar waveguide solar concentrators. A prototype system with 50x concentration shows >75% optical efficiency throughout the year in simulation and >65% efficiency experimentally. This efficiency can be further improved by the use of anti-reflection layers and a larger waveguide refractive index.

  19. Two-Stage Parameter Estimation in Confined Coastal Aquifers

    NASA Astrophysics Data System (ADS)

    Hsu, N.

    2003-12-01

    Using field observations of tidal level and piezometric head at an observation well, this research develops a two-stage parameter estimation approach for estimating the hydraulic conductivity (T) and storage coefficient (S) of a confined aquifer in a coastal area. While the y-axis coincides with the coastline, the x-axis extends from zero to infinity and, therefore, the domain of the aquifer is assumed to be a half plane. Other assumptions include homogeneity, isotropy and constant thickness of the aquifer, and zero initial head distribution. In the first stage, fluctuations of the tidal level and piezometric head at the observation well are collected simultaneously without the influence of pumping. Fourier spectral analysis is used to find the autocorrelation and cross-correlation of the two sets of observations as well as the phase vs. frequency function. The tidal efficiency and time delay can then be computed. The analytical solution of Ferris (1951) is then used to compute the ratio T/S. In the second stage, the system is stressed with pumping, and observations of the tidal level and piezometric head at the observation well are collected simultaneously. The effect of the tide on the observation well without pumping can be computed from the analytical solution of Ferris (1951) based upon the identified ratio T/S and is subtracted from the piezometric head observations to obtain the updated piezometric head. The Theis equation coupled with the method of images is then applied to the updated piezometric head to obtain the T and S values. The developed approach is applied to a hypothetical aquifer. The results obtained show convergence of the approach. The robustness of the developed approach is also demonstrated by using noise-corrupted observations.
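    For context, the Ferris (1951) relations invoked in stage one take the following standard forms (reproduced here from the general hydrogeology literature, not from this abstract; x is the distance from the coastline and t_0 the tidal period):

```latex
% head response at distance x from the coast, tidal amplitude h_0
h(x,t) = h_0 \, e^{-x\sqrt{\pi S/(t_0 T)}}
         \sin\!\Big(\tfrac{2\pi t}{t_0} - x\sqrt{\tfrac{\pi S}{t_0 T}}\Big)
% tidal efficiency and time lag
E = e^{-x\sqrt{\pi S/(t_0 T)}}, \qquad
t_L = x\sqrt{\tfrac{t_0 S}{4\pi T}}
% hence the ratio identified in stage one
\frac{T}{S} = \frac{\pi x^2}{t_0 (\ln E)^2} = \frac{x^2\, t_0}{4\pi\, t_L^2}
```

Either the tidal efficiency E or the time lag t_L alone fixes only the ratio T/S, which is why the second (pumping) stage is needed to separate T and S individually.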

  20. Two-stage, low noise advanced technology fan. 4: Aerodynamic final report

    NASA Technical Reports Server (NTRS)

    Harley, K. G.; Keenan, M. J.

    1975-01-01

    A two-stage research fan was tested to provide technology for designing a turbofan engine for an advanced, long-range commercial transport having a cruise Mach number of 0.85-0.9 and a noise level 20 EPNdB below current requirements. The fan design tip speed was 365.8 m/sec (1200 ft/sec); the hub/tip ratio was 0.4; the design pressure ratio was 1.9; and the design specific flow was 209.2 kg/sec/sq m (42.85 lbm/sec/sq ft). Two fan versions were tested: a baseline configuration, and an acoustically treated configuration with a sonic inlet device. The baseline version was tested with uniform inlet flow and with tip-radial and hub-radial inlet flow distortions. The baseline fan with uniform inlet flow attained an efficiency of 86.4% at design speed, but the stall margin was low. Tip-radial distortion increased stall margin 4 percentage points at design speed and reduced peak efficiency one percentage point. Hub-radial distortion decreased stall margin 4 percentage points at all speeds and reduced peak efficiency at design speed 8 percentage points. At design speed, the sonic inlet in the cruise position reduced stall margin one percentage point and efficiency 1.5 to 4.5 percentage points. The sonic inlet in the approach position reduced stall margin 2 percentage points.

  1. Achievement of ultrahigh solar concentration with potential for efficient laser pumping.

    PubMed

    Gleckman, P

    1988-11-01

    Measurements are reported of the irradiance produced by a two-stage solar concentrator designed to approach the thermodynamic limit. Sunlight is collected by a 40.6-cm diam parabolic primary which forms a 0.98-cm diam image. The image is reconcentrated by a nonimaging refracting secondary with index n = 1.53 to a final aperture 1.27 mm in diameter. Thus the geometrical concentration ratio is 102,000. The highest irradiance value achieved was 4.4 +/- 0.2 kW cm(-2), or 56,000 +/- 5000 suns, relative to a solar disk insolation of 800 W m(-2). This is greater than the previous peak solar irradiance record by nearly a factor of 3, and it is 68% of that existing at the solar surface itself. The efficiency with which we concentrated 55 W of sunlight to a small spot suggests that our two-stage system would be an excellent candidate for solar pumping of solid state lasers.

  2. Collaboration with Pharma Will Introduce Nanotechnologies in Early Stage Drug Development | FNLCR Staging

    Cancer.gov

    The Frederick National Lab has begun to assist several major pharmaceutical companies in adopting nanotechnologies in early stage drug development, when the approach is most efficient and cost-effective. For some time, the national lab’s Nanotechno

  3. Quantum image encryption based on restricted geometric and color transformations

    NASA Astrophysics Data System (ADS)

    Song, Xian-Hua; Wang, Shen; Abd El-Latif, Ahmed A.; Niu, Xia-Mu

    2014-08-01

    A novel encryption scheme for quantum images based on restricted geometric and color transformations is proposed. The new strategy comprises efficient permutation and diffusion properties for quantum image encryption. The core idea of the permutation stage is to scramble the codes of the pixel positions through restricted geometric transformations. Then, a new quantum diffusion operation is implemented on the permutated quantum image based on restricted color transformations. The encryption keys of the two stages are generated by two sensitive chaotic maps, which can ensure the security of the scheme. The final step, measurement, is built on the probabilistic model. Experiments and statistical analyses demonstrate significant improvements in favor of the proposed approach.

  4. Time-varying SMART design and data analysis methods for evaluating adaptive intervention effects.

    PubMed

    Dai, Tianjiao; Shete, Sanjay

    2016-08-30

    In a standard two-stage SMART design, the intermediate response to the first-stage intervention is measured at a fixed time point for all participants. Subsequently, responders and non-responders are re-randomized and the final outcome of interest is measured at the end of the study. To reduce the side effects and costs associated with first-stage interventions in a SMART design, we proposed a novel time-varying SMART design in which individuals are re-randomized to the second-stage interventions as soon as a pre-fixed intermediate response is observed. With this strategy, the duration of the first-stage intervention will vary. We developed a time-varying mixed effects model and a joint model that allows for modeling the outcomes of interest (intermediate and final) and the random durations of the first-stage interventions simultaneously. The joint model borrows strength from the survival sub-model in which the duration of the first-stage intervention (i.e., time to response to the first-stage intervention) is modeled. We performed a simulation study to evaluate the statistical properties of these models. Our simulation results showed that the two modeling approaches were both able to provide good estimations of the means of the final outcomes of all the embedded interventions in a SMART. However, the joint modeling approach was more accurate for estimating the coefficients of first-stage interventions and time of the intervention. We conclude that the joint modeling approach provides more accurate parameter estimates and a higher estimated coverage probability than the single time-varying mixed effects model, and we recommend the joint model for analyzing data generated from time-varying SMART designs. In addition, we showed that the proposed time-varying SMART design is cost-efficient and equally effective in selecting the optimal embedded adaptive intervention as the standard SMART design.

  5. Real-time energy-saving metro train rescheduling with primary delay identification

    PubMed Central

    Li, Keping; Schonfeld, Paul

    2018-01-01

    This paper aims to reschedule online metro trains in delay scenarios. A graph representation and a mixed integer programming model are proposed to formulate the optimization problem. The solution approach is a two-stage optimization method. In the first stage, based on a proposed train state graph and system analysis, the primary and flow-on delays are specifically analyzed and identified with a critical path algorithm. For the second stage a hybrid genetic algorithm is designed to optimize the schedule, with the delay identification results as input. Then, based on the infrastructure data of Beijing Subway Line 4 of China, case studies are presented to demonstrate the effectiveness and efficiency of the solution approach. The results show that the algorithm can quickly and accurately identify primary delays among different types of delays. The economic cost of energy consumption and total delay is considerably reduced (by more than 10% in each case). The computation time of the Hybrid-GA is low enough for rescheduling online. Sensitivity analyses further demonstrate that the proposed approach can be used as a decision-making support tool for operators. PMID:29474471

  6. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed supplement I to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly. This implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate parameters of the input distributions. We will illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach will be compared to the standard GUM approach for finite samples using simple non-linear measurement equations. We will investigate performance in terms of coverage probabilities of derived confidence intervals.
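    The two-stage idea can be sketched for a toy measurement equation y = a·b (the thermistor case study and the paper's exact resampling scheme are not reproduced; the chi-square variance draw assumes normal inputs):

```python
import numpy as np

def two_stage_mc(sample_a, sample_b, n_outer=500, n_inner=500, seed=0):
    """Two-stage Monte Carlo for y = a * b with inputs estimated from
    finite samples: the outer loop draws plausible (mean, sd) parameters
    for each input, reflecting finite-sample uncertainty; the inner
    loop propagates those parameters through the measurement equation."""
    rng = np.random.default_rng(seed)
    ys = []
    for _ in range(n_outer):
        means, sds = [], []
        for s in (sample_a, sample_b):
            n = len(s)
            # variance draw: (n-1) s^2 / chi^2_{n-1}, then mean given sd
            sd = s.std(ddof=1) * np.sqrt((n - 1) / rng.chisquare(n - 1))
            means.append(rng.normal(s.mean(), sd / np.sqrt(n)))
            sds.append(sd)
        a = rng.normal(means[0], sds[0], n_inner)
        b = rng.normal(means[1], sds[1], n_inner)
        ys.append(a * b)
    return np.concatenate(ys)

rng = np.random.default_rng(1)
y = two_stage_mc(rng.normal(2.0, 0.1, 10), rng.normal(3.0, 0.1, 10))
lo, hi = np.percentile(y, [2.5, 97.5])   # interval widened by stage one
```

Treating the size-10 samples as exact (a single-stage run with fixed sample means and standard deviations) would yield a narrower interval; the outer loop is what restores realistic coverage.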

  7. Spectrum Access In Cognitive Radio Using a Two-Stage Reinforcement Learning Approach

    NASA Astrophysics Data System (ADS)

    Raj, Vishnu; Dias, Irene; Tholeti, Thulasi; Kalyani, Sheetal

    2018-02-01

    With the advent of the 5th generation of wireless standards and an increasing demand for higher throughput, methods to improve the spectral efficiency of wireless systems have become very important. In the context of cognitive radio, a substantial increase in throughput is possible if the secondary user can make smart decisions regarding which channel to sense and when or how often to sense. Here, we propose an algorithm not only to select a channel for data transmission but also to predict how long the channel will remain unoccupied, so that the time spent on channel sensing can be minimized. Our algorithm learns in two stages: a reinforcement learning approach for channel selection and a Bayesian approach to determine the optimal duration for which sensing can be skipped. Comparisons with other learning methods are provided through extensive simulations. We show that the number of sensing operations is minimized with negligible increase in primary interference; this implies that less energy is spent by the secondary user in sensing and higher throughput is achieved by saving on sensing.
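    The channel-selection stage can be illustrated with a simple epsilon-greedy bandit (an assumed stand-in for the paper's reinforcement learning method; the Bayesian sensing-duration stage is omitted, and the idle probabilities are invented):

```python
import random

def select_channels(idle_prob, steps=5000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: learn which channel is most often idle.
    idle_prob[i] is the (hidden) probability channel i is free;
    reward is 1 when the sensed channel turns out to be idle."""
    rng = random.Random(seed)
    n = len(idle_prob)
    counts = [0] * n
    values = [0.0] * n                   # running mean reward per channel
    for _ in range(steps):
        if rng.random() < eps:
            ch = rng.randrange(n)                        # explore
        else:
            ch = max(range(n), key=values.__getitem__)   # exploit
        r = 1.0 if rng.random() < idle_prob[ch] else 0.0
        counts[ch] += 1
        values[ch] += (r - values[ch]) / counts[ch]      # incremental mean
    return counts, values

counts, values = select_channels([0.2, 0.5, 0.9])
```

After enough steps the secondary user concentrates its sensing on the channel with the highest estimated idle probability, which is the behavior the abstract's first stage relies on.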

  8. Performance of two-stage fan with larger dampers on first-stage rotor

    NASA Technical Reports Server (NTRS)

    Urasek, D. C.; Cunnan, W. S.; Stevans, W.

    1979-01-01

    The performance of a two-stage, high-pressure-ratio fan having large part-span vibration dampers on the first-stage rotor is presented and compared with that of an aerodynamically identical fan having smaller dampers. Comparisons of the data for the two damper configurations show that with increased damper size: (1) very high losses in the damper region reduced the overall efficiency of the first-stage rotor by approximately 3 percentage points, (2) the overall performance of each blade row downstream of the damper was not significantly altered, although appreciable differences in the radial distributions of various performance parameters were noted, and (3) the lower performance of the first-stage rotor decreased the overall fan efficiency by more than 1 percentage point.

  9. Collaboration with Pharma Will Introduce Nanotechnologies in Early Stage Drug Development | Poster

    Cancer.gov

    The Frederick National Lab has begun to assist several major pharmaceutical companies in adopting nanotechnologies in early stage drug development, when the approach is most efficient and cost-effective.

  10. An adaptive two-stage sequential design for sampling rare and clustered populations

    USGS Publications Warehouse

    Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.

    2008-01-01

    How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher-density primary sample units are usually of more interest than lower-density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second-stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process makes the design more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
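    The allocation idea can be sketched as follows (a toy illustration with assumed detection probabilities, sample sizes, and threshold; the design's actual sequential rule and its estimator are not shown):

```python
import random

def allocate_effort(densities, n_init=5, n_extra=10, thresh=1, seed=0):
    """For each primary unit, survey n_init secondary plots, where each
    plot detects the rare species with probability given by the unit's
    density; units whose initial detections exceed thresh receive
    n_extra additional second-stage plots (toy allocation sketch)."""
    rng = random.Random(seed)
    effort = []
    for d in densities:
        hits = sum(rng.random() < d for _ in range(n_init))
        effort.append(n_init + (n_extra if hits > thresh else 0))
    return effort

# rare, clustered population: most primary units empty, a few dense
effort = allocate_effort([0.0] * 8 + [0.6, 0.8])
```

Extra effort thus lands only where the initial sample suggests high abundance, which is the efficiency gain the abstract describes for rare and clustered populations.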

  11. EVALUATION OF A TWO-STAGE PASSIVE TREATMENT APPROACH FOR MINING INFLUENCE WATERS

    EPA Science Inventory

    A two-stage passive treatment approach was assessed at bench-scale using two Colorado Mining Influenced Waters (MIWs). The first-stage was a limestone drain with the purpose of removing iron and aluminum and mitigating the potential effects of mineral acidity. The second stage w...

  12. Unmanned Aerial Vehicles for Alien Plant Species Detection and Monitoring

    NASA Astrophysics Data System (ADS)

    Dvořák, P.; Müllerová, J.; Bartaloš, T.; Brůna, J.

    2015-08-01

    Invasive species spread rapidly and their eradication is difficult. New methods enabling fast and efficient monitoring are urgently needed for their successful control. Remote sensing can improve early detection of invading plants and make their management more efficient and less expensive. In an ongoing project in the Czech Republic, we aim at developing innovative methods of mapping invasive plant species (semi-automatic detection algorithms) by using a purposely designed unmanned aircraft (UAV). We examine possibilities for the detection of two tree and two herb invasive species. Our aim is to establish a fast, repeatable and efficient computer-assisted method of timely monitoring, reducing the costs of extensive field campaigns. To find the best detection algorithm we test various classification approaches (object-based, pixel-based and hybrid). Thanks to its flexibility and low cost, the UAV enables assessing the effect of phenological stage and spatial resolution, and is most suitable for monitoring the efficiency of eradication efforts. However, several challenges exist in UAV application, such as geometrical and radiometric distortions, the high amount of data to be processed and legal constraints for UAV flight missions over urban areas (often highly invaded). The newly proposed UAV approach shall serve invasive species researchers, management practitioners and policy makers.

  13. Energy-Saving Control of a Novel Hydraulic Drive System for Field Walking Robot

    NASA Astrophysics Data System (ADS)

    Fang, Delei; Shang, Jianzhong; Xue, Yong; Yang, Junhong; Wang, Zhuo

    2018-01-01

    To improve the efficiency of the hydraulic drive system in a field walking robot, this paper proposes a novel hydraulic system based on a two-stage pressure source. Based on an analysis of the low efficiency of the robot's single-stage hydraulic system, the paper first introduces the concept and design of the two-stage pressure source drive system. Then, energy-saving control of the new hydraulic system is planned according to the characteristics of the walking robot. The feasibility of the new hydraulic system is proved by a simulation of the walking robot squatting. Finally, the efficiencies of the two types of hydraulic system are calculated, indicating that the novel hydraulic system can increase efficiency by 41.5%, which contributes to knowledge about hydraulic drive systems for field walking robots.

  14. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE PAGES

    Lin, Fu; Leyffer, Sven; Munson, Todd

    2016-04-12

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking a convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound in a provably finite number of steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
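    The aggregation and cut-generation loop described above can be illustrated on a toy two-variable system (hypothetical data for illustration only, not the authors' implementation):

```python
# Toy illustration of constraint aggregation by convex combination
# and of adding back violated original constraints as cuts.

def aggregate(group, weights):
    """Convex combination of constraints a.x <= b in `group`."""
    assert abs(sum(weights) - 1.0) < 1e-12
    a = [sum(w * g[0][j] for w, g in zip(weights, group))
         for j in range(len(group[0][0]))]
    b = sum(w * g[1] for w, g in zip(weights, group))
    return a, b

def violated(constraints, x, tol=1e-9):
    """Return the original constraints that point x violates."""
    return [(a, b) for a, b in constraints
            if sum(ai * xi for ai, xi in zip(a, x)) > b + tol]

# Two original constraints: x0 + x1 <= 1 and x0 - x1 <= 0.
group = [([1.0, 1.0], 1.0), ([1.0, -1.0], 0.0)]
coarse = aggregate(group, [0.5, 0.5])   # single surrogate: x0 <= 0.5

# A point feasible for the coarse constraint may still violate an
# original one; that violated cut is then added to the coarse model.
x = [0.4, -0.5]            # satisfies x0 <= 0.5
cuts = violated(group, x)  # x0 - x1 = 0.9 > 0 is violated
```

    Any point satisfying all original constraints also satisfies the convex combination, which is why the surrogate only relaxes the group and cuts must be re-added until the semi-coarse model is satisfied.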

  16. Efficient Spatio-Temporal Local Binary Patterns for Spontaneous Facial Micro-Expression Recognition

    PubMed Central

    Wang, Yandan; See, John; Phan, Raphael C.-W.; Oh, Yee-Hui

    2015-01-01

    Micro-expression recognition is still in the preliminary stage, owing much to the numerous difficulties faced in the development of datasets. Since micro-expression is an important affective clue for clinical diagnosis and deceit analysis, much effort has gone into the creation of these datasets for research purposes. There are currently two publicly available spontaneous micro-expression datasets—SMIC and CASME II, both with baseline results released using the widely used dynamic texture descriptor LBP-TOP for feature extraction. Although LBP-TOP is popular and widely used, it is still not compact enough. In this paper, we draw further inspiration from the concept of LBP-TOP that considers three orthogonal planes by proposing two efficient approaches for feature extraction. The compact robust form described by the proposed LBP-Six Intersection Points (SIP) and a super-compact LBP-Three Mean Orthogonal Planes (MOP) not only preserves the essential patterns, but also reduces the redundancy that affects the discriminability of the encoded features. Through a comprehensive set of experiments, we demonstrate the strengths of our approaches in terms of recognition accuracy and efficiency. PMID:25993498
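    The basic LBP primitive that LBP-TOP (and, by extension, the proposed SIP/MOP variants) builds on can be sketched as follows; this is a generic textbook illustration, not the paper's exact descriptor:

```python
# 8-bit local binary pattern (LBP) code for the centre pixel of a
# 3x3 patch: each neighbour contributes a bit if it is >= the centre.

def lbp_code(patch):
    c = patch[1][1]
    # neighbours taken clockwise from the top-left corner
    neighbours = [patch[0][0], patch[0][1], patch[0][2],
                  patch[1][2], patch[2][2], patch[2][1],
                  patch[2][0], patch[1][0]]
    return sum((1 << i) for i, p in enumerate(neighbours) if p >= c)

patch = [[6, 5, 2],
         [7, 6, 1],
         [9, 8, 7]]
code = lbp_code(patch)  # bits set where neighbour >= centre (6)
```

    LBP-TOP applies this per-plane on the XY, XT and YT planes of a video volume and concatenates the histograms; SIP and MOP reduce the number of sampled points and planes to compact the representation.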

  17. Two-Stage Design Method for Enhanced Inductive Energy Transmission with Q-Constrained Planar Square Loops.

    PubMed

    Eteng, Akaa Agbaeze; Abdul Rahim, Sharul Kamal; Leow, Chee Yen; Chew, Beng Wah; Vandenbosch, Guy A E

    2016-01-01

    Q-factor constraints are usually imposed on conductor loops employed as proximity-range High Frequency Radio Frequency Identification (HF-RFID) reader antennas to ensure adequate data bandwidth. However, pairing such low Q-factor loops in inductive energy transmission links restricts the link transmission performance. The contribution of this paper is to assess the improvement achieved with a two-stage design method in the transmission performance of a planar square loop relative to an initial design, without compromising a Q-factor constraint. The first stage of the synthesis flow is analytical in approach, and determines the number and spacing of turns by which coupling between similar paired square loops can be enhanced with low deviation from the Q-factor limit presented by an initial design. The second stage applies full-wave electromagnetic simulations to determine more appropriate turn spacings and widths to match the Q-factor constraint and achieve improved coupling relative to the initial design. Evaluating the design method in a test scenario yielded a more than 5% increase in link transmission efficiency, as well as an improvement in the link fractional bandwidth of more than 3%, without violating the loop Q-factor limit. These transmission performance enhancements indicate a potential for modifying proximity HF-RFID reader antennas for efficient inductive energy transfer and data telemetry links.

  18. Survival strategies in semi-arid climate for isohydric and anisohydric species

    NASA Astrophysics Data System (ADS)

    Guerin, M. F.; Gentine, P.; Uriarte, M.

    2013-12-01

    Understanding survival strategies in drylands remains a challenging problem centered on the interrelationship between local hydrology, plant physiology and climate. Carbon starvation and hydraulic failure are thought to be the two main factors leading to drought-induced mortality, besides biotic perturbation. To better comprehend mortality, the abiotic mechanisms triggering it are studied in a tractable model of the soil-plant-atmosphere continuum emphasizing the role of soil hydraulic properties, photosynthesis, embolism, leaf gas exchange and climate. In particular, the role of the frequency vs. the intensity of droughts is highlighted within this model. The analysis of the model includes a differentiation between isohydric and anisohydric tree regulation and is supported by an extensive dataset of piñon and juniper growing in a semi-arid ecosystem. The number of parameters was reduced by using allometric equations to characterize the trees' main traits and their hydraulic controls. Leaf area, sapwood area and tree height are used to derive the capacitance, conductance and photosynthetic abilities of the plant. A parameter sensitivity analysis is performed, highlighting the role of root:shoot ratio, rooting depth, photosynthetic capacity, quantum efficiency, and most importantly water use efficiency. Analytic development emphasizes two regimes of transpiration/photosynthesis, denoted stage-I (no embolism) and stage-II (embolism dominated), in analogy with the stage I/stage II terminology for evaporation (Philip, 1957). Anisohydric species tend to remain in stage-I, during which they can still assimilate carbon at full potential, thus avoiding carbon starvation. Isohydric species tend to remain longer in stage-II. The effects of drought intensity/frequency on these two stages are described.
Figure: sensitivity of piñon stage-I duration (top left), stage-II duration (top right), total cavitation duration (sum of stages I and II; bottom left) and time to carbon starvation (defined as the zero-crossing of NSC content; bottom right) to leaf area index (LAI) and root:shoot area.

  19. Design and Experimental Performance of a Two Stage Partial Admission Turbine, Task B.1/B.4

    NASA Technical Reports Server (NTRS)

    Sutton, R. F.; Boynton, J. L.; Akian, R. A.; Shea, Dan; Roschak, Edmund; Rojas, Lou; Orr, Linsey; Davis, Linda; King, Brad; Bubel, Bill

    1992-01-01

    A three-inch mean diameter, two-stage turbine with partial admission in each stage was experimentally investigated over a range of admissions and angular orientations of admission arcs. Three configurations were tested in which first stage admission varied from 37.4 percent (10 of 29 passages open, 5 per side) to 6.9 percent (2 open, 1 per side). Corresponding second stage admissions were 45.2 percent (14 of 31 passages open, 7 per side) and 12.9 percent (4 open, 2 per side). Angular positions of the second stage admission arcs with respect to the first stage varied over a range of 70 degrees. Design and off-design efficiency and flow characteristics for the three configurations are presented. The results indicated that peak efficiency and the corresponding isentropic velocity ratio decreased as the arcs of admission were decreased. Both efficiency and flow characteristics were sensitive to the second stage nozzle orientation angles.

  20. High-efficiency concentration/multi-solar-cell system for orbital power generation

    NASA Technical Reports Server (NTRS)

    Onffroy, J. R.; Stoltzmann, D. E.; Lin, R. J. H.; Knowles, G. R.

    1980-01-01

    An analysis was performed to determine the economic feasibility of a concentrating spectrophotovoltaic orbital electrical power generation system. In this system dichroic beam-splitting mirrors are used to divide the solar spectrum into several wavebands. Absorption of these wavebands by solar cells with matched energy bandgaps increases the cell efficiency while decreasing the amount of heat which must be rejected. The optical concentration is performed in two stages. The first concentration stage employs a Cassegrain-type telescope, resulting in a short system length. The output from this stage is directed to compound parabolic concentrators which comprise the second stage of concentration. Ideal efficiencies for one-, two-, three-, and four-cell systems were calculated under 1000 sun, AM0 conditions, and optimum energy bands were determined. Realistic efficiencies were calculated for various combinations of Si, GaAs, Ge and GaP. Efficiencies of 32 to 33 percent were obtained with the multicell systems. The optimum system consists of an f/3.5 optical system, a beam splitter to divide the spectrum at 0.9 microns, and two solar cell arrays, GaAs and Si.

  1. Emergency department injury surveillance and aetiological research: bridging the gap with the two-stage case-control study design.

    PubMed

    Hagel, Brent E

    2011-04-01

    To provide an overview of the two-stage case-control study design and its potential application to ED injury surveillance data and to apply this approach to published ED data on the relation between brain injury and bicycle helmet use. Relevant background is presented on injury aetiology and case-control methodology with extension to the two-stage case-control design in the context of ED injury surveillance. The design is then applied to data from a published case-control study of the relation between brain injury and bicycle helmet use with motor vehicle involvement considered as a potential confounder. Taking into account the additional sampling at the second stage, the adjusted and corrected odds ratio and 95% confidence interval for the brain injury-helmet use relation is presented and compared with the estimate from the entire original dataset. Contexts where the two-stage case-control study design might be most appropriately applied to ED injury surveillance data are suggested. The adjusted odds ratio for the relation between brain injury and bicycle helmet use based on all data (n = 2833) from the original study was 0.34 (95% CI 0.25 to 0.46) compared with an estimate from a two-stage case-control design of 0.35 (95% CI 0.25 to 0.48) using only a fraction of the original subjects (n = 480). Application of the two-stage case-control study design to ED injury surveillance data has the potential to dramatically reduce study time and resource costs with acceptable losses in statistical efficiency.
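    The core odds-ratio computation behind such a study can be sketched with hypothetical counts; in the actual two-stage design, each cell would first be reweighted by the inverse of its second-stage sampling fraction (the numbers below are illustrative, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts chosen for illustration.
or_, lo, hi = odds_ratio_ci(20, 80, 50, 70)   # OR = 0.35
```

    The second-stage sampling drives up the standard error (wider CI) in exchange for collecting confounder data on far fewer subjects, which is the efficiency trade-off the abstract quantifies.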

  2. Comparative Analysis of Combined (First Anterior, Then Posterior) Versus Only Posterior Approach for Treating Severe Scoliosis

    PubMed Central

    Hero, Nikša; Vengust, Rok; Topolovec, Matevž

    2017-01-01

    Study Design. A retrospective, single-center, institutional review board approved study. Objective. Two methods of operative treatment were compared to evaluate whether a two-stage approach is justified for correction of larger idiopathic scoliosis curves. Two-stage surgery combined an anterior approach in the first operation with posterior instrumentation and correction in the second operation; one-stage surgery included only posterior instrumentation and correction. Summary of Background Data. Studies comparing the two-stage approach with the posterior-only approach are rather scarce, with shorter follow-up and a lack of clinical data. Methods. Three hundred forty-eight patients with idiopathic scoliosis were operated on using Cotrel–Dubousset (CD) hybrid instrumentation with pedicle screws and hooks. Only patients with curvatures of 61° or more were analyzed, divided into two groups: two-stage surgery (N = 30) and one-stage surgery (N = 46). The radiographic parameters as well as duration of operation, hospitalization time, number of segments included in fusion and clinical outcome were analyzed. Results. No statistically significant difference in correction was observed between the two-stage group (average correction 69%) and the posterior-only group (average correction 66%). However, there were statistically significant differences regarding hospitalization time, duration of surgery, and the number of instrumented segments. Conclusion. Two-stage surgery has only a limited advantage in terms of postoperative correction angle compared with the posterior approach. Posterior instrumentation and correction is satisfactory, especially considering that the patient undergoes only one surgery. Level of Evidence: 3 PMID:28125525

  3. Cycle Analysis of Two-stage Planar SOFC Power Generation by Series Connection of Low and High Temperature SOFCs

    NASA Astrophysics Data System (ADS)

    Ohba, Takahiro; Takezawa, Shinya; Araki, Takuto; Onda, Kazuo; Sakaki, Yoshinori

    A solid oxide fuel cell (SOFC) can be composed entirely of solid components, and high power generation efficiency for the whole cycle is obtained by using the high-temperature exhaust heat for fuel reforming and bottoming power generation. Recently, low-temperature SOFCs, which run in the temperature range of around 600°C or above, have been developed with high power generation efficiency. Meanwhile, a multi-stage power generation system has been proposed by the United States DOE. In this study, a two-stage SOFC power generation system formed by the series connection of low- and high-temperature SOFCs is analyzed. Overpotential data for the low-temperature SOFC used in this study are based on recently published data, and those for the high-temperature SOFC are based on our previous study. The analytical results show a two-stage SOFC power generation efficiency of 50.3% and a total power generation efficiency of 56.1% under a standard operating condition.
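    As a rough illustration of why staging can raise overall efficiency, assume (purely for illustration; this is not the paper's cycle model) that stage 1 converts a fraction eta1 of the fuel energy and stage 2 converts a fraction eta2 of what remains:

```python
# Toy energy bookkeeping for two stages in series: the second stage
# harvests part of the energy left unconverted by the first stage.
# eta1, eta2 are hypothetical per-stage conversion fractions.

def staged_efficiency(eta1, eta2):
    return eta1 + (1.0 - eta1) * eta2

combined = staged_efficiency(0.35, 0.25)  # 0.35 + 0.65 * 0.25
```

    The combined value always exceeds either single-stage fraction, which is the qualitative reason staged generation (here 50.3% for the stack, 56.1% total with the bottoming cycle) outperforms a single stage.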

  4. Atom-Economical Dimerization Strategy by the Rhodium-Catalyzed Addition of Carboxylic Acids to Allenes: Protecting-Group-Free Synthesis of Clavosolide A and Late-Stage Modification.

    PubMed

    Haydl, Alexander M; Breit, Bernhard

    2015-12-14

    Natural products of polyketide origin with a high level of symmetry, in particular C2-symmetric diolides as a special macrolactone-based product class, often possess a broad spectrum of biological activity. An efficient route to this important structural motif was developed as part of a concise and highly convergent synthesis of clavosolide A. This strategy features an atom-economic "head-to-tail" dimerization by the stereoselective rhodium-catalyzed addition of carboxylic acids to terminal allenes with the simultaneous construction of two new stereocenters. The excellent efficiency and selectivity with which the C2-symmetric core structures were obtained are remarkable considering the outcome under classical dimerization conditions. Furthermore, this approach facilitates late-stage modification and provides ready access to potential new lead structures. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Measuring the performance of Internet companies using a two-stage data envelopment analysis model

    NASA Astrophysics Data System (ADS)

    Cao, Xiongfei; Yang, Feng

    2011-05-01

    In exploring the business operations of Internet companies, few researchers have used data envelopment analysis (DEA) to evaluate their performance. Since Internet companies have a two-stage production process, marketability and profitability, this study employs a relational two-stage DEA model to assess the efficiency of 40 dot-com firms. The results show that our model performs better in measuring efficiency and is able to discriminate the causes of inefficiency, thus helping business management to be more effective by providing more guidance for business performance improvement.

  6. Two-stage fan. 4: Performance data for stator setting angle optimization

    NASA Technical Reports Server (NTRS)

    Burger, G. D.; Keenan, M. J.

    1975-01-01

    Stator setting angle optimization tests were conducted on a two-stage fan to improve efficiency at overspeed, stall margin at design speed, and both efficiency and stall margin at part speed. The fan has a design pressure ratio of 2.8, a flow rate of 184.2 lb/sec (83.55 kg/sec) and a 1st-stage rotor tip speed of 1450 ft/sec (441.96 m/sec). Performance was obtained at 70, 100, and 105 percent of design speed with different combinations of 1st-stage and 2nd-stage stator settings. One combination of settings, other than design, was common to all three speeds. At design speed, a 2.0 percentage point increase in stall margin was obtained at the expense of a 1.3 percentage point efficiency decrease. At 105 percent speed, efficiency was improved by 1.8 percentage points but stall margin decreased 4.7 percentage points. At 70 percent speed, no change in stall margin or operating line efficiency was obtained with stator resets, although considerable speed-flow regulation occurred.

  7. An alternative approach based on artificial neural networks to study controlled drug release.

    PubMed

    Reis, Marcus A A; Sinisterra, Rubén D; Belchior, Jadson C

    2004-02-01

    An alternative methodology based on artificial neural networks is proposed to be a complementary tool to other conventional methods to study controlled drug release. Two systems are used to test the approach; namely, hydrocortisone in a biodegradable matrix and rhodium (II) butyrate complexes in a bioceramic matrix. Two well-established mathematical models are used to simulate different release profiles as a function of fundamental properties; namely, diffusion coefficient (D), saturation solubility (C(s)), drug loading (A), and the height of the device (h). The models were tested, and the results show that these fundamental properties can be predicted after learning the experimental or model data for controlled drug release systems. The neural network results obtained after the learning stage can be considered to quantitatively predict ideal experimental conditions. Overall, the proposed methodology was shown to be efficient for ideal experiments, with a relative average error of <1% in both tests. This approach can be useful for the experimental analysis to simulate and design efficient controlled drug-release systems. Copyright 2004 Wiley-Liss, Inc. and the American Pharmacists Association

  8. A spectral approach for discrete dislocation dynamics simulations of nanoindentation

    NASA Astrophysics Data System (ADS)

    Bertin, Nicolas; Glavas, Vedran; Datta, Dibakar; Cai, Wei

    2018-07-01

    We present a spectral approach to perform nanoindentation simulations using three-dimensional nodal discrete dislocation dynamics. The method relies on a two-step procedure. First, the contact problem between an indenter of arbitrary shape and an isotropic elastic half-space is solved using a spectral iterative algorithm, and the contact pressure is fully determined on the half-space surface. The contact pressure is then used as a boundary condition of the spectral solver to determine the resulting stress field produced in the simulation volume. In both stages, the mechanical fields are decomposed into Fourier modes and are efficiently computed using fast Fourier transforms. To further improve the computational efficiency, the method is coupled with a subcycling integrator and a special approach is devised to approximate the displacement field associated with surface steps. As a benchmark, the method is used to compute the response of an elastic half-space using different types of indenter. An example of a dislocation dynamics nanoindentation simulation with complex initial microstructure is presented.

  9. Descent Assisted Split Habitat Lunar Lander Concept

    NASA Technical Reports Server (NTRS)

    Mazanek, Daniel D.; Goodliff, Kandyce; Cornelius, David M.

    2008-01-01

    The Descent Assisted Split Habitat (DASH) lunar lander concept utilizes a disposable braking stage for descent and a minimally sized pressurized volume for crew transport to and from the lunar surface. The lander can also be configured to perform autonomous cargo missions. Although a braking-stage approach represents a significantly different operational concept compared with a traditional two-stage lander, the DASH lander offers many important benefits. These benefits include improved crew egress/ingress and large-cargo unloading; excellent surface visibility during landing; elimination of the need for deep-throttling descent engines; potentially reduced plume-surface interactions and lower vertical touchdown velocity; and reduced lander gross mass through efficient mass staging and volume segmentation. This paper documents the conceptual study on various aspects of the design, including development of sortie and outpost lander configurations and a mission concept of operations; the initial descent trajectory design; the initial spacecraft sizing estimates and subsystem design; and the identification of technology needs.

  10. Theory and tests of two-phase turbines

    NASA Technical Reports Server (NTRS)

    Elliott, D. G.

    1982-01-01

    A theoretical model for two-phase turbines was developed. Apparatus was constructed for testing one- and two-stage turbines (using speed decrease from stage to stage). Turbines were tested with water and nitrogen mixtures and refrigerant 22. Nozzle efficiencies were 0.78 (measured) and 0.72 (theoretical) for water and nitrogen mixtures at a water/nitrogen mixture ratio of 68, by mass; and 0.89 (measured) and 0.84 (theoretical) for refrigerant 22 expanding from 0.02 quality to 0.28 quality. Blade efficiencies (shaft power before windage and bearing loss divided by nozzle jet power) were 0.63 (measured) and 0.71 (theoretical) for water and nitrogen mixtures and 0.62 (measured) and 0.63 (theoretical) for refrigerant 22 with a single-stage turbine, and 0.70 (measured) and 0.85 (theoretical) for water and nitrogen mixtures with a two-stage turbine.

  11. Enabling Remote Health-Caring Utilizing IoT Concept over LTE-Femtocell Networks.

    PubMed

    Hindia, M N; Rahman, T A; Ojukwu, H; Hanafi, E B; Fattouh, A

    2016-01-01

    As the enterprise of the "Internet of Things" is rapidly gaining widespread acceptance, sensors are being deployed in an unrestrained manner around the world to make efficient use of this new technological evolution. A recent survey has shown that sensor deployments have increased significantly over the past decade and has predicted an upsurge in the future growth rate. In health-care services, for instance, sensors are used as a key technology to enable Internet of Things oriented health-care monitoring systems. In this paper, we propose a two-stage fundamental approach to facilitate the implementation of such a system. In the first stage, sensors promptly gather the particle measurements of an Android application. Then, in the second stage, the collected data are sent over a Femto-LTE network following a new scheduling technique. The proposed scheduling strategy sends the data according to the application's priority. The efficiency of the proposed technique is demonstrated by comparing it with that of well-known algorithms, namely, proportional fairness and exponential proportional fairness.
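    The proportional-fairness baseline the paper compares against can be sketched in its generic textbook form (the update rule and parameters here are illustrative assumptions, not the paper's exact scheduler):

```python
# Proportional-fair scheduling: in each slot, serve the user with the
# largest ratio of instantaneous rate to running-average throughput,
# then update the averages with an exponential filter.

def pf_schedule(inst_rates, avg_thr, alpha=0.1):
    user = max(range(len(inst_rates)),
               key=lambda i: inst_rates[i] / avg_thr[i])
    new_avg = [(1 - alpha) * t
               + alpha * (inst_rates[i] if i == user else 0.0)
               for i, t in enumerate(avg_thr)]
    return user, new_avg

# User 1 has the better rate *relative to its past throughput*
# (0.8/1.0 vs 1.0/2.0), so it is scheduled even though its raw rate
# is lower.
user, avg = pf_schedule([1.0, 0.8], [2.0, 1.0])
```

    Priority-aware variants like the one proposed in the paper typically weight this metric per traffic class, so delay-sensitive health data is favoured without starving other users.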

  13. A Dual-Stage Two-Phase Model of Selective Attention

    ERIC Educational Resources Information Center

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  14. Comparison of Two Multidisciplinary Optimization Strategies for Launch-Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, R. D.; Powell, R. W.; Lepsch, R. A.; Stanley, D. O.; Kroo, I. M.

    1995-01-01

    The investigation focuses on development of a rapid multidisciplinary analysis and optimization capability for launch-vehicle design. Two multidisciplinary optimization strategies, in which the analyses are integrated in different manners, are implemented and evaluated for solution of a single-stage-to-orbit launch-vehicle design problem. Weights and sizing, propulsion, and trajectory issues are directly addressed in each optimization process. Additionally, the need to maintain a consistent vehicle model across the disciplines is discussed. Both solution strategies were shown to obtain similar solutions from two different starting points. These solutions suggest that a dual-fuel, single-stage-to-orbit vehicle with a dry weight of approximately 1.927 x 10(exp 5) lb, gross liftoff weight of 2.165 x 10(exp 6) lb, and length of 181 ft is attainable. A comparison of the two approaches demonstrates that the treatment of disciplinary coupling has a direct effect on optimization convergence and the required computational effort. In comparison with the first solution strategy, which is of the general form typically used within the launch-vehicle design community at present, the second optimization approach is shown to be 3-4 times more computationally efficient.

  15. A two-stage approach for fully automatic segmentation of venous vascular structures in liver CT images

    NASA Astrophysics Data System (ADS)

    Kaftan, Jens N.; Tek, Hüseyin; Aach, Til

    2009-02-01

    The segmentation of the hepatic vascular tree in computed tomography (CT) images is important for many applications such as surgical planning of oncological resections and living liver donations. In surgical planning, vessel segmentation is often used as a basis to support the surgeon in the decision about the location of the cut to be performed and the extent of the liver to be removed, respectively. We present a novel approach to hepatic vessel segmentation that can be divided into two stages. First, we detect and delineate the core vessel components efficiently with a high specificity. Second, smaller vessel branches are segmented by a robust vessel tracking technique based on a medialness filter response, which starts from the terminal points of the previously segmented vessels. Specifically, in the first phase major vessels are segmented using the globally optimal graph cuts algorithm in combination with foreground and background seed detection, while the computationally more demanding tracking approach needs to be applied only locally in areas of smaller vessels within the second stage. The method has been evaluated on contrast-enhanced liver CT scans from clinical routine, showing promising results. In addition to the fully automatic instance of this method, the vessel tracking technique can also be used to easily add missing branches/sub-trees to an already existing segmentation result by adding single seed points.

  16. Efficient removal of lignin with the maintenance of hemicellulose from kenaf by two-stage pretreatment process.

    PubMed

    Wan Azelee, Nur Izyan; Md Jahim, Jamaliah; Rabu, Amir; Abdul Murad, Abdul Munir; Abu Bakar, Farah Diba; Md Illias, Rosli

    2014-01-01

    The enhancement of lignocellulose hydrolysis using enzyme complexes requires an efficient pretreatment process to render the substrate susceptible to enzyme attack. This study focuses on removing a major part of the lignin layer from kenaf (Hibiscus cannabinus) while simultaneously maintaining most of the hemicellulose. A two-stage pretreatment process is adopted using calcium hydroxide, Ca(OH)₂, and peracetic acid, PAA, to break the recalcitrant lignin layer away from the other structural polysaccharides. An experimental screening of several pretreatment chemicals, concentrations, temperatures and solid-liquid ratios enabled the design of an optimal pretreatment process for kenaf. Our results showed that the pretreatment process provided 59.25% lignin removal while maintaining 87.72% and 96.17% of the hemicellulose and cellulose, respectively, using 1 g of Ca(OH)₂/L and an 8:1 (mL:g) liquid-to-Ca(OH)₂ ratio at 50 °C for 1.5 h, followed by 20% peracetic acid pretreatment at 75 °C for 2 h. These results validate this mild approach for aiding future enzymatic hydrolysis. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Reducing Bottlenecks to Improve the Efficiency of the Lung Cancer Care Delivery Process: A Process Engineering Modeling Approach to Patient-Centered Care.

    PubMed

    Ju, Feng; Lee, Hyo Kyung; Yu, Xinhua; Faris, Nicholas R; Rugless, Fedoria; Jiang, Shan; Li, Jingshan; Osarogiagbon, Raymond U

    2017-12-01

    The process of lung cancer care from initial lesion detection to treatment is complex, involving multiple steps, each introducing the potential for substantial delays. Identifying the steps with the greatest delays enables a focused effort to improve the timeliness of care delivery without sacrificing quality. We retrospectively reviewed clinical events from initial detection, through histologic diagnosis, radiologic and invasive staging, and medical clearance, to surgery for all patients who had an attempted resection of a suspected lung cancer in a community healthcare system. We used a computer process modeling approach to evaluate delays in care delivery, in order to identify potential 'bottlenecks' in waiting time, the reduction of which could produce greater care efficiency. We also conducted 'what-if' analyses to predict the relative impact of simulated changes in the care delivery process and thereby determine the most efficient pathways to surgery. The waiting time between radiologic lesion detection and diagnostic biopsy, and the waiting time from radiologic staging to surgery, were the two most critical bottlenecks impeding efficient care delivery (reducing them had more than three times the impact of reducing other waiting times). Additionally, instituting surgical consultation prior to cardiac consultation for medical clearance, and decreasing the waiting time between CT scans and diagnostic biopsies, were potentially the most impactful measures for reducing care delays before surgery. Rigorous computer simulation modeling, using clinical data, can provide useful information for identifying areas where process engineering can improve the efficiency of care delivery for patients who receive surgery for lung cancer.
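    The 'what-if' analysis described above can be illustrated with a minimal Monte Carlo sketch. The step names, the exponential waiting-time assumption, and all numeric values below are hypothetical illustrations, not figures from the study:

```python
import random

# Hypothetical mean waiting times (days) for sequential steps on the
# path from lesion detection to surgery -- illustrative values only.
BASELINE = {
    "detection_to_biopsy": 20.0,   # radiologic detection -> diagnostic biopsy
    "biopsy_to_staging": 10.0,     # biopsy -> radiologic staging
    "staging_to_surgery": 25.0,    # staging -> surgery (second bottleneck)
}

def simulate(mean_waits, n_patients=10_000, seed=0):
    """Return the mean total time to surgery, drawing each step's wait
    from an exponential distribution (a common queueing assumption)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_patients):
        total += sum(rng.expovariate(1.0 / m) for m in mean_waits.values())
    return total / n_patients

baseline = simulate(BASELINE)

# 'What-if' scenario: halve the detection-to-biopsy bottleneck only.
scenario = dict(BASELINE, detection_to_biopsy=10.0)
improved = simulate(scenario)

print(f"baseline {baseline:.1f} d, scenario {improved:.1f} d")
```

    Shrinking the dominant bottleneck reduces the simulated mean time to surgery by roughly its own mean, which is the intuition behind targeting bottleneck steps first.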

  18. Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure

    USGS Publications Warehouse

    Salehi, M.; Smith, D.R.

    2005-01-01

    Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such populations, is based on sampling a neighborhood of units around any unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but that avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of the population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population in which the variable of interest took a value of either 0 or 1 (e.g., indicating presence or absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive the estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.

  19. A framework for performance measurement in university using extended network data envelopment analysis (DEA) structures

    NASA Astrophysics Data System (ADS)

    Kashim, Rosmaini; Kasim, Maznah Mat; Rahman, Rosshairy Abd

    2015-12-01

    Measuring university performance is essential for the efficient allocation and utilization of educational resources. In most previous studies, performance measurement in universities emphasized operational efficiency and resource utilization without investigating the university's ability to fulfill the needs of its stakeholders and society. Therefore, assessment of university performance should be separated into two stages, namely efficiency and effectiveness. In conventional DEA analysis, a decision making unit (DMU), or in this context a university, is generally treated as a black box, which ignores the operation and interdependence of its internal processes. When this happens, the results obtained can be misleading. Thus, this paper suggests an alternative framework for measuring the overall performance of a university that incorporates both efficiency and effectiveness and applies a network DEA model. Network DEA models are recommended because this approach takes into account the interrelationship between the efficiency and effectiveness processes in the system. The framework also focuses on a university structure that is expanded from the purely hierarchical to include a series of horizontal relationships between subordinate units, by assuming that both an intermediate unit and its subordinate units can generate output(s). Three conceptual models are proposed to evaluate the performance of a university. An efficiency model is developed in the first stage using a hierarchical network model. It is followed by an effectiveness model, which takes the output(s) of the first-stage hierarchical structure as input(s) to the second stage. As a result, a new overall performance model is proposed by combining the efficiency and effectiveness models. Once this overall model is realized and utilized, the university's top management can determine the overall performance of each unit more accurately and systematically. Moreover, the results from the network DEA model offer superior benchmarking power over conventional models.
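    For readers unfamiliar with the underlying machinery, the conventional single-stage DEA that such frameworks extend can be sketched as a small linear program. The following is a minimal input-oriented CCR envelopment model, assuming SciPy is available; the three-unit dataset is invented for illustration and the network extensions are not shown:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (envelopment form):
    minimise theta s.t. X @ lam <= theta * x0, Y @ lam >= y0, lam >= 0.
    X: inputs (m x n), Y: outputs (s x n); columns are DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision vector z = [theta, lam_1..lam_n]; objective: minimise theta.
    c = np.zeros(1 + n)
    c[0] = 1.0
    # Input constraints:  X @ lam - theta * x0 <= 0
    A_in = np.hstack([-X[:, [j0]], X])
    b_in = np.zeros(m)
    # Output constraints: -Y @ lam <= -y0
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (1 + n))
    return res.x[0]

# Three hypothetical units: one input (staff), one output (graduates).
X = np.array([[2.0, 4.0, 3.0]])
Y = np.array([[2.0, 2.0, 3.0]])
effs = [ccr_efficiency(X, Y, j) for j in range(3)]
```

    A network DEA model adds linked constraints for the internal processes (e.g., intermediate outputs becoming second-stage inputs), but rests on the same LP machinery.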

  20. Efficient two-stage dual-beam noncollinear optical parametric amplifier

    NASA Astrophysics Data System (ADS)

    Cheng, Yu-Hsiang; Gao, Frank Y.; Poulin, Peter R.; Nelson, Keith A.

    2018-06-01

    We have constructed a noncollinear optical parametric amplifier with two signal beams amplified in the same nonlinear crystal. This dual-beam design is more energy-efficient than operating two amplifiers in parallel. The cross-talk between the two beams has been characterized and discussed. We have also added a second amplification stage to enhance the output of one of the arms, which is then frequency-doubled for ultraviolet generation. This single device provides two tunable sources for ultrafast spectroscopy in the ultraviolet and visible regions.

  1. Assembler: Efficient Discovery of Spatial Co-evolving Patterns in Massive Geo-sensory Data.

    PubMed

    Zhang, Chao; Zheng, Yu; Ma, Xiuli; Han, Jiawei

    2015-08-01

    Recent years have witnessed the wide proliferation of geo-sensory applications wherein a bundle of sensors are deployed at different locations to cooperatively monitor the target condition. Given massive geo-sensory data, we study the problem of mining spatial co-evolving patterns (SCPs), i.e., groups of sensors that are spatially correlated and co-evolve frequently in their readings. SCP mining is of great importance to various real-world applications, yet it is challenging because (1) the truly interesting evolutions are often flooded by numerous trivial fluctuations in the geo-sensory time series; and (2) the pattern search space is extremely large due to the spatiotemporal combinatorial nature of SCP. In this paper, we propose a two-stage method called Assembler. In the first stage, Assembler filters trivial fluctuations using wavelet transform and detects frequent evolutions for individual sensors via a segment-and-group approach. In the second stage, Assembler generates SCPs by assembling the frequent evolutions of individual sensors. Leveraging the spatial constraint, it conceptually organizes all the SCPs into a novel structure called the SCP search tree, which facilitates the effective pruning of the search space to generate SCPs efficiently. Our experiments on both real and synthetic data sets show that Assembler is effective, efficient, and scalable.
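    The first-stage filtering of trivial fluctuations can be sketched with a one-level Haar wavelet threshold. This is a dependency-free illustration of the idea, not the authors' exact transform or threshold rule; the signal and threshold are synthetic:

```python
import numpy as np

def haar_filter(x, threshold):
    """One-level Haar wavelet denoising: small detail coefficients are
    treated as trivial fluctuations and zeroed before reconstruction.
    x must have even length."""
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)     # low-frequency trend
    detail = (even - odd) / np.sqrt(2)     # high-frequency fluctuation
    detail[np.abs(detail) < threshold] = 0.0
    y = np.empty_like(x)
    y[0::2] = (approx + detail) / np.sqrt(2)
    y[1::2] = (approx - detail) / np.sqrt(2)
    return y

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 2 * t)               # slow 'true' evolution
noisy = clean + 0.05 * rng.standard_normal(256)  # trivial fluctuations
smoothed = haar_filter(noisy, threshold=0.2)
```

    A production system would use a multi-level transform, but the principle is the same: the slow evolutions of interest survive thresholding while sensor noise is suppressed.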

  2. Efficiency Management in Spaceflight Systems

    NASA Technical Reports Server (NTRS)

    Murphy, Karen

    2016-01-01

    Efficiency in spaceflight is often approached as “faster, better, cheaper – pick two”. The high levels of performance and reliability required for each mission suggest that planners can only control for two of the three. True efficiency comes from optimizing a system across all three parameters. The functional processes of spaceflight become technical requirements on three operational groups during mission planning: payload, vehicle, and launch operations. Given the interrelationships among the functions performed by the operational groups, optimizing function resources from one operational group to the others affects the efficiency of those groups and therefore of the mission overall. This paper outlines such a framework and creates a context in which to understand the effects of resource trades on the overall system, improving the efficiency of the operational groups and the mission as a whole. This allows insight into and optimization of the controlling factors earlier in the mission planning stage.

  3. Multi-site precipitation downscaling using a stochastic weather generator

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Chen, Hua; Guo, Shenglian

    2018-03-01

    Statistical downscaling is an efficient way to resolve the spatiotemporal mismatch between climate model outputs and the data requirements of hydrological models. However, the most commonly used downscaling methods only produce climate change scenarios for a specific site or a watershed average, which cannot drive distributed hydrological models to study the spatial variability of climate change impacts. By coupling a single-site downscaling method and a multi-site weather generator, this study proposes a multi-site downscaling approach for hydrological climate change impact studies. Multi-site downscaling is done in two stages. The first stage spatially downscales climate model-simulated monthly precipitation from the grid scale to a specific site using a quantile mapping method, and the second stage temporally disaggregates monthly precipitation to daily values by adjusting the parameters of a multi-site weather generator. The inter-station correlation is specifically considered using a distribution-free approach along with an iterative algorithm. The performance of the downscaling approach is illustrated using a 10-station watershed as an example. The precipitation time series derived from the National Centers for Environmental Prediction (NCEP) reanalysis dataset is used as the climate model simulation. The precipitation time series of each station is divided into 30 odd-numbered years for calibration and 29 even-numbered years for validation. Several metrics, including the frequencies of wet and dry spells and statistics of the daily, monthly and annual precipitation, are used as criteria to evaluate the multi-site downscaling approach. The results show that the frequencies of wet and dry spells are well reproduced for all stations. In addition, the multi-site downscaling approach performs well with respect to reproducing precipitation statistics, especially at monthly and annual timescales. 
The remaining biases mainly result from the non-stationarity of NCEP precipitation. Overall, the proposed approach is efficient for generating multi-site climate change scenarios that can be used to investigate the spatial variability of climate change impacts on hydrology.
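    The first-stage quantile mapping can be sketched in a few lines of NumPy. The gamma-distributed 'observations' and the bias applied to the 'model' series below are synthetic stand-ins for station precipitation, not data from the study:

```python
import numpy as np

def quantile_map(model_cal, obs_cal, model_new):
    """Empirical quantile mapping: each new model value is located on the
    model's calibration CDF and replaced by the observed value at the
    same quantile (linear interpolation between sorted samples)."""
    m_sorted = np.sort(model_cal)
    o_sorted = np.sort(obs_cal)
    return np.interp(model_new, m_sorted, o_sorted)

rng = np.random.default_rng(42)
obs = rng.gamma(shape=2.0, scale=5.0, size=3000)      # 'observed' precipitation
model = 0.7 * rng.gamma(shape=2.0, scale=5.0, size=3000) + 2.0  # biased 'model'
corrected = quantile_map(model, obs, model)
```

    Applied to the calibration period itself, the corrected series reproduces the observed distribution exactly, which is the defining property of the method; skill on independent data then depends on the stationarity of the model bias.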

  4. Deconvolution of mixing time series on a graph

    PubMed Central

    Blocker, Alexander W.; Airoldi, Edoardo M.

    2013-01-01

    In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, y_t = A x_t, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
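    For intuition, the simplest regularized solution of one ill-posed y_t = A x_t inversion is ridge regression in closed form. This sketch is far simpler than the multilevel state-space model of the paper; the toy routing matrix and penalty value are assumptions for illustration:

```python
import numpy as np

def ridge_deconvolve(A, y, lam):
    """Solve min ||y - A x||^2 + lam ||x||^2 in closed form:
    x = (A^T A + lam I)^{-1} A^T y. The penalty lam stabilises the
    inversion when A is ill-conditioned (many flows, few link counts)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy network: 3 point-to-point flows observed through 2 aggregate links,
# so the system is underdetermined and needs regularization.
A = np.array([[1.0, 1.0, 0.0],    # link 1 carries flows 1 and 2
              [0.0, 1.0, 1.0]])   # link 2 carries flows 2 and 3
x_true = np.array([5.0, 1.0, 3.0])
y = A @ x_true                    # observed aggregate link totals
x_hat = ridge_deconvolve(A, y, lam=0.01)
```

    The role of the paper's first-stage "simple model" is precisely to choose a penalty such as lam in a data-driven way before running the full multilevel inference.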

  5. A critical review on factors influencing fermentative hydrogen production.

    PubMed

    Kothari, Richa; Kumar, Virendra; Pathak, Vinayak V; Ahmad, Shamshad; Aoyi, Ochieng; Tyagi, V V

    2017-03-01

    Biohydrogen production by dark fermentation of different waste materials is a promising approach to producing bioenergy in the context of renewable energy exploration. This communication reviews the various influencing factors of the dark fermentation process, with a detailed account of the determinants of biohydrogen production. It also focuses on factors such as improved bacterial strains, reactor design, metabolic engineering and two-stage processes to enhance the bioenergy productivity of the substrate. The study suggests that complete utilization of substrates for biological hydrogen production requires concentrated research and development on the efficient functioning of microorganisms, with integrated application for energy production and bioremediation. Various studies are taken into account to compare the efficiency of different substrates and operating conditions, along with inhibitory factors and pretreatment options for biohydrogen production. The review reveals that extensive research is needed to establish the field efficiency of processes using low-cost substrates and the integration of dark and photo fermentation. Such an integrated fermentation approach may compete with the conventional hydrogen process and could eventually replace it.

  6. Performance characteristics of a slagging gasifier for MHD combustor systems

    NASA Technical Reports Server (NTRS)

    Smith, K. O.

    1979-01-01

    The performance of a two-stage coal combustor concept for magnetohydrodynamic (MHD) systems was investigated analytically. The two-stage MHD combustor comprises an entrained-flow slagging gasifier as the first stage and a gas-phase reactor as the second stage. The first stage was modeled by assuming instantaneous coal devolatilization, with volatiles combustion and char gasification by CO2 and H2O in plug flow. The second-stage combustor was modeled assuming adiabatic, instantaneous gas-phase reactions. Of primary interest was the dependence of char gasification efficiency on first-stage particle residence time. The influence of first-stage stoichiometry, heat loss, coal moisture, coal size distribution, and degree of coal devolatilization on gasifier performance and second-stage exhaust temperature was determined. Performance predictions indicate that particle residence times on the order of 500 msec would be required to achieve gasification efficiencies in the range of 90 to 95 percent. The use of a finer coal size distribution significantly reduces the required gasifier residence time for acceptable levels of fuel-use efficiency. Residence time requirements are also decreased by increased levels of coal devolatilization. Combustor design efforts should maximize devolatilization by minimizing the mixing times associated with coal injection.
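    The quoted residence-time requirement can be made concrete with a back-of-the-envelope first-order conversion model, X(t) = 1 − exp(−kt). The rate constant below is an assumed, purely illustrative value, not one derived from the study:

```python
import math

def residence_time(conversion, k):
    """Time to reach char conversion X under assumed first-order
    gasification kinetics, X(t) = 1 - exp(-k t)."""
    return -math.log(1.0 - conversion) / k

k = 5.0  # assumed effective gasification rate constant, 1/s (illustrative)
t90 = residence_time(0.90, k)   # ~0.46 s
t95 = residence_time(0.95, k)   # ~0.60 s
```

    With k = 5 s⁻¹, 90% conversion needs about 0.46 s and 95% about 0.60 s, consistent in order of magnitude with the ~500 msec figure; a larger effective k (finer particles, more devolatilization) shortens the requirement accordingly.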

  7. Tests of a 2-Stage, Axial-Flow, 2-Phase Turbine

    NASA Technical Reports Server (NTRS)

    Elliott, D. G.

    1982-01-01

    A two-phase-flow turbine with two stages of axial-flow impulse rotors was tested with three different working-fluid mixtures at a shaft power of 30 kW. The turbine efficiency was 0.55 with nitrogen and water of 0.02 quality and 94 m/s velocity, 0.57 with Refrigerant 22 of 0.27 quality and 123 m/s velocity, and 0.30 with steam and water of 0.27 quality and 457 m/s velocity. The efficiencies with nitrogen-and-water and with Refrigerant 22 were 86 percent of theoretical. At that fraction of theoretical, the efficiencies of optimized two-phase turbines would be in the low 60 percent range with organic working fluids and in the mid 50 percent range with steam and water. The recommended turbine design is a two-stage axial-flow impulse turbine followed by a rotary separator for discharge of separate liquid and gas streams and recovery of liquid pressure.

  8. Efficient and stable production of Modified Vaccinia Ankara virus in two-stage semi-continuous and in continuous stirred tank cultivation systems.

    PubMed

    Tapia, Felipe; Jordan, Ingo; Genzel, Yvonne; Reichl, Udo

    2017-01-01

    One important aim in cell culture-based viral vaccine and vector production is the implementation of continuous processes. Such a development has the potential to reduce costs of vaccine manufacturing, as volumetric productivity is increased and the manufacturing footprint is reduced. In this work, continuous production of Modified Vaccinia Ankara (MVA) virus was investigated. First, a semi-continuous two-stage cultivation system consisting of two shaker flasks in series was established as a small-scale approach. Cultures of the avian AGE1.CR.pIX cell line were expanded in the first shaker, and MVA virus was propagated and harvested in the second shaker over a period of 8-15 days. A total of nine small-scale cultivations were performed to investigate the impact of process parameters on virus yields. Harvest volumes of 0.7-1 L with maximum TCID50 titers of up to 1.0×10⁹ virions/mL were obtained. Genetic analysis of control experiments using a recombinant MVA virus containing green fluorescent protein suggested that the virus was stable over at least 16 d of cultivation. In addition, a decrease or fluctuation of infectious units that might indicate an excessive accumulation of defective interfering particles was not observed. The process was automated in a two-stage continuous system comprising two connected 1 L stirred tank bioreactors. Stable MVA virus titers and a total production volume of 7.1 L with an average TCID50 titer of 9×10⁷ virions/mL were achieved. Because titers were at the lower end of the range of the shake-flask cultivations, the potential for further process optimization at large scale is discussed. Overall, MVA virus was efficiently produced in continuous and semi-continuous cultivations, making two-stage stirred tank bioreactor systems a promising platform for industrial production of MVA-derived recombinant vaccines and viral vectors.

  9. A two-stage path planning approach for multiple car-like robots based on PH curves and a modified harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun

    2017-11-01

    In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and a Modified Harmony Search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed, in which the first stage minimizes the total length of all paths and the second stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrates the effectiveness of the proposed approach.
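    A basic harmony search iteration, which the paper modifies, can be sketched as follows. The toy objective below stands in for the path-length/completion-time costs, and the parameter values are conventional defaults rather than the authors' settings:

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iterations=2000, seed=0):
    """Minimise f over box bounds with basic harmony search.
    hms: harmony memory size, hmcr: memory-consideration rate,
    par: pitch-adjustment rate, bw: pitch-adjustment bandwidth."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    costs = [f(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:                 # draw from memory
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-bw, bw)
            else:                                   # random exploration
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))
        c = f(new)
        worst = max(range(hms), key=costs.__getitem__)
        if c < costs[worst]:                        # replace worst harmony
            memory[worst], costs[worst] = new, c
    best = min(range(hms), key=costs.__getitem__)
    return memory[best], costs[best]

# Toy stand-in objective: squared distance to a target waypoint at (1, -2).
best_x, best_cost = harmony_search(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                                   bounds=[(-5, 5), (-5, 5)])
```

    In the paper's setting, the decision variables would instead parameterize the PH curves, and the two objectives would be handled in sequence across the two stages.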

  10. Evaluation of a Web-Based App Demonstrating an Exclusionary Algorithmic Approach to TNM Cancer Staging

    PubMed Central

    2015-01-01

    Background TNM staging plays a critical role in the evaluation and management of a range of different types of cancers. The conventional combinatorial approach to the determination of an anatomic stage relies on the identification of distinct tumor (T), node (N), and metastasis (M) classifications to generate a TNM grouping. This process is inherently inefficient due to the need for scrupulous review of the criteria specified for each classification to ensure accurate assignment. An exclusionary approach to TNM staging based on sequential constraint of options may serve to minimize the number of classifications that need to be reviewed to accurately determine an anatomic stage. Objective Our aim was to evaluate the usability and utility of a Web-based app configured to demonstrate an exclusionary approach to TNM staging. Methods Internal medicine residents, surgery residents, and oncology fellows engaged in clinical training were asked to evaluate a Web-based app developed as an instructional aid incorporating (1) an exclusionary algorithm that polls tabulated classifications and sorts them into ranked order based on frequency counts, (2) reconfiguration of classification criteria to generate disambiguated yes/no questions that function as selection and exclusion prompts, and (3) a selectable grid of TNM groupings that provides dynamic graphic demonstration of the effects of sequentially selecting or excluding specific classifications. Subjects were asked to evaluate the performance of this app after completing exercises simulating the staging of different types of cancers encountered during training. Results Survey responses indicated high levels of agreement with statements supporting the usability and utility of this app. 
Subjects reported that its user interface provided a clear display with intuitive controls and that the exclusionary approach to TNM staging it demonstrated represented an efficient process of assignment that helped to clarify distinctions between tumor, node, and metastasis classifications. High overall usefulness ratings were bolstered by supplementary comments suggesting that this app might be readily adopted for use in clinical practice. Conclusions A Web-based app that utilizes an exclusionary algorithm to prompt the assignment of tumor, node, and metastasis classifications may serve as an effective instructional aid demonstrating an efficient and informative approach to TNM staging. PMID:28410163
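    The exclusionary algorithm described (classifications ranked by frequency counts, then pruned by yes/no selection and exclusion prompts) can be sketched with a toy table. The groupings below are invented for illustration and are NOT real AJCC staging criteria:

```python
from collections import Counter

# Hypothetical toy staging table: each row maps a (T, N, M) combination
# to an anatomic stage group. Illustrative only -- not real AJCC data.
GROUPINGS = [
    ("T1", "N0", "M0", "I"),
    ("T2", "N0", "M0", "II"),
    ("T1", "N1", "M0", "III"),
    ("T2", "N1", "M0", "III"),
    ("T1", "N0", "M1", "IV"),
    ("T2", "N1", "M1", "IV"),
]

def next_question(rows):
    """Poll the classifications in the remaining rows and return the most
    frequent one -- the question whose answer prunes the table fastest."""
    counts = Counter(cls for row in rows for cls in row[:3])
    return counts.most_common(1)[0][0]

def answer(rows, classification, applies):
    """Select (keep) or exclude rows containing the classification."""
    if applies:
        return [r for r in rows if classification in r[:3]]
    return [r for r in rows if classification not in r[:3]]

# Example session: the tumour is not M0 -> exclude; it is T1 -> select.
rows = answer(GROUPINGS, "M0", applies=False)
rows = answer(rows, "T1", applies=True)
stages = {r[3] for r in rows}
print(stages)  # {'IV'}
```

    Two answers suffice here because each response constrains all three axes at once, which is the efficiency argument made for the exclusionary approach.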

  11. Stepwise sensitivity analysis from qualitative to quantitative: Application to the terrestrial hydrological modeling of a Conjunctive Surface-Subsurface Process (CSSP) land surface model

    NASA Astrophysics Data System (ADS)

    Gan, Yanjun; Liang, Xin-Zhong; Duan, Qingyun; Choi, Hyun Il; Dai, Yongjiu; Wu, Huan

    2015-06-01

    An uncertainty quantification framework was employed to examine the sensitivities of 24 model parameters from a newly developed Conjunctive Surface-Subsurface Process (CSSP) land surface model (LSM). The sensitivity analysis (SA) was performed over 18 representative watersheds in the contiguous United States to examine the influence of model parameters in the simulation of terrestrial hydrological processes. Two normalized metrics, relative bias (RB) and Nash-Sutcliffe efficiency (NSE), were adopted to assess the fit between simulated and observed streamflow discharge (SD) and evapotranspiration (ET) for a 14 year period. SA was conducted using a multiobjective two-stage approach, in which the first stage was a qualitative SA using the Latin Hypercube-based One-At-a-Time (LH-OAT) screening, and the second stage was a quantitative SA using the Multivariate Adaptive Regression Splines (MARS)-based Sobol' sensitivity indices. This approach combines the merits of qualitative and quantitative global SA methods, and is effective and efficient for understanding and simplifying large, complex system models. Ten of the 24 parameters were identified as important across different watersheds. The contribution of each parameter to the total response variance was then quantified by Sobol' sensitivity indices. Generally, parameter interactions contribute the most to the response variance of the CSSP, and only 5 out of 24 parameters dominate model behavior. Four photosynthetic and respiratory parameters are shown to be influential to ET, whereas reference depth for saturated hydraulic conductivity is the most influential parameter for SD in most watersheds. Parameter sensitivity patterns mainly depend on hydroclimatic regime, as well as vegetation type and soil texture. This article was corrected on 26 JUN 2015. See the end of the full text for details.
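    The first-stage LH-OAT screening combines a Latin hypercube sample with one-at-a-time perturbations. The following is a simplified, elementary-effects-style NumPy sketch of that idea, not the exact formulation used in the paper; the toy model and parameter ranges are assumptions:

```python
import numpy as np

def lh_oat_screening(f, bounds, n_points=20, delta=0.05, seed=0):
    """Qualitative screening: from each Latin hypercube base point,
    perturb one parameter at a time by a fraction `delta` of its range
    and average the absolute change in the model output."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    # Latin hypercube: one stratified sample per interval, per dimension.
    u = (rng.permuted(np.tile(np.arange(n_points), (dim, 1)), axis=1).T
         + rng.random((n_points, dim))) / n_points
    base = lo + u * (hi - lo)
    effects = np.zeros(dim)
    for x in base:
        y0 = f(x)
        for j in range(dim):
            xp = x.copy()
            xp[j] += delta * (hi[j] - lo[j])  # may step slightly past hi;
            effects[j] += abs(f(xp) - y0)     # acceptable for a screening sketch
    return effects / n_points

# Toy model: parameter a dominates, b is weak, c is inert.
f = lambda p: 10.0 * p[0] + 0.5 * p[1] + 0.0 * p[2]
ranks = lh_oat_screening(f, [(0, 1), (0, 1), (0, 1)])
```

    Parameters whose averaged effect is negligible (here, the third) would be fixed, and only the survivors passed to the quantitative Sobol' stage.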

  12. Methodological approach to simulation and choice of ecologically efficient and energetically economic wind turbines (WT)

    NASA Astrophysics Data System (ADS)

    Bespalov, Vadim; Udina, Natalya; Samarskaya, Natalya

    2017-10-01

    The use of wind energy is one of the promising directions among renewable energy sources. The article reviews a methodological approach to the simulation and selection of ecologically efficient and energetically economical wind turbines (WT) at the design stage, taking into account the characteristics of the natural-territorial complex and the peculiarities of the anthropogenic load at the WT site.

  13. Removal of pharmaceuticals and organic matter from municipal wastewater using two-stage anaerobic fluidized membrane bioreactor.

    PubMed

    Dutta, Kasturi; Lee, Ming-Yi; Lai, Webber Wei-Po; Lee, Chien Hsien; Lin, Angela Yu-Chen; Lin, Cheng-Fang; Lin, Jih-Gaw

    2014-08-01

    The aim of the present study was to treat municipal wastewater in a two-stage anaerobic fluidized membrane bioreactor (AFMBR) system (an anaerobic fluidized bed reactor (AFBR) followed by an AFMBR) using granular activated carbon (GAC) as the carrier medium in both stages. Approximately 95% COD removal efficiency could be obtained when the two-stage AFMBR was operated at a total HRT of 5 h (2 h for the AFBR and 3 h for the AFMBR) and an influent COD concentration of 250 mg/L. About 67% COD and 99% TSS removal efficiency could be achieved by the system treating the effluent from the primary clarifier of a municipal wastewater treatment plant, at an HRT of 1.28 h and an OLR of 5.65 kg COD/m³·d. The system could also effectively remove twenty pharmaceuticals detected in the raw wastewaters, with removal efficiencies in the range of 86-100% except for diclofenac (78%). No membrane fouling control other than the scouring effect of the GAC was required at a flux of 16 LMH. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Collaboration with Pharma Will Introduce Nanotechnologies in Early Stage Drug Development | Frederick National Laboratory for Cancer Research

    Cancer.gov

    The Frederick National Lab has begun to assist several major pharmaceutical companies in adopting nanotechnologies in early stage drug development, when the approach is most efficient and cost-effective. For some time, the national lab’s Nanotechno

  15. Area Determination of Diabetic Foot Ulcer Images Using a Cascaded Two-Stage SVM-Based Classification.

    PubMed

    Wang, Lei; Pedersen, Peder C; Agu, Emmanuel; Strong, Diane M; Tulu, Bengisu

    2017-09-01

    The standard chronic wound assessment method based on visual examination is potentially inaccurate and also represents a significant clinical workload. Hence, computer-based systems providing quantitative wound assessment may be valuable for accurately monitoring wound healing status, with the wound area being the measure best suited to automated analysis. Here, we present a novel approach using support vector machines (SVMs) to determine the wound boundaries on foot ulcer images captured with an image capture box, which provides controlled lighting and range. After superpixel segmentation, a cascaded two-stage classifier operates as follows: in the first stage, k binary SVM classifiers are trained and applied to different subsets of the entire training image dataset, and incorrectly classified instances are collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from the superpixels for use as input at each stage of classifier training. Specifically, color and bag-of-words representations of local dense scale-invariant feature transform (SIFT) features are the descriptors for ruling out irrelevant regions, and color and wavelet-based features are the descriptors for distinguishing healthy tissue from wound regions. Finally, the detected wound boundary is refined by applying the conditional random field method. We have implemented the wound classification on a Nexus 5 smartphone platform, except for training, which was done offline. Results are compared with other classifiers and show that our approach provides high global performance rates (average sensitivity = 73.3%, specificity = 94.6%) and is sufficiently efficient for smartphone-based image analysis.
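    The cascaded two-stage training scheme (stage-1 SVMs on subsets of the training data, a stage-2 SVM trained on the instances they misclassify) can be sketched with scikit-learn on synthetic 2-D data. The synthetic features, the number of subsets, and the rule for combining the two stages at prediction time are all assumptions made for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for 'non-wound' vs 'wound' superpixels.
n = 600
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, 2)),
               rng.normal(2.0, 1.0, (n // 2, 2))])
y = np.repeat([0, 1], n // 2)
idx = rng.permutation(n)
X_train, y_train = X[idx[:400]], y[idx[:400]]
X_test, y_test = X[idx[400:]], y[idx[400:]]

# Stage 1: k SVMs trained on k subsets; collect the training instances
# that their own model gets wrong (assumes both classes appear among them,
# which holds for overlapping classes like these).
k = 3
stage1, hard_X, hard_y = [], [], []
for part in np.array_split(np.arange(400), k):
    clf = SVC(kernel="linear").fit(X_train[part], y_train[part])
    stage1.append(clf)
    wrong = part[clf.predict(X_train[part]) != y_train[part]]
    hard_X.append(X_train[wrong]); hard_y.append(y_train[wrong])
hard_X, hard_y = np.vstack(hard_X), np.concatenate(hard_y)

# Stage 2: a second SVM trained only on the hard (misclassified) instances.
stage2 = SVC(kernel="linear").fit(hard_X, hard_y)

# One plausible combination rule (an assumption here): average the stage-1
# margins and defer low-margin, ambiguous points to the stage-2 model.
margin = np.mean([c.decision_function(X_test) for c in stage1], axis=0)
pred = (margin > 0).astype(int)
ambiguous = np.abs(margin) < 0.5
pred[ambiguous] = stage2.predict(X_test[ambiguous])
accuracy = np.mean(pred == y_test)
```

    The paper's pipeline additionally works on superpixel color/texture descriptors and refines the result with a conditional random field, neither of which is sketched here.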

  16. Identifying subgroups of patients using latent class analysis: should we use a single-stage or a two-stage approach? A methodological study using a cohort of patients with low back pain.

    PubMed

    Nielsen, Anne Molgaard; Kent, Peter; Hestbaek, Lise; Vach, Werner; Kongsted, Alice

    2017-02-01

    Heterogeneity in patients with low back pain (LBP) is well recognised and different approaches to subgrouping have been proposed. Latent Class Analysis (LCA) is a statistical technique that is increasingly being used to identify subgroups based on patient characteristics. However, as LBP is a complex multi-domain condition, the optimal approach when using LCA is unknown. Therefore, this paper describes the exploration of two approaches to LCA that may help improve the identification of clinically relevant and interpretable LBP subgroups. Baseline data from 928 LBP patients consulting a chiropractor were used as input to the statistical subgrouping. In a single-stage LCA, all variables were modelled simultaneously to identify patient subgroups. In a two-stage LCA, we used the latent class memberships from our previously published LCA within each of six domains of health (activity, contextual factors, pain, participation, physical impairment and psychology) (first stage) as the variables entered into the second stage of the two-stage LCA to identify patient subgroups. The description of the results of the single-stage and two-stage LCA was based on a combination of statistical performance measures, qualitative evaluation of clinical interpretability (face validity) and a comparison of subgroup membership. For the single-stage LCA, a model solution with seven patient subgroups was preferred, and for the two-stage LCA, a model with nine patient subgroups. Both approaches identified similar, but not identical, patient subgroups characterised by (i) mild intermittent LBP, (ii) recent severe LBP and activity limitations, (iii) very recent severe LBP with both activity and participation limitations, (iv) work-related LBP, (v) LBP with several negative consequences and (vi) LBP with nerve root involvement. Both approaches identified clinically interpretable patient subgroups. 
The potential importance of these subgroups needs to be investigated by exploring whether they can be identified in other cohorts and by examining their possible association with patient outcomes. This may inform the selection of a preferred LCA approach.

  17. «Surgery first» or two stage complex rehabilitation plan for patients with malocclusions.

    PubMed

    Andreishchev, A R; Kavrayskaya, A Yu; Nikolaev, A V

    2016-01-01

    The article considers stages of complex rehabilitation treatment plans for patients with bite anomalies. The study included 515 patients with various complex malocclusions. Two-stage and conventional three-stage treatment plans are described, and indications for the two-stage treatment protocol are suggested. The efficiency and stability of the achieved treatment results were evaluated with the help of a system for quantitative analysis of dento-oral-facial disorders.

  18. DEVELOPMENT, TESTING, AND DEMONSTRATION OF AN OPTIMAL FINE COAL CLEANING CIRCUIT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steven R. Hadley; R. Mike Mishra; Michael Placha

    1999-01-27

    The objective of this project was to improve the efficiency of the fine coal froth flotation circuit in commercial coal preparation plants. The plant selected for this project, Cyprus Emerald Coal Preparation Plant, cleans 1200-1400 tph of Pittsburgh seam raw coal and uses conventional flotation cells to clean the minus 100-mesh size fraction. The amount of coal in this size fraction is approximately 80 tph with an average ash content of 35%. The project was carried out in two phases. In Phase I, four advanced flotation cells, i.e., a Jameson cell, an Outokumpu HG tank cell, an open column, and a packed column cell, were subjected to bench-scale testing and demonstration. In Phase II, two of these flotation cells, the Jameson cell and the packed column, were subjected to in-plant, proof-of-concept (POC) pilot plant testing both individually and in two-stage combination in order to ascertain whether a two-stage circuit results in lower levelized production costs. The bench-scale results indicated that the Jameson cell and packed column cell would be amenable to the single- and two-stage flotation approach. POC tests using these cells determined that single-stage coal matter recovery (CMR) of 85% was possible with a product ash content of 5.5-7%. Two-stage operation resulted in a coal recovery of 90% with a clean coal ash content of 6-7.5%. This compares favorably with the plant flotation circuit recovery of 80% at a clean coal ash of 11%.

  19. Closing oil palm yield gaps among Indonesian smallholders through industry schemes, pruning, weeding and improved seeds

    PubMed Central

    Soliman, T.; Lim, F. K. S.; Lee, J. S. H.

    2016-01-01

    Oil palm production has led to large losses of valuable habitats for tropical biodiversity. Sparing of land for nature could in theory be attained if oil palm yields increased. The efficiency of oil palm smallholders is below its potential capacity, but the factors determining efficiency are poorly understood. We employed a two-stage data envelopment analysis approach to assess the influence of agronomic, supply chain and management factors on oil palm production efficiency in 190 smallholders in six villages in Indonesia. The results show that, on average, yield increases of 65% were possible and that fertilizer and herbicide use was excessive and inefficient. Adopting industry-supported scheme management practices, use of high-quality seeds and higher pruning and weeding rates were found to improve efficiency. Smallholder oil palm production intensification in Indonesia has the capacity to increase production by 26%, an equivalent of 1.75 million hectares of land. PMID:27853605

  1. Power Budget Analysis for High Altitude Airships

    NASA Technical Reports Server (NTRS)

    Choi, Sang H.; Elliott, James R.; King, Glen C.

    2006-01-01

    The High Altitude Airship (HAA) has various potential applications and mission scenarios that require onboard energy harvesting and power distribution systems. The energy source considered for the HAA's power budget is solar photon energy that allows the use of either photovoltaic (PV) cells or advanced thermoelectric (ATE) converters. Both PV cells and an ATE system utilizing high performance thermoelectric materials were briefly compared to identify the advantages of ATE for HAA applications in this study. The ATE can generate a higher quantity of harvested energy than PV cells by utilizing the cascaded efficiency of a three-staged ATE in a tandem mode configuration. Assuming that each stage of ATE material has a figure of merit of 5, the cascaded efficiency of a three-staged ATE system approaches an overall conversion efficiency greater than 60%. Based on this estimated efficiency, the configuration of a HAA and the power utility modules are defined.
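How stage efficiencies combine in a tandem cascade can be sketched as follows. The junction temperatures chosen here and the use of a single dimensionless ZT = 5 for every stage are illustrative assumptions only; the attainable cascade figure depends strongly on them:

```python
import numpy as np

def stage_efficiency(th, tc, zt):
    """Maximum thermoelectric efficiency for one stage between hot/cold
    junctions th, tc (K), using the standard reduced-efficiency formula."""
    carnot = (th - tc) / th
    m = np.sqrt(1.0 + zt)
    return carnot * (m - 1.0) / (m + tc / th)

def cascade_efficiency(temps, zt):
    """Stages in tandem: heat not converted by one stage feeds the next,
    so the total is 1 - prod(1 - eta_i)."""
    etas = [stage_efficiency(th, tc, zt) for th, tc in zip(temps, temps[1:])]
    return 1.0 - np.prod([1.0 - e for e in etas])

# Illustrative junction temperatures (K) for a three-stage stack.
eta = cascade_efficiency([800.0, 630.0, 460.0, 300.0], zt=5.0)
print(eta)
```

With these assumed temperatures the cascade lands well below the record's quoted figure; reaching >60% overall would require hotter top-stage junctions and/or higher effective ZT across the stack.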

  2. Compact, highly efficient, single-frequency 25W, 2051nm Tm fiber-based MOPA for CO2 trace-gas laser space transmitter

    NASA Astrophysics Data System (ADS)

    Engin, Doruk; Chuang, Ti; Litvinovitch, Slava; Storm, Mark

    2017-08-01

    Fibertek has developed and demonstrated an ideal high-power, low-risk, low size, weight, and power (SWaP) 2051 nm laser design meeting the lidar requirements for satellite-based global measurement of carbon dioxide (CO2). The laser design provides a path to space for either a coherent lidar approach being developed by NASA Jet Propulsion Laboratory (JPL)1,2 or an Integrated Path Differential Lidar (IPDA) approach developed by Harris Corp using radio frequency (RF) modulation and being flown as part of a NASA Earth Venture Suborbital Mission—NASA's Atmospheric Carbon and Transport - America.3,4 The thulium (Tm) fiber laser amplifies a <500 kHz linewidth distributed feedback (DFB) laser up to 25 W average power in a polarization maintaining (PM) fiber. The design manages and suppresses all deleterious non-linear effects that can cause linewidth broadening or amplified spontaneous emission (ASE) and meets all lidar requirements. We believe the core laser components, architecture, and design margins can support a coherent or IPDA lidar 10-year space mission. With follow-on funding Fibertek can adapt an existing space-based Technology Readiness Level 6 (TRL-6), 20 W erbium fiber laser package for this Tm design and enable a near-term space mission with an electrical-to-optical (e-o) efficiency of >20%. A cladding-pumped PM Tm fiber-based amplifier optimized for high efficiency and high-power operation at 2051 nm is presented. The two-stage amplifier has been demonstrated to achieve 25 W average power and >16 dB polarization extinction ratio (PER) out of a single-mode PM fiber using a <500 kHz linewidth JPL DFB laser5-7 and 43 dB gain. The power amplifier's optical conversion efficiency is 53%. An internal efficiency of 58% is calculated after correcting for passive losses. The two-stage amplifier sustains its highly efficient operation over a temperature range of 5-40°C. 
The absence of stimulated Brillouin scattering (SBS) for the narrow linewidth amplification shows promise for further power scaling.

  3. Experimental Results of the First Two Stages of an Advanced Transonic Core Compressor Under Isolated and Multi-Stage Conditions.

    NASA Technical Reports Server (NTRS)

    Prahst, Patricia S.; Kulkarni, Sameer; Sohn, Ki H.

    2015-01-01

    NASA's Environmentally Responsible Aviation (ERA) Program calls for investigation of the technology barriers associated with improved fuel efficiency for large gas turbine engines. Under ERA, the highly loaded core compressor technology program attempts to realize the fuel burn reduction goal by increasing overall pressure ratio of the compressor to increase thermal efficiency of the engine. Study engines with overall pressure ratio of 60 to 70 are now being investigated. This means that the high pressure compressor would have to almost double in pressure ratio while keeping a high level of efficiency. NASA and GE teamed to address this challenge by testing the first two stages of an advanced GE compressor designed to meet the requirements of a very high pressure ratio core compressor. Previous test experience of a compressor which included these front two stages indicated a performance deficit relative to design intent. Therefore, the current rig was designed to run in 1-stage and 2-stage configurations in two separate tests to assess whether the bow shock of the second rotor interacting with the upstream stage contributed to the unpredicted performance deficit, or if the culprit was due to interaction of rotor 1 and stator 1. Thus, the goal was to fully understand the stage 1 performance under isolated and multi-stage conditions, and additionally to provide a detailed aerodynamic data set for CFD validation. Full use was made of steady and unsteady measurement methods to understand fluid dynamics loss source mechanisms due to rotor shock interaction and endwall losses. This paper will present the description of the compressor test article and its measured performance and operability, for both the single stage and two stage configurations. We focus the paper on measurements at 97% corrected speed with design intent vane setting angles.

  4. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
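A minimal simulation of the two-stage analysis under balanced clusters (cluster means as the outcomes, then an ordinary two-sample t-test) illustrates the exact Type I error control the abstract describes; all parameter values below are arbitrary choices, not those of the study:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
m, n, icc, sims, alpha = 8, 20, 0.05, 4000, 0.05   # illustrative values
tau2, sig2 = icc, 1.0 - icc   # between- and within-cluster variance (total 1)

rej = 0
for _ in range(sims):
    # Two arms, m clusters each, n subjects per cluster, no true effect.
    means = []
    for arm in range(2):
        b = rng.normal(0.0, np.sqrt(tau2), m)        # random cluster effects
        e = rng.normal(0.0, np.sqrt(sig2), (m, n))   # subject-level errors
        means.append((b[:, None] + e).mean(axis=1))  # stage 1: cluster means
    # Stage 2: ordinary two-sample t-test on the cluster means.
    _, p = stats.ttest_ind(means[0], means[1])
    rej += p < alpha

rate = rej / sims
print(rate)  # close to the nominal 0.05 for balanced clusters
```

With unequal cluster sizes the simple cluster-means test loses this exactness, which is where the weighting schemes compared in the paper come in.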

  5. Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.

    PubMed

    Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi

    2015-04-22

    Genetic studies of complex traits have uncovered only a small number of risk markers, explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects. Most existing methods also typically do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure. If markers in a group tend to have similar effects, proper usage of the group structure could improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency is further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.

  6. Workstations for people with disabilities: an example of a virtual reality approach

    PubMed Central

    Budziszewski, Paweł; Grabowski, Andrzej; Milanowicz, Marcin; Jankowski, Jarosław

    2016-01-01

    This article describes a method of adapting workstations for workers with motion disability using computer simulation and virtual reality (VR) techniques. A workstation for grinding spring faces was used as an example. It was adjusted for two people with a disabled right upper extremity. The study had two stages. In the first, a computer human model with a visualization of maximal arm reach and preferred workspace was used to develop a preliminary modification of a virtual workstation. In the second stage, an immersive VR environment was used to assess the virtual workstation and to add further modifications. All modifications were assessed by measuring the efficiency of work and the number of movements involved. The results of the study showed that a computer simulation could be used to determine whether a worker with a disability could access all important areas of a workstation and to propose necessary modifications. PMID:26651540

  7. The 20 GHz GaAs monolithic power amplifier module development

    NASA Technical Reports Server (NTRS)

    1984-01-01

    The development of a 20 GHz GaAs FET monolithic power amplifier module for advanced communication applications is described. Four-way power combining of four 0.6 W amplifier modules is used as the baseline approach. For this purpose, a monolithic four-way traveling-wave power divider/combiner was developed. Over a 20 GHz bandwidth (10 to 30 GHz), an insertion loss of no more than 1.2 dB was measured for a pair of back-to-back connected divider/combiners. Isolation between output ports is better than 20 dB, and VSWRs are better than 2.1:1. A distributed amplifier with six 300 micron gate width FETs and gate and drain transmission line tapers has been designed, fabricated, and evaluated for use as a 0.6 W module. This amplifier has achieved state-of-the-art results of 0.5 W output power with at least 4 dB gain across the entire 2 to 21 GHz frequency range. An output power of 2 W was achieved at a measurement frequency of 18 GHz when four distributed amplifiers were power-combined using a pair of traveling-wave divider/combiners. Another approach is the direct common-source cascading of three power FET stages. An output power of up to 2 W with 12 dB gain and 20% power-added efficiency has been achieved with this approach (at 17 GHz). The linear gain was 14 dB at 1 W output. The first two stages of the three-stage amplifier have achieved an output power of 1.6 W with 9 dB gain and 26% power-added efficiency at 16 GHz.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiangjiang; Li, Weixuan; Lin, Guang

    In decision-making for groundwater management and contamination remediation, it is important to accurately evaluate the probability of the occurrence of a failure event. For small failure probability analysis, a large number of model evaluations are needed in the Monte Carlo (MC) simulation, which is impractical for CPU-demanding models. One approach to alleviate the computational cost caused by the model evaluations is to construct a computationally inexpensive surrogate model instead. However, using a surrogate approximation can cause an extra error in the failure probability analysis. Moreover, constructing accurate surrogates is challenging for high-dimensional models, i.e., models containing many uncertain input parameters. To address these issues, we propose an efficient two-stage MC approach for small failure probability analysis in high-dimensional groundwater contaminant transport modeling. In the first stage, a low-dimensional representation of the original high-dimensional model is sought with Karhunen–Loève expansion and sliced inverse regression jointly, which allows for the easy construction of a surrogate with polynomial chaos expansion. Then a surrogate-based MC simulation is implemented. In the second stage, the small number of samples that are close to the failure boundary are re-evaluated with the original model, which corrects the bias introduced by the surrogate approximation. The proposed approach is tested with a numerical case study and is shown to be 100 times faster than the traditional MC approach in achieving the same level of estimation accuracy.
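The two-stage correction can be sketched in one dimension with a toy "expensive" model and a fitted polynomial surrogate (simple stand-ins for the groundwater model and its polynomial chaos surrogate; threshold and band width are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Stand-in for a CPU-demanding simulator; failure when output > 2.5."""
    return x + 0.1 * np.sin(5 * x)

# Fit a cheap polynomial surrogate from a few "expensive" runs.
xs = np.linspace(-4, 4, 30)
surrogate = np.poly1d(np.polyfit(xs, model(xs), deg=5))

# Stage 1: large MC sample evaluated on the surrogate only.
x = rng.normal(0.0, 1.0, 200_000)
g = surrogate(x)
threshold, band = 2.5, 0.2

# Stage 2: re-run the true model only for samples near the failure
# boundary, correcting the surrogate's bias exactly where it matters.
near = np.abs(g - threshold) < band
g_corr = g.copy()
g_corr[near] = model(x[near])

p_fail = np.mean(g_corr > threshold)
print(p_fail, near.mean())  # small failure probability, few true-model calls
```

Only the small fraction of samples inside the band triggers a true-model call, which is the source of the speedup the abstract reports.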

  9. Design, construction and operation of a new filter approach for treatment of surface waters in Southeast Asia

    NASA Astrophysics Data System (ADS)

    Frankel, R. J.

    1981-05-01

    A simple, inexpensive, and efficient method of water treatment for rural communities in Southeast Asia was developed using local materials as filter media. The filter utilizes coconut fiber and burnt rice husks in a two-stage filtering process designed as a gravity-fed system without the need for backwashing, and in most cases eliminates the need for any chemicals. The first-stage filter, with coconut fiber, acts essentially as a substitute for the coagulation and sedimentation phases of conventional water-treatment plants. The second-stage filter, using burnt rice husks, is similar to slow sand filtration with the additional benefit of taste, color, and odor removal through the adsorption properties of the activated carbon in the medium. This paper reports on the design, construction costs, and operating results of several village-size units in Thailand and the Philippines.

  10. Material efficiency studies for a Compton camera designed to measure characteristic prompt gamma rays emitted during proton beam radiotherapy

    PubMed Central

    Robertson, Daniel; Polf, Jerimy C; Peterson, Steve W; Gillin, Michael T; Beddar, Sam

    2011-01-01

    Prompt gamma rays emitted from biological tissues during proton irradiation carry dosimetric and spectroscopic information that can assist with treatment verification and provide an indication of the biological response of the irradiated tissues. Compton cameras are capable of determining the origin and energy of gamma rays. However, prompt gamma monitoring during proton therapy requires new Compton camera designs that perform well at the high gamma energies produced when tissues are bombarded with therapeutic protons. In this study we optimize the materials and geometry of a three-stage Compton camera for prompt gamma detection and calculate the theoretical efficiency of such a detector. The materials evaluated in this study include germanium, bismuth germanate (BGO), NaI, xenon, silicon and lanthanum bromide (LaBr3). For each material, the dimensions of each detector stage were optimized to produce the maximum number of relevant interactions. These results were used to predict the efficiency of various multi-material cameras. The theoretical detection efficiencies of the most promising multi-material cameras were then calculated for the photons emitted from a tissue-equivalent phantom irradiated by therapeutic proton beams ranging from 50 to 250 MeV. The optimized detector stages had a lateral extent of 10 × 10 cm2 with the thickness of the initial two stages dependent on the detector material. The thickness of the third stage was fixed at 10 cm regardless of material. The most efficient single-material cameras were composed of germanium (3 cm) and BGO (2.5 cm). These cameras exhibited efficiencies of 1.15 × 10−4 and 9.58 × 10−5 per incident proton, respectively. The most efficient multi-material camera design consisted of two initial stages of germanium (3 cm) and a final stage of BGO, resulting in a theoretical efficiency of 1.26 × 10−4 per incident proton. PMID:21508442
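The reconstruction a Compton camera performs rests on the Compton scattering formula, which links the energy deposited in a stage to the scattering angle. A small sketch, using a line near 4.44 MeV as an example of a characteristic prompt gamma energy (the specific numbers are illustrative):

```python
import numpy as np

MEC2 = 0.511  # electron rest energy, MeV

def scattered_energy(E, theta):
    """Photon energy after Compton scattering through angle theta (rad)."""
    return E / (1.0 + (E / MEC2) * (1.0 - np.cos(theta)))

def cone_angle(E, E_dep):
    """Recover the scattering angle from the energy E_dep left in a stage,
    as a Compton camera's event reconstruction does."""
    E_sc = E - E_dep  # energy carried on by the scattered photon
    cos_t = 1.0 - MEC2 * (1.0 / E_sc - 1.0 / E)
    return np.arccos(cos_t)

E = 4.44                       # MeV, example prompt gamma line
theta = np.deg2rad(30.0)
E_sc = scattered_energy(E, theta)
print(E_sc, np.rad2deg(cone_angle(E, E - E_sc)))  # recovers 30 degrees
```

At these multi-MeV energies the deposited fraction per interaction is large, which is why stage thicknesses and materials have to be re-optimized relative to cameras built for lower-energy gamma rays.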

  11. Accounting for selection and correlation in the analysis of two-stage genome-wide association studies.

    PubMed

    Robertson, David S; Prevost, A Toby; Bowden, Jack

    2016-10-01

    The problem of selection bias has long been recognized in the analysis of two-stage trials, where promising candidates are selected in stage 1 for confirmatory analysis in stage 2. To efficiently correct for bias, uniformly minimum variance conditionally unbiased estimators (UMVCUEs) have been proposed for a wide variety of trial settings, but where the population parameter estimates are assumed to be independent. We relax this assumption and derive the UMVCUE in the multivariate normal setting with an arbitrary known covariance structure. One area of application is the estimation of odds ratios (ORs) when combining a genome-wide scan with a replication study. Our framework explicitly accounts for correlated single nucleotide polymorphisms, as might occur due to linkage disequilibrium. We illustrate our approach on the measurement of the association between 11 genetic variants and the risk of Crohn's disease, as reported in Parkes and others (2007. Sequence variants in the autophagy gene IRGM and multiple other replicating loci contribute to Crohn's disease susceptibility. Nat. Gen. 39: (7), 830-832.), and show that the estimated ORs can vary substantially if both selection and correlation are taken into account. © The Author 2016. Published by Oxford University Press.
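The selection bias that motivates the UMVCUE shows up clearly in a toy simulation: the stage-1 estimate of the selected (most promising) marker is inflated, while an independent stage-2 replication is not. This sketch demonstrates the bias only; it does not implement the UMVCUE itself, and all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(7)
sims, n_snps, true_beta, se = 3000, 200, 0.10, 0.05  # illustrative values

bias1, bias2 = [], []
for _ in range(sims):
    # Stage 1: noisy effect estimates for many markers sharing one true effect.
    b1 = rng.normal(true_beta, se, n_snps)
    top = b1.argmax()                 # select the most promising marker
    # Stage 2: independent replication of the selected marker only.
    b2 = rng.normal(true_beta, se)
    bias1.append(b1[top] - true_beta)
    bias2.append(b2 - true_beta)

print(np.mean(bias1), np.mean(bias2))  # stage 1 inflated; stage 2 ~unbiased
```

Correlation between markers (linkage disequilibrium) changes how severe this "winner's curse" is, which is why the paper's estimator accounts for an arbitrary known covariance structure.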

  12. Two-Stage Bayesian Model Averaging in Endogenous Variable Models*

    PubMed Central

    Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.

    2013-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
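The classic 2SLS estimator that 2SBMA extends can be sketched with plain linear algebra on simulated data (one endogenous regressor, one instrument; all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
z = rng.normal(size=n)                  # instrument
u = rng.normal(size=n)                  # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)    # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)    # true coefficient is 2.0

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# OLS: biased, because x is correlated with the error through u.
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# 2SLS: stage 1 projects x onto the instrument; stage 2 regresses y
# on the fitted values.
Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
b_2sls = np.linalg.lstsq(Xhat, y, rcond=None)[0]

print(b_ols[1], b_2sls[1])  # OLS drifts away from 2.0; 2SLS recovers it
```

2SBMA keeps this two-stage structure but averages over candidate instrument and covariate sets instead of committing to one specification.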

  13. High Voltage TAL Performance

    NASA Technical Reports Server (NTRS)

    Jacobson, David T.; Jankovsky, Robert S.; Rawlin, Vincent K.; Manzella, David H.

    2001-01-01

    The performance of a two-stage, anode layer Hall thruster was evaluated. Experiments were conducted in single and two-stage configurations. In single-stage configuration, the thruster was operated with discharge voltages ranging from 300 to 1700 V. Discharge specific impulses ranged from 1630 to 4140 sec. Thruster investigations were conducted with input power ranging from 1 to 8.7 kW, corresponding to power throttling of nearly 9:1. An extensive two-stage performance map was generated. Data taken with total voltage (sum of discharge and accelerating voltage) constant revealed a decrease in thruster efficiency as the discharge voltage was increased. Anode specific impulse values were comparable in the single and two-stage configurations, showing no strong advantage for two-stage operation.

  14. Thermal modelling approaches to enable mitigation measures implementation for salmonid gravel stages in hydropeaking rivers

    NASA Astrophysics Data System (ADS)

    Casas-Mulet, R.; Alfredsen, K. T.

    2016-12-01

    The dewatering of salmon spawning redds due to hydropeaking operations can cause mortality at early life stages, with a higher impact on the alevin stage, which tolerates dewatering less well than the eggs. Targeted flow-related mitigation measures can reduce such mortality, but it is essential to understand how hydropeaking changes thermal regimes in rivers and may impact embryo development; only then can optimal measures be implemented at the right development stage. We present a set of experimental approaches and modelling tools for the estimation of hatch and swim-up dates based on water temperature data in the river Lundesokna (Norway). We identified critical periods for gravel-stage survival and, by comparing hydropeaking against unregulated thermal and hydrological regimes, established potential flow-release measures to minimise mortality. Modelling outcomes were then used to assess the cost-efficiency of each measure. The combination of modelling tools used in this study was overall satisfactory, and their application can be useful especially in systems where little field data is available. Targeted measures built on well-informed modelling approaches can be pre-tested for their efficiency in mitigating dewatering effects against the hydropower system's capacity to release or conserve water for power production. Overall, environmental flow releases targeting specific ecological objectives can provide more cost-effective options than conventional operational rules complying with general legislation.
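Hatch and swim-up timing from water temperature is commonly estimated with cumulative degree-day models. A minimal sketch of that idea (the thresholds and the temperature series below are assumed illustrative values, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic daily mean water temperatures (deg C) from spawning onward.
temps = np.clip(rng.normal(4.0, 1.5, 365), 0.0, None)

HATCH_DD = 480.0    # cumulative degree-days to hatch (assumed)
SWIMUP_DD = 900.0   # cumulative degree-days to swim-up (assumed)

cum = np.cumsum(temps)
hatch_day = int(np.searchsorted(cum, HATCH_DD)) + 1
swimup_day = int(np.searchsorted(cum, SWIMUP_DD)) + 1
print(hatch_day, swimup_day)
```

Because hydropeaking alters the daily temperature series itself, the same accumulation run on regulated vs unregulated temperatures shifts the predicted hatch and swim-up windows, which is what pins down when flow-release measures are needed.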

  15. Efficiency of primary care in rural Burkina Faso. A two-stage DEA analysis

    PubMed Central

    2011-01-01

    Background Providing health care services in Africa is hampered by severe scarcity of personnel, medical supplies and financial funds. Consequently, managers of health care institutions are called to measure and improve the efficiency of their facilities in order to provide the best possible services with their resources. However, very little is known about the efficiency of health care facilities in Africa, and instruments of performance measurement are hardly applied in this context. Objective This study determines the relative efficiency of primary care facilities in Nouna, a rural health district in Burkina Faso. Furthermore, it analyses the factors influencing the efficiency of these institutions. Methodology We apply a two-stage Data Envelopment Analysis (DEA) based on data from a comprehensive provider and household information system. In the first stage, the relative efficiency of each institution is calculated by a traditional DEA model. In the second stage, we identify the reasons for being inefficient by regression techniques. Results The DEA projections suggest that inefficiency is mainly a result of poor utilization of health care facilities, as they were either too big or the demand was too low. Regression results showed that distance is an important factor influencing the efficiency of a health care institution. Conclusions Compared to the findings of existing one-stage DEA analyses of health facilities in Africa, the share of relatively efficient units is slightly higher. The difference might be explained by a rather homogenous structure of the primary care facilities in the Burkina Faso sample. The study also indicates that improving the accessibility of primary care facilities will have a major impact on the efficiency of these institutions. Thus, health decision-makers are called to overcome the demand-side barriers in accessing health care. PMID:22828358
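The first-stage DEA scores can be computed as one small linear program per facility. The sketch below implements a basic input-oriented CCR model with made-up facility data (the study's actual inputs, outputs, and model variant may differ):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr(X, Y):
    """Input-oriented CCR efficiency score for each decision-making unit.
    X: (n_units, n_inputs), Y: (n_units, n_outputs)."""
    n, nin, nout = len(X), X.shape[1], Y.shape[1]
    scores = []
    for o in range(n):
        # Variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
        c = np.zeros(n + 1)
        c[0] = 1.0
        A_ub = np.zeros((nin + nout, n + 1))
        b_ub = np.zeros(nin + nout)
        A_ub[:nin, 0] = -X[o]        # inputs: sum_j lam_j x_j <= theta * x_o
        A_ub[:nin, 1:] = X.T
        A_ub[nin:, 1:] = -Y.T        # outputs: sum_j lam_j y_j >= y_o
        b_ub[nin:] = -Y[o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

# Hypothetical facilities: inputs (staff, budget), one output (visits).
X = np.array([[5.0, 14.0], [8.0, 15.0], [7.0, 12.0], [9.0, 30.0]])
Y = np.array([[200.0], [150.0], [210.0], [180.0]])
scores = dea_ccr(X, Y)
print(scores.round(3))  # units on the efficient frontier score 1.0
```

In the second stage, these scores (or their inefficiency counterparts) become the dependent variable in a regression on explanatory factors such as distance.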

  16. [Four-stage gradual expanding approach of problem based learning in otorhinolaryngology].

    PubMed

    Kong, Weijia; Wang, Yanjun; Yue, Jianxin; Chen, Jianjun; Peng, Yixiang; Zhang, Sulin; Zhang, Xiaomeng

    2008-08-01

    The aim of the study is to address the shortcomings of PBL, such as being time-consuming and abstract and lacking clinical practice, and to introduce PBL into the teaching of otorhinolaryngology. By improving the internationally classic PBL teaching model, we put forward a "four-stage gradual expanding approach of PBL" and establish a "four-stage gradual expanding approach of PBL in otorhinolaryngology". Through the four stages of watching PBL, simulation PBL, internship PBL, and practice PBL, we accomplished the organic integration of theory teaching and clinical practice. This teaching method is better adapted to the teaching of otorhinolaryngology; it can help medical students establish a holistic concept of medicine and can stimulate them to form good habits of self-regulated and life-long learning.

  17. Validation, Optimization and Simulation of a Solar Thermoelectric Generator Model

    NASA Astrophysics Data System (ADS)

    Madkhali, Hadi Ali; Hamil, Ali; Lee, HoSung

    2017-12-01

    This study explores thermoelectrics as a viable option for small-scale solar thermal applications. Thermoelectric technology is based on the Seebeck effect, which states that a voltage is induced when a temperature gradient is applied to the junctions of two differing materials. This research proposes to analyze, validate, simulate, and optimize a prototype solar thermoelectric generator (STEG) model in order to increase efficiency. The intent is to further develop STEGs as a viable and productive energy source that limits pollution and reduces the cost of energy production. An empirical study (Kraemer et al. in Nat Mater 10:532, 2011) on the solar thermoelectric generator reported a high efficiency performance of 4.6%. The system had a vacuum glass enclosure, a flat panel (absorber), thermoelectric generator and water circulation for the cold side. The theoretical and numerical approach of this current study validated the experimental results from Kraemer's study to a high degree. The numerical simulation process utilizes a two-stage approach in ANSYS software for Fluent and Thermal-Electric Systems. The solar load model technique uses solar radiation under AM 1.5G conditions in Fluent. This analytical model applies Dr. Ho Sung Lee's theory of optimal design to improve the performance of the STEG system by using dimensionless parameters. Applying this theory, using two cover glasses and radiation shields, the STEG model can achieve a maximum efficiency of 7%.

  18. Thermodynamic analysis and economical evaluation of two 310-80 K pre-cooling stage configurations for helium refrigeration and liquefaction cycle

    NASA Astrophysics Data System (ADS)

    Zhu, Z. G.; Zhuang, M.; Jiang, Q. F.; Zhang, Q. Y.; Feng, H. S.

    2017-12-01

    In the 310-80 K pre-cooling stage, the temperature of the high-pressure (HP) helium stream is reduced to about 80 K; nearly 73% of the enthalpy drop from room temperature to 4.5 K occurs over this range. Apart from the most common liquid-nitrogen pre-cooling, another 310-80 K pre-cooling configuration using a turbine is employed in some helium cryoplants. In this paper, the thermodynamic and economic performance of these two kinds of 310-80 K pre-cooling stage configurations is studied at different operating conditions, taking discharge pressure, turbine isentropic efficiency, and liquefaction rate as independent parameters. The exergy efficiency, total heat-exchanger UA, and operating cost of the two configurations are computed. This work provides a reference for choosing the 310-80 K pre-cooling stage configuration during design.
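    The quoted enthalpy-drop fraction follows from helium's nearly constant specific heat. A sketch under the ideal-gas, constant-cp assumption (cp ≈ 5.193 kJ/kg·K, ambient taken as 310 K; these are textbook values, not figures from the paper), which also gives the minimum reversible work to pre-cool the stream:

```python
import math

CP_HE = 5193.0  # J/(kg K); helium treated as an ideal gas with constant cp

def enthalpy_drop_fraction(t_from, t_to, t_final):
    """Fraction of the t_from -> t_final enthalpy drop that has occurred by t_to."""
    return (t_from - t_to) / (t_from - t_final)

def min_precool_work(t0, t):
    """Reversible (exergy) work per kg to cool a stream from ambient t0 to t."""
    return CP_HE * ((t - t0) + t0 * math.log(t0 / t))

frac = enthalpy_drop_fraction(310.0, 80.0, 4.5)   # ~0.75, near the quoted ~73%
w = min_precool_work(310.0, 80.0)                 # roughly 1 MJ per kg of helium
```

The small gap between the constant-cp estimate (~75%) and the quoted ~73% reflects helium's real-gas behavior near 4.5 K.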

  19. High altitude airship configuration and power technology and method for operation of same

    NASA Technical Reports Server (NTRS)

    Choi, Sang H. (Inventor); Elliott, Jr., James R. (Inventor); King, Glen C. (Inventor); Park, Yeonjoon (Inventor); Kim, Jae-Woo (Inventor); Chu, Sang-Hyon (Inventor)

    2011-01-01

    A new High Altitude Airship (HAA) capable of various extended applications and mission scenarios utilizing inventive onboard energy harvesting and power distribution systems. The power technology comprises an advanced thermoelectric (ATE) thermal energy conversion system. The high efficiency of multiple stages of ATE materials in a tandem mode, each suited for best performance within a particular temperature range, permits the ATE system to generate a large quantity of harvested energy for the extended mission scenarios. With a figure of merit of 5, the cascaded efficiency of the three-stage ATE system exceeds 60 percent.
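    The cascaded figure can be illustrated with the usual series-stage combination: when each stage converts a fraction of the heat rejected by the stage above it, the overall efficiency is one minus the product of the per-stage inefficiencies. A sketch with assumed per-stage values (the patent abstract does not report them):

```python
def cascaded_efficiency(stage_etas):
    """Overall efficiency of heat-engine stages in thermal series: each stage
    converts part of the heat rejected by the previous stage."""
    remaining = 1.0
    for eta in stage_etas:
        remaining *= (1.0 - eta)   # fraction of heat still unconverted
    return 1.0 - remaining

# Three stages at ~26% apiece already approach the 60% figure cited above.
eta_total = cascaded_efficiency([0.26, 0.26, 0.26])
```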

  20. Design and performance of a 427-meter-per-second-tip-speed two-stage fan having a 2.40 pressure ratio

    NASA Technical Reports Server (NTRS)

    Cunnan, W. S.; Stevans, W.; Urasek, D. C.

    1978-01-01

    The aerodynamic design and the overall and blade-element performance of a 427-meter-per-second-tip-speed two-stage fan, designed with axially spaced blade rows to reduce noise transmitted upstream of the fan, are presented. At design speed the highest recorded adiabatic efficiency was 0.796 at a pressure ratio of 2.30. Peak efficiency was not established at design speed because a damper failure terminated testing prematurely. The overall efficiencies at 60 and 80 percent of design speed peaked at approximately 0.83.

  1. A dual-stage sodium thermal electrochemical converter (Na-TEC)

    NASA Astrophysics Data System (ADS)

    Limia, Alexander; Ha, Jong Min; Kottke, Peter; Gunawan, Andrey; Fedorov, Andrei G.; Lee, Seung Woo; Yee, Shannon K.

    2017-12-01

    The sodium thermal electrochemical converter (Na-TEC) is a heat engine that generates electricity through the isothermal expansion of sodium ions. The Na-TEC is a closed system that can theoretically achieve conversion efficiencies above 45% when operating between thermal reservoirs at 1150 K and 550 K. However, thermal designs have confined previous single-stage devices to thermal efficiencies below 20%. To mitigate some of these limitations, we consider dividing the isothermal expansion into two stages: one at the evaporator temperature (1150 K) and another at an intermediate temperature (650 K-1050 K). This dual-stage Na-TEC takes advantage of regeneration and reheating, and could be amenable to better thermal management. Herein, we demonstrate how the dual-stage device can improve the efficiency by up to 8 percentage points over the best performing single-stage device. We also establish an application regime map for the single- and dual-stage Na-TEC in terms of the power density and the total thermal parasitic loss. Generally, a single-stage Na-TEC should be used for applications requiring high power densities, whereas a dual-stage Na-TEC should be used for applications requiring high efficiency.

  2. Gene delivery through the use of a hyaluronate-associated intracellularly degradable cross-linked polyethyleneimine

    PubMed Central

    Xu, Peisheng; Quick, Griffin; Yeo, Yoon

    2009-01-01

    For a non-viral gene delivery system to be clinically effective, it should be non-toxic, compatible with biological components, and highly efficient in gene transfection. With this goal in mind, we investigated the gene delivery efficiency of a ternary complex consisting of DNA, an intracellularly degradable polycation, and sodium hyaluronate (DPH complex). Here, we report that the DPH ternary complex achieved significantly higher transfection efficiency than other polymer systems, especially in the presence of serum. The high transfection efficiency and serum tolerance of DPH are attributed to a unique interplay between the cross-linked polyethyleneimine (CLPEI) and the hyaluronate (HA), which leads to (i) improved stability of DNA in the extracellular environment and at the early stage of intracellular trafficking and (ii) timely dissociation of the DNA-polymer complex. This study reinforces findings of earlier studies that emphasized each step as a bottleneck for efficient gene delivery; yet, it is the first to show that it is possible to overcome these obstacles simultaneously by taking advantage of two distinctive approaches. PMID:19631979

  3. Gyrotron multistage depressed collector based on E × B drift concept using azimuthal electric field. I. Basic design

    NASA Astrophysics Data System (ADS)

    Wu, Chuanren; Pagonakis, Ioannis Gr.; Avramidis, Konstantinos A.; Gantenbein, Gerd; Illy, Stefan; Thumm, Manfred; Jelonnek, John

    2018-03-01

    Multistage Depressed Collectors (MDCs) are widely used in vacuum tubes to regain energy from the depleted electron beam. However, the design of an MDC for gyrotrons, especially for those deployed in fusion experiments and future power plants, is not trivial. Since gyrotrons require relatively high magnetic fields, their hollow annular electron beam is magnetically confined in the collector. In such a moderate magnetic field, the MDC concept based on E × B drift is very promising. Several concrete design approaches based on the E × B concept have been proposed. This paper presents a realizable design of a two-stage depressed collector based on the E × B concept. A collector efficiency of 77% is achievable, which will be able to increase the total gyrotron efficiency from currently 50% to more than 60%. Secondary electrons reduce the efficiency only by 1%. Moreover, the collector efficiency is resilient to the change of beam current (i.e., space charge repulsion) and beam misalignment as well as magnetic field perturbations. Therefore, compared to other E × B conceptual designs, this design approach is promising and fairly feasible.

  4. SEMIPARAMETRIC ADDITIVE RISKS REGRESSION FOR TWO-STAGE DESIGN SURVIVAL STUDIES

    PubMed Central

    Li, Gang; Wu, Tong Tong

    2011-01-01

    In this article we study a semiparametric additive risks model (McKeague and Sasieni (1994)) for two-stage design survival data where accurate information is available only on second stage subjects, a subset of the first stage study. We derive two-stage estimators by combining data from both stages. Large sample inferences are developed. As a by-product, we also obtain asymptotic properties of the single stage estimators of McKeague and Sasieni (1994) when the semiparametric additive risks model is misspecified. The proposed two-stage estimators are shown to be asymptotically more efficient than the second stage estimators. They also demonstrate smaller bias and variance for finite samples. The developed methods are illustrated using small intestine cancer data from the SEER (Surveillance, Epidemiology, and End Results) Program. PMID:21931467

  5. Single-stage experimental evaluation of tandem-airfoil rotor and stator blading for compressors, part 8

    NASA Technical Reports Server (NTRS)

    Brent, J. A.; Clemmons, D. R.

    1974-01-01

    An experimental investigation was conducted with a 0.8 hub/tip ratio, single-stage, axial-flow compressor to determine the potential of tandem-airfoil blading for improving the efficiency and stable operating range of compressor stages. The investigation included testing of a baseline stage with single-airfoil blading and two tandem-blade stages. The overall performance of the baseline stage and the tandem-blade stage with a 20-80% loading split was considerably below the design prediction. The other tandem-blade stage, which had a rotor with a 50-50% loading split, came within 4.5% of the design pressure rise (delta P(bar)/P(bar) sub 1) and matched the design stage efficiency. The baseline stage with single-airfoil blading, which was designed to account for the actual rotor inlet velocity profile and the effects of axial velocity ratio and secondary flow, achieved the design predicted performance. The corresponding tandem-blade stage (50-50% loading split in both blade rows) slightly exceeded the design pressure rise but was 1.5 percentage points low in efficiency. The tandem rotors tested during both phases demonstrated higher pressure rise and efficiency than the corresponding single-airfoil rotor with identical inlet and exit airfoil angles.

  6. Communication: An efficient approach to compute state-specific nuclear gradients for a generic state-averaged multi-configuration self consistent field wavefunction.

    PubMed

    Granovsky, Alexander A

    2015-12-21

    We present a new, very efficient semi-numerical approach for the computation of state-specific nuclear gradients of a generic state-averaged multi-configuration self consistent field wavefunction. Our approach eliminates the costly coupled-perturbed multi-configuration Hartree-Fock step as well as the associated integral transformation stage. The details of the implementation within the Firefly quantum chemistry package are discussed and several sample applications are given. The new approach is routinely applicable to geometry optimization of molecular systems with 1000+ basis functions using a standalone multi-core workstation.

  8. Configuration of management accounting information system for multi-stage manufacturing

    NASA Astrophysics Data System (ADS)

    Mkrtychev, S. V.; Ochepovsky, A. V.; Enik, O. A.

    2018-05-01

    The article presents an approach to configuring a management accounting information system (MAIS) that provides automated calculation and registration of normative production losses in multi-stage manufacturing. The use of a MAIS with the proposed configuration at enterprises in the textile and woodworking industries made it possible to increase the accuracy of calculations of normative production losses and to organize their accounting with reference to individual stages of the technological process. Thus, highly efficient control of multi-stage manufacturing is achieved.

  9. Scalable pumping approach for extracting the maximum TEM(00) solar laser power.

    PubMed

    Liang, Dawei; Almeida, Joana; Vistas, Cláudia R

    2014-10-20

    A scalable TEM(00) solar laser pumping approach is composed of four pairs of first-stage Fresnel lens-folding mirror collectors; four fused-silica secondary concentrators with light guides of rectangular cross section for radiation homogenization; and four hollow two-dimensional compound parabolic concentrators that further concentrate the uniform radiation from the light guides onto a 3 mm diameter, 76 mm length Nd:YAG rod within four V-shaped pumping cavities. An asymmetric resonator ensures efficient large-mode matching between the pump light and the oscillating laser light. A TEM(00) laser power of 59.1 W is calculated by ZEMAX and LASCAD numerical analysis, representing a 20-fold improvement in the brightness figure of merit.

  10. Cold-air investigation of a 4 1/2 stage turbine with stage-loading factor of 4.66 and high specific work output. 2: Stage group performance

    NASA Technical Reports Server (NTRS)

    Whitney, W. J.; Behning, F. P.; Moffitt, T. P.; Hotz, G. M.

    1980-01-01

    The stage group performance of a 4 1/2 stage turbine with an average stage loading factor of 4.66 and high specific work output was determined in cold air at design equivalent speed. The four-stage turbine configuration produced design equivalent work output with an efficiency of 0.856, a barely discernible difference from the 0.855 obtained for the complete 4 1/2 stage turbine in a previous investigation. The turbine design procedure embodied the following features: (1) controlled vortex flow, (2) tailored radial work distribution, and (3) control of the location of the boundary-layer transition point on the airfoil suction surface. The efficiency forecast for the 4 1/2 stage turbine was 0.886, and the value predicted using a reference method was 0.862. The stage group performance results were used to determine the individual stage efficiencies at the condition where design 4 1/2 stage work output was obtained. The efficiencies of stages one and four were about 0.020 lower than the predicted value, that of stage two was 0.014 lower, and that of stage three was about equal to the predicted value. Thus all the stages operated reasonably close to their expected performance levels, and the overall (4 1/2 stage) performance was not degraded by any particularly inefficient component.

  11. Nutrients removal from undiluted cattle farm wastewater by the two-stage process of microalgae-based wastewater treatment.

    PubMed

    Lv, Junping; Liu, Yang; Feng, Jia; Liu, Qi; Nan, Fangru; Xie, Shulian

    2018-05-24

    Chlorella vulgaris was selected from five freshwater microalgal strains of Chlorophyta and showed good potential for nutrients removal from undiluted cattle farm wastewater. By the end of treatment, 62.30%, 81.16% and 85.29% of chemical oxygen demand (COD), ammonium (NH4+-N) and total phosphorus (TP) were removed. Two two-stage processes were then established to enhance nutrients removal efficiency so as to meet the discharge standards of China. Process A was biological treatment via C. vulgaris followed by a second biological treatment via C. vulgaris; process B was biological treatment via C. vulgaris followed by activated carbon adsorption. After 3-5 d of treatment via the two processes, the removal efficiencies of COD, NH4+-N and TP were 91.24%-92.17%, 83.16%-94.27% and 90.98%-94.41%, respectively. The integrated two-stage process could strengthen nutrients removal from undiluted cattle farm wastewater with high organic substance and nitrogen concentrations. Copyright © 2018 Elsevier Ltd. All rights reserved.
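    Sequential removal stages combine multiplicatively on the remaining load, which is how a 62.30% first-stage COD removal can end up above 91% overall. A sketch of that arithmetic (the second-stage value below is back-calculated for illustration, not reported in the abstract):

```python
def combined_removal(r1, r2):
    """Overall removal fraction of two treatment stages applied in series."""
    return 1.0 - (1.0 - r1) * (1.0 - r2)

def required_second_stage(r1, target):
    """Second-stage removal needed to reach an overall target after a first stage r1."""
    return 1.0 - (1.0 - target) / (1.0 - r1)

# For the COD numbers above: ~77% removal in stage two closes the gap to 91.24%.
r2 = required_second_stage(0.6230, 0.9124)
```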

  12. Bronchoscopy with endobronchial ultrasound guided transbronchial needle aspiration vs. transthoracic needle aspiration in lung cancer diagnosis and staging.

    PubMed

    Munoz, Mark L; Lechtzin, Noah; Li, Qing Kay; Wang, KoPen; Yarmus, Lonny B; Lee, Hans J; Feller-Kopman, David J

    2017-07-01

    In evaluating patients with suspected lung cancer, it is important not only to obtain a tissue diagnosis, but also to obtain enough tissue for both histologic and molecular analysis in order to appropriately stage the patient with a safe and efficient strategy. The diagnostic approach may often depend on local resources and practice patterns rather than current guidelines. We describe lung cancer staging at two large academic medical centers to identify the impact that different procedural approaches have on patient outcomes. We conducted a retrospective cohort study of all patients undergoing a lung cancer diagnostic evaluation at two multidisciplinary centers during a 1-year period. With complication rates and the need for multiple biopsies as our primary outcomes, we developed a multivariate regression model to determine features associated with complications and with the need for multiple biopsies. Of 830 patients, 285 were diagnosed with lung cancer during the study period. Those staged at the institution without an endobronchial ultrasound (EBUS) program were more likely to require multiple biopsies (OR 3.62, 95% CI: 1.71-7.67, P=0.001) and to suffer complications associated with the diagnostic procedure (OR 10.2, 95% CI: 3.08-33.58, P<0.001). Initial staging with transthoracic needle aspiration (TTNA) and conventional bronchoscopy were associated with greater need for subsequent biopsies (OR 8.05 and 14.00, 95% CI: 3.43-18.87 and 5.17-37.86, respectively) and higher complication rates (OR 37.75 and 7.20, 95% CI: 10.33-137.96 and 1.36-37.98, respectively). Lung cancer evaluation at centers with a dedicated EBUS program results in fewer biopsies and complications than at multidisciplinary counterparts without an EBUS program.

  13. NASA/GE Energy Efficient Engine low pressure turbine scaled test vehicle performance report

    NASA Technical Reports Server (NTRS)

    Bridgeman, M. J.; Cherry, D. G.; Pedersen, J.

    1983-01-01

    The low pressure turbine for the NASA/General Electric Energy Efficient Engine is a highly loaded five-stage design featuring high outer wall slope, controlled vortex aerodynamics, low stage flow coefficient, and reduced clearances. An assessment of the performance of the LPT has been made based on a series of scaled air-turbine tests divided into two phases: Block 1 and Block 2. The transition duct and the first two stages of the turbine were evaluated during the Block 1 phase from March through August 1979. The full five-stage scale model, representing the final integrated core/low spool (ICLS) design and incorporating redesigns of stages 1 and 2 based on Block 1 data analysis, was tested as Block 2 in June through September 1981. Results from the scaled air-turbine tests, reviewed herein, indicate that the five-stage turbine designed for the ICLS application will attain an efficiency level of 91.5 percent at the Mach 0.8/10.67-km (35,000-ft), max-climb design point. This is relative to program goals of 91.1 percent for the ICLS and 91.7 percent for the flight propulsion system (FPS).

  14. An adaptive front tracking technique for three-dimensional transient flows

    NASA Astrophysics Data System (ADS)

    Galaktionov, O. S.; Anderson, P. D.; Peters, G. W. M.; van de Vosse, F. N.

    2000-01-01

    An adaptive technique, based on both surface stretching and surface curvature analysis, for tracking strongly deforming fluid volumes in three-dimensional flows is presented. The efficiency and accuracy of the technique are demonstrated for two- and three-dimensional flow simulations. For the two-dimensional test example, the results are compared with results obtained using a different tracking approach based on the advection of a passive scalar. Although both techniques find roughly the same structures, the resolution for the front tracking technique is much higher. In the three-dimensional test example, a spherical blob is tracked in a chaotic mixing flow. For this problem, the accuracy of the adaptive tracking is demonstrated by the volume conservation of the advected blob. Adaptive front tracking is suitable for simulation of the initial stages of fluid mixing, where the interfacial area can grow exponentially with time. The efficiency of the algorithm significantly benefits from parallelization of the code.
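    The stretching-based part of such adaptivity can be sketched in a few lines: walk the front, and wherever a segment has been stretched past a length threshold, insert a new marker point (real implementations also test curvature and repeat passes until no segment exceeds the threshold). A minimal 2D illustration under those simplifications, not the authors' algorithm:

```python
import numpy as np

def refine_front(points, max_len):
    """One refinement pass over a polyline front: insert a midpoint into every
    segment longer than max_len."""
    out = [points[0]]
    for a, b in zip(points[:-1], points[1:]):
        if np.linalg.norm(b - a) > max_len:
            out.append((a + b) / 2.0)  # curvature-aware placement would go here
        out.append(b)
    return np.array(out)

front = np.array([[0.0, 0.0], [1.0, 0.0], [1.2, 0.0]])
refined = refine_front(front, max_len=0.6)   # only the stretched segment is split
```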

  15. Performance Evaluation of Reduced-Chord Rotor Blading as Applied to J73 Two-Stage Turbine

    NASA Technical Reports Server (NTRS)

    Schum, Harold J.

    1957-01-01

    The multistage turbine from the J73 turbojet engine has previously been investigated with standard and with reduced-chord rotor blading in order to determine the individual performance characteristics of each configuration over a range of over-all pressure ratio and speed. Because both turbine configurations exhibited peak efficiencies of over 90 percent, and because both units had relatively wide efficient operating ranges, it was considered of interest to determine the performance of the first stage of the turbine as a separate component. Accordingly, the standard-bladed multistage turbine was modified by removing the second-stage rotor disk and stator and altering the flow passage so that the first stage of the unit could be operated independently. The modified single-stage turbine was then operated over a range of stage pressure ratio and speed. The single-stage turbine operated at a peak brake internal efficiency of over 90 percent at an over-all stage pressure ratio of 1.4 and at 90 percent of design equivalent speed. Furthermore, the unit operated at high efficiencies over a relatively wide operating range. When the single-stage results were compared with the multistage results at the design operating point, it was found that the first stage produced approximately half the total multistage-turbine work output.

  16. Technical and scale efficiency in public and private Irish nursing homes - a bootstrap DEA approach.

    PubMed

    Ni Luasa, Shiovan; Dineen, Declan; Zieba, Marta

    2016-10-27

    This article provides methodological and empirical insights into the estimation of technical efficiency in the nursing home sector. Focusing on long-stay care and using primary data, we examine technical and scale efficiency in 39 public and 73 private Irish nursing homes by applying an input-oriented data envelopment analysis (DEA). We employ robust bootstrap methods to validate our nonparametric DEA scores and to integrate the effects of potential determinants in estimating the efficiencies. Both the homogenous and two-stage double bootstrap procedures are used to obtain confidence intervals for the bias-corrected DEA scores. Importantly, the application of the double bootstrap approach affords true DEA technical efficiency scores after adjusting for the effects of ownership, size, case-mix, and other determinants such as location, and quality. Based on our DEA results for variable returns to scale technology, the average technical efficiency score is 62 %, and the mean scale efficiency is 88 %, with nearly all units operating on the increasing returns to scale part of the production frontier. Moreover, based on the double bootstrap results, Irish nursing homes are less technically efficient, and more scale efficient than the conventional DEA estimates suggest. Regarding the efficiency determinants, in terms of ownership, we find that private facilities are less efficient than the public units. Furthermore, the size of the nursing home has a positive effect, and this reinforces our finding that Irish homes produce at increasing returns to scale. Also, notably, we find that a tendency towards quality improvements can lead to poorer technical efficiency performance.
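    The input-oriented DEA program solved per nursing home is a small linear program: minimise the radial input contraction θ subject to envelopment of the observed inputs and outputs, with Σλ = 1 imposing variable returns to scale. A minimal sketch using `scipy` (the paper's bootstrap layers and determinant adjustments are omitted; the toy data are made up):

```python
import numpy as np
from scipy.optimize import linprog

def dea_vrs(X, Y):
    """Input-oriented, variable-returns-to-scale DEA efficiency scores.

    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns one theta per DMU.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for k in range(n):
        c = np.r_[1.0, np.zeros(n)]                    # minimise theta
        A_in = np.c_[-X[k].reshape(m, 1), X.T]         # X @ lam <= theta * x_k
        A_out = np.c_[np.zeros((s, 1)), -Y.T]          # Y @ lam >= y_k
        A_ub = np.r_[A_in, A_out]
        b_ub = np.r_[np.zeros(m), -Y[k]]
        A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)   # VRS: sum(lam) = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.fun)
    return np.array(scores)

# Toy frontier: the third unit is dominated by the second and scores 0.5.
eff = dea_vrs(np.array([[2.0], [4.0], [8.0]]),
              np.array([[1.0], [2.0], [2.0]]))
```

The double bootstrap of the paper would then resample from these scores and re-solve the same LP on each pseudo-sample to obtain bias-corrected estimates and confidence intervals.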

  17. Ku-band high efficiency GaAs MMIC power amplifiers

    NASA Technical Reports Server (NTRS)

    Tserng, H. Q.; Witkowski, L. C.; Wurtele, M.; Saunier, Paul

    1988-01-01

    The development of Ku-band high efficiency GaAs MMIC power amplifiers is examined. Three amplifier modules operating over the 13 to 15 GHz frequency range are to be developed. The first MMIC is a 1 W variable power amplifier (VPA) with 35 percent efficiency. On-chip digital gain control is to be provided. The second MMIC is a medium power amplifier (MPA) with an output power goal of 1 W and 40 percent power-added efficiency. The third MMIC is a high power amplifier (HPA) with a 4 W output power goal and 40 percent power-added efficiency. An output power of 0.36 W/mm with 49 percent efficiency was obtained on an ion-implanted single-gate MESFET at 15 GHz. On a dual-gate MESFET, an output power of 0.42 W/mm with 27 percent efficiency was obtained. A mask set was designed that includes single-stage, two-stage, and three-stage single-gate amplifiers. A single-stage 600-micron amplifier produced 0.4 W/mm output power with 40 percent efficiency at 14 GHz. A four-stage dual-gate amplifier generated 500 mW of output power with 20 dB gain at 17 GHz. A four-bit digital-to-analog converter with an output swing of -3 V to +1 V was designed and fabricated.
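    Power-added efficiency, the metric used throughout this abstract, is the RF power the stage adds divided by the DC power it consumes: PAE = (Pout − Pin)/PDC. A small sketch with illustrative numbers (the abstract does not report the DC powers, so the 2.475 W below is an assumed value chosen to hit the stated goal):

```python
def power_added_efficiency(p_out_w, gain_db, p_dc_w):
    """PAE = (Pout - Pin) / Pdc, with Pin inferred from the stage gain."""
    p_in_w = p_out_w / 10.0 ** (gain_db / 10.0)
    return (p_out_w - p_in_w) / p_dc_w

# Illustrative: a 1 W amplifier with 20 dB gain drawing 2.475 W of DC power
# meets the 40 percent PAE goal quoted for the MPA.
pae = power_added_efficiency(p_out_w=1.0, gain_db=20.0, p_dc_w=2.475)
```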

  18. Performance of Single-Stage Turbine of Mark 25 Torpedo Power Plant with Two Special Nozzles. III; Efficiency with Standard Rotor Blades

    NASA Technical Reports Server (NTRS)

    Schum, Harold J.; Whitney, Warren J.

    1949-01-01

    A Mark 25 torpedo power plant modified to operate as a single-stage turbine was investigated to determine the performance with two nozzle designs and a standard first-stage rotor having 0.40-inch blades with a 17° exit-air angle. Both nozzles had smaller port cross-sectional areas than those nozzles of similar design which were previously investigated. The performance of the two nozzles was compared on the basis of blade, rotor, and brake efficiencies as a function of blade-jet speed ratio for pressure ratios of 8, 15 (design), and 20. At pressure ratios of 15 and 20, the blade efficiency obtained with the nozzle having circular passages (K) was higher than that obtained with the nozzle having rectangular passages (J). At a pressure ratio of 8, the efficiencies obtained with the two nozzles were comparable for blade-jet speed ratios of less than 0.260. For blade-jet speed ratios exceeding this value, nozzle K yielded slightly higher efficiencies. The maximum blade efficiency of 0.569 was obtained with nozzle K at a pressure ratio of 8 and a blade-jet speed ratio of 0.295. At design speed and pressure ratio, nozzle K yielded a maximum blade efficiency of 0.534, an increase of 0.031 over that obtained with nozzle J. When the blade efficiencies of the two nozzles were compared with those of four other nozzles previously investigated, the maximum difference for the six nozzles with this rotor was 0.050. From this comparison, no specific effect of nozzle size or shape on over-all performance was discernible.
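    The blade-jet speed ratio used above is the blade speed divided by the ideal (spouting) velocity of the fully expanded jet, c0 = √(2·cp·Tin·(1 − PR^(−(γ−1)/γ))). A sketch with illustrative air-like gas properties and inlet conditions (assumed values, not the torpedo plant's actual gas or temperatures):

```python
import math

def spouting_velocity(cp, gamma, t_in_k, pressure_ratio):
    """Ideal jet velocity for a complete isentropic expansion through the nozzle."""
    return math.sqrt(2.0 * cp * t_in_k *
                     (1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)))

def blade_jet_speed_ratio(blade_speed, cp, gamma, t_in_k, pressure_ratio):
    return blade_speed / spouting_velocity(cp, gamma, t_in_k, pressure_ratio)

# Illustrative: at the design pressure ratio of 15 with assumed air-like values,
# a ~290 m/s blade speed gives a ratio near the 0.295 peak quoted above.
r = blade_jet_speed_ratio(290.0, cp=1005.0, gamma=1.4,
                          t_in_k=900.0, pressure_ratio=15.0)
```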

  19. Maximizing the performance of a multiple-stage variable-throat venturi scrubber for particle collection

    NASA Astrophysics Data System (ADS)

    Muir, D. M.; Akeredolu, F.

    The high collection efficiencies that are required nowadays to meet the stricter pollution control standards necessitate the use of high-energy scrubbers, such as the venturi scrubber, for the arrestment of fine particulate matter from exhaust gas streams. To achieve more energy-efficient particle collection, several venturi stages may be used in series. This paper is principally a theoretical investigation of the performance of a multiple-stage venturi scrubber, the main objective of the study being to establish the best venturi design configuration for any given set of operating conditions. A mathematical model is used to predict collection efficiency vs pressure drop relationships for particle sizes in the range 0.2-5.0 μm for one-, two-, three- and four-stage scrubbers. The theoretical predictions are borne out qualitatively by experimental work. The paper shows that the three-stage venturi produces the highest collection efficiencies over the normal operating range except for the collection of very fine particles at low pressure drops, when the single-stage venturi is best. The significant improvement in performance achieved by the three-stage venturi when compared with conventional single-stage operation increases as both the particle size and system pressure drop increase.

  20. Performance evaluation of an anaerobic/aerobic landfill-based digester using yard waste for energy and compost production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yazdani, Ramin, E-mail: ryazdani@sbcglobal.net; Civil and Environmental Engineering, University of California, One Shields Avenue, Ghausi Hall, Davis, CA 95616; Barlaz, Morton A., E-mail: barlaz@eos.ncsu.edu

    2012-05-15

    Highlights: • Biochemical methane potential decreased by 83% during the two-stage operation. • Net energy produced was 84.3 MWh, or 46 kWh per wet metric ton (Mg). • The average removal efficiency of volatile organic compounds (VOCs) was 96-99%. • The average removal efficiency of non-methane organic compounds (NMOCs) was 68-99%. • The two-stage batch digester proved to be simple to operate and cost-effective. - Abstract: The objective of this study was to evaluate a new alternative for yard waste management by constructing, operating and monitoring a landfill-based two-stage batch digester (anaerobic/aerobic) with the recovery of energy and compost. The system was initially operated under anaerobic conditions for 366 days, after which the yard waste was aerated for an additional 191 days. Off gas generated from the aerobic stage was treated by biofilters. Net energy recovery was 84.3 MWh, or 46 kWh per wet metric ton of waste (as received), and the biochemical methane potential of the treated waste decreased by 83% during the two-stage operation. The average removal efficiencies of volatile organic compounds and non-methane organic compounds in the biofilters were 96-99% and 68-99%, respectively.

  1. The Concert system - Compiler and runtime technology for efficient concurrent object-oriented programming

    NASA Technical Reports Server (NTRS)

    Chien, Andrew A.; Karamcheti, Vijay; Plevyak, John; Sahrawat, Deepak

    1993-01-01

    Concurrent object-oriented languages, particularly fine-grained approaches, reduce the difficulty of large scale concurrent programming by providing modularity through encapsulation while exposing large degrees of concurrency. Despite these programmability advantages, such languages have historically suffered from poor efficiency. This paper describes the Concert project whose goal is to develop portable, efficient implementations of fine-grained concurrent object-oriented languages. Our approach incorporates aggressive program analysis and program transformation with careful information management at every stage from the compiler to the runtime system. The paper discusses the basic elements of the Concert approach along with a description of the potential payoffs. Initial performance results and specific plans for system development are also detailed.

  2. Protein structure estimation from NMR data by matrix completion.

    PubMed

    Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing

    2017-09-01

    Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
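    The completion step can be sketched compactly. The paper uses an accelerated proximal gradient method on a nuclear-norm objective; the sketch below substitutes the simpler fixed-rank SVD-projection iteration (alternate between enforcing observed entries and projecting to low rank), which illustrates the same recover-from-partial-entries idea without claiming to be the authors' algorithm:

```python
import numpy as np

def complete_matrix(m_obs, mask, rank, iters=500):
    """Fill unobserved entries of a low-rank matrix by alternating projection:
    keep the observed entries, then project to the best fixed-rank approximation."""
    x = np.where(mask, m_obs, 0.0)
    for _ in range(iters):
        x = np.where(mask, m_obs, x)                  # enforce observed entries
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        x = (u[:, :rank] * s[:rank]) @ vt[:rank]      # best rank-r approximation
    return x

# Illustrative: recover a random rank-2 matrix from ~60% of its entries.
rng = np.random.default_rng(0)
truth = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
mask = rng.random((50, 50)) < 0.6
recovered = complete_matrix(truth, mask, rank=2)
```

For a protein, the completed squared-distance matrix would then be converted to 3D coordinates, e.g. by classical multidimensional scaling.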

  3. Single-stage experimental evaluation of tandem-airfoil rotor and stator blading for compressors. Part 2: Data and performance for stage A

    NASA Technical Reports Server (NTRS)

    Brent, J. A.

    1972-01-01

    Stage A, comprised of a conventional rotor and stator, was designed and tested to establish a performance baseline for comparison with the results of subsequent tests planned for two tandem-blade stages. The rotor had an inlet hub/tip ratio of 0.8 and a design tip velocity of 757 ft/sec. At design equivalent rotor speed, rotor A achieved a maximum adiabatic efficiency of 85.1 percent at a pressure ratio of 1.29. The stage maximum adiabatic efficiency was 78.6 percent at a pressure ratio of 1.27.

  4. Two-stage energy storage equalization system for lithium-ion battery pack

    NASA Astrophysics Data System (ADS)

    Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.

    2017-11-01

How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To address it, a two-stage energy storage equalization system, comprising a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, is proposed on the basis of bidirectional active equalization theory, with the objectives of consistent voltages across lithium-ion battery packs and across the cells inside each pack, assessed using the range method. Modeling analysis demonstrates that the voltage dispersion of lithium-ion battery packs and of cells inside packs can be kept within 2 percent during charging and discharging. Equalization time was 0.5 ms, shortening equalization time by 33.3 percent compared with the DC-DC converter alone. Therefore, the proposed two-stage lithium-ion battery equalization system can achieve maximum storage capacity across battery packs and the cells inside them, while the efficiency of energy storage is significantly improved.

  5. Doped-channel heterojunction structures for millimeter-wave discrete devices and MMICs

    NASA Technical Reports Server (NTRS)

    Saunier, P.; Kao, Y. C.; Khatibzadeh, A. M.; Tserng, H. Q.; Bradshaw, K.

    1989-01-01

    AlGaAs/InGaAs/GaAs-type heterostructures with one or two channels have been used to fabricate both discrete devices and monolithic amplifiers for millimeter-wave operation. The authors report that 0.25-micron x 50-micron discrete devices delivered a power density of 1 W/mm with 2.9-dB gain and 25 percent efficiency at 60 GHz. A 100-micron monolithic single-stage amplifier demonstrated a record 40 percent efficiency at 32 GHz, and a two-stage monolithic amplifier achieved a record 31.3 percent efficiency with 72-mW power and 13-dB gain at 32 GHz.

  6. Fueling of magnetically confined plasmas by single- and two-stage repeating pneumatic pellet injectors

    NASA Astrophysics Data System (ADS)

    Gouge, M. J.; Combs, S. K.; Foust, C. R.; Milora, S. L.

    Advanced plasma fueling systems for magnetic fusion confinement experiments are under development at Oak Ridge National Laboratory (ORNL). The general approach is that of producing and accelerating frozen hydrogenic pellets to speeds in the kilometer-per-second range using single shot and repetitive pneumatic (light-gas gun) pellet injectors. The millimeter-to-centimeter size pellets enter the plasma and continuously ablate because of the plasma electron heat flux, depositing fuel atoms along the pellet trajectory. This fueling method allows direct fueling in the interior of the hot plasma and is more efficient than the alternative method of injecting room temperature fuel gas at the wall of the plasma vacuum chamber. Single-stage pneumatic injectors based on the light-gas gun concept have provided hydrogenic fuel pellets in the speed range of 1 to 2 km/s in single-shot injector designs. Repetition rates up to 5 Hz have been demonstrated in repetitive injector designs. Future fusion reactor-scale devices may need higher pellet velocities because of the larger plasma size and higher plasma temperatures. Repetitive two-stage pneumatic injectors are under development at ORNL to provide long-pulse plasma fueling in the 3 to 5 km/s speed range. Recently, a repeating, two-stage light-gas gun achieved repetitive operation at 1 Hz with speeds in the range of 2 to 3 km/s.

  7. Technical efficiency of nursing homes: do five-star quality ratings matter?

    PubMed

    Dulal, Rajendra

    2017-02-28

This study investigates associations between five-star quality ratings and the technical efficiency of nursing homes. The sample consists of a balanced panel of 338 nursing homes in California from 2009 through 2013, analyzed with two-stage data envelopment analysis (DEA). The first stage applies an input-oriented, variable-returns-to-scale DEA model. The second stage uses a left-censored random-effects Tobit regression model. The five-star quality ratings (i.e., health inspections, quality measures, and staffing) available on the Nursing Home Compare website are divided into two categories: outcome and structural forms of quality. Results show that quality measures ratings and health inspection ratings, used as outcome forms of quality, are not associated with mean technical efficiency. These quality ratings do, however, affect the technical efficiency of a particular nursing home and hence alter the ranking of nursing homes based on efficiency scores. Staffing rating, categorized as a structural form of quality, is negatively associated with mean technical efficiency. These findings show that quality dimensions are associated with technical efficiency in different ways, suggesting that multiple dimensions of quality should be included in the efficiency analysis of nursing homes. They also suggest that patient care can be enhanced by investing more in improving care delivery rather than simply raising the number of staff per resident.
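The first-stage model mentioned above — input-oriented DEA under variable returns to scale — can be sketched as the standard BCC envelopment linear program. This is the generic textbook formulation, not the study's implementation; the three-DMU, one-input, one-output data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def vrs_efficiency(X, Y, o):
    """Input-oriented VRS (BCC) DEA efficiency of DMU o.
    X: (n, m) inputs and Y: (n, s) outputs for n DMUs.
    Solve: min theta  s.t.  sum_j lam_j x_j <= theta * x_o,
           sum_j lam_j y_j >= y_o,  sum_j lam_j = 1,  lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.r_[1.0, np.zeros(n)]                    # decision vars: [theta, lam]
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])   # inputs vs theta*x_o
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # outputs at least y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)   # VRS convexity constraint
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Three hypothetical DMUs with one input and one output.
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [5.0], [2.0]])
scores = [vrs_efficiency(X, Y, o) for o in range(3)]
# DMUs 0 and 1 lie on the frontier (score 1); DMU 2 could shrink its
# input to 2/3 of its current level and still produce its output.
```

In a second-stage analysis like the study's, these scores would then become the dependent variable of a censored Tobit regression on the quality ratings.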

  8. Robust network data envelopment analysis approach to evaluate the efficiency of regional electricity power networks under uncertainty.

    PubMed

    Fathollah Bayati, Mohsen; Sadjadi, Seyed Jafar

    2017-01-01

    In this paper, new Network Data Envelopment Analysis (NDEA) models are developed to evaluate the efficiency of regional electricity power networks. The primary objective of this paper is to consider perturbation in data and develop new NDEA models based on the adaptation of robust optimization methodology. Furthermore, in this paper, the efficiency of the entire networks of electricity power, involving generation, transmission and distribution stages is measured. While DEA has been widely used to evaluate the efficiency of the components of electricity power networks during the past two decades, there is no study to evaluate the efficiency of the electricity power networks as a whole. The proposed models are applied to evaluate the efficiency of 16 regional electricity power networks in Iran and the effect of data uncertainty is also investigated. The results are compared with the traditional network DEA and parametric SFA methods. Validity and verification of the proposed models are also investigated. The preliminary results indicate that the proposed models were more reliable than the traditional Network DEA model.

  10. Energy efficient engine high pressure turbine test hardware detailed design report

    NASA Technical Reports Server (NTRS)

    Halila, E. E.; Lenahan, D. T.; Thomas, T. T.

    1982-01-01

The high pressure turbine configuration for the Energy Efficient Engine is built around a two-stage design system. Moderate aerodynamic loading for both stages is used to achieve the high level of turbine efficiency. Flowpath components are designed for 18,000 hours of life, while the static and rotating structures are designed for 36,000 hours of engine operation. Both stages of turbine blades and vanes are air-cooled, incorporating advanced state-of-the-art cooling technology. Directionally solidified (DS) alloys are used for the blades and one stage of vanes, and an oxide dispersion strengthened (ODS) alloy is used for the Stage 1 nozzle airfoils. Ceramics are used as the material for the Stage 1 shroud. An active clearance control (ACC) system is used to control the blade-tip-to-shroud clearances for both stages. Fan air is used to impinge on the shroud casing support rings, thereby controlling the growth rate of the shroud. This procedure allows close clearance control while minimizing blade-tip-to-shroud rubs.

  11. High-power noise-like pulse generation using a 1.56-µm all-fiber laser system.

    PubMed

    Lin, Shih-Shian; Hwang, Sheng-Kwang; Liu, Jia-Ming

    2015-07-13

    We demonstrated an all-fiber, high-power noise-like pulse laser system at the 1.56-µm wavelength. A low-power noise-like pulse train generated by a ring oscillator was amplified using a two-stage amplifier, where the performance of the second-stage amplifier determined the final output power level. The optical intensity in the second-stage amplifier was managed well to avoid not only the excessive spectral broadening induced by nonlinearities but also any damage to the device. On the other hand, the power conversion efficiency of the amplifier was optimized through proper control of its pump wavelength. The pump wavelength determines the pump absorption and therefore the power conversion efficiency of the gain fiber. Through this approach, the average power of the noise-like pulse train was amplified considerably to an output of 13.1 W, resulting in a power conversion efficiency of 36.1% and a pulse energy of 0.85 µJ. To the best of our knowledge, these amplified pulses have the highest average power and pulse energy for noise-like pulses in the 1.56-µm wavelength region. As a result, the net gain in the cascaded amplifier reached 30 dB. With peak and pedestal widths of 168 fs and 61.3 ps, respectively, for the amplified pulses, the pedestal-to-peak intensity ratio of the autocorrelation trace remains at the value of 0.5 required for truly noise-like pulses.
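The reported figures can be cross-checked with a line of arithmetic; the repetition rate and seed power below are derived quantities, not values given in the abstract.

```python
P_avg = 13.1                      # W, amplified average power (from the abstract)
E_pulse = 0.85e-6                 # J, pulse energy (from the abstract)
f_rep = P_avg / E_pulse           # implied pulse repetition rate, ~15.4 MHz

gain_db = 30.0                    # net cascaded-amplifier gain (from the abstract)
gain_lin = 10 ** (gain_db / 10)   # 30 dB corresponds to a factor of 1000
P_seed = P_avg / gain_lin         # implied oscillator seed power, ~13 mW
```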

  12. Reducing Transaction Costs for Energy Efficiency Investments and Analysis of Economic Risk Associated With Building Performance Uncertainties: Small Buildings and Small Portfolios Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Langner, R.; Hendron, B.; Bonnema, E.

    2014-08-01

The small buildings and small portfolios (SBSP) sector faces a number of barriers that inhibit SBSP owners from adopting energy efficiency solutions. This pilot project focused on overcoming two of the largest barriers to financing energy efficiency in small buildings: disproportionately high transaction costs and unknown or unacceptable risk. Solutions to these barriers can often be at odds, because inexpensive turnkey solutions are often not sufficiently tailored to the unique circumstances of each building, reducing confidence that the expected energy savings will be achieved. To address these barriers, NREL worked with two innovative, forward-thinking lead partners, Michigan Saves and Energi, to develop technical solutions that provide a quick and easy process to encourage energy efficiency investments while managing risk. The pilot project was broken into two stages: the first stage focused on reducing transaction costs, and the second stage focused on reducing performance risk. In the first stage, NREL worked with the non-profit organization Michigan Saves to analyze the effects of 8 energy efficiency measures (EEMs) on 81 different baseline small office building models in Holland, Michigan (climate zone 5A). The results of this analysis (totaling over 30,000 cases) are summarized in a simple spreadsheet tool that enables users to easily sort through the results and find appropriate small office EEM packages that meet a particular energy savings threshold and are likely to be cost-effective.

  13. Blind source separation for ambulatory sleep recording

    PubMed Central

    Porée, Fabienne; Kachenoura, Amar; Gauvrit, Hervé; Morvan, Catherine; Carrault, Guy; Senhadji, Lotfi

    2006-01-01

This paper deals with the design of a new system for sleep staging in ambulatory conditions. Sleep recording is performed by means of five electrodes: two temporal, two frontal, and a reference. This configuration avoids the chin area, enhancing the quality of the muscular signal, and the hair region, for patient convenience. The EEG, EMG and EOG signals are separated using the Independent Component Analysis approach. The system is compared to a standard sleep analysis system using polysomnographic recordings of 14 patients. An overall concordance of 67.2% is achieved between the two systems. Based on the validation results and the computational efficiency, we recommend the clinical use of the proposed system in a commercial sleep analysis platform. PMID:16617618

  14. Paramagnetic capture mode magnetophoretic microseparator for high efficiency blood cell separations.

    PubMed

    Han, Ki-Ho; Frazier, A Bruno

    2006-02-01

This paper presents the characterization of continuous single-stage and three-stage cascade paramagnetic capture (PMC) mode magnetophoretic microseparators for high efficiency separation of red and white blood cells from diluted whole blood based on their native magnetic properties. The separation mechanism for both PMC microseparators is based on a high gradient magnetic separation (HGMS) method. This approach enables separation of blood cells without the use of additives such as magnetic beads. Experimental results for the single-stage PMC microseparator show that 91.1% of red blood cells were continuously separated from the sample at a volumetric flow rate of 5 µl/h. In addition, the three-stage cascade PMC microseparator continuously separated 93.5% of red blood cells and 97.4% of white blood cells from whole blood at a volumetric flow rate of 5 µl/h.

  15. Cycle analysis of planar SOFC power generation with serial connection of low and high temperature SOFCs

    NASA Astrophysics Data System (ADS)

    Araki, Takuto; Ohba, Takahiro; Takezawa, Shinya; Onda, Kazuo; Sakaki, Yoshinori

    Solid oxide fuel cells (SOFCs) can be composed of solid components for stable operation, and high power generation efficiency is obtained by using high temperature exhaust heat for fuel reforming and bottoming power generation by a gas turbine. Recently, low-temperature SOFCs, which run in the temperature range of around 600 °C or above and give high power generation efficiency, have been developed. On the other hand, a power generation system with multi-staged fuel cells has been proposed by the United States DOE to obtain high efficiency. In our present study, a power generation system consisting of two-staged SOFCs with serial connection of low and high temperature SOFCs was investigated. Overpotential data for the low-temperature SOFC used in this study are based on recently published data, while data for high-temperature SOFC are based on our previous study. The numerical results show that the power generation efficiency of the two-staged SOFCs is 50.3% and the total efficiency of power generation with gas turbine is 56.1% under standard operating conditions. These efficiencies are a little higher than those by high-temperature SOFC only.

  16. A WENO-Limited, ADER-DT, Finite-Volume Scheme for Efficient, Robust, and Communication-Avoiding Multi-Dimensional Transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norman, Matthew R

    2014-01-01

The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.

  17. Two-stage phase II oncology designs using short-term endpoints for early stopping.

    PubMed

    Kunz, Cornelia U; Wason, James Ms; Kieser, Meinhard

    2017-08-01

    Phase II oncology trials are conducted to evaluate whether the tumour activity of a new treatment is promising enough to warrant further investigation. The most commonly used approach in this context is a two-stage single-arm design with binary endpoint. As for all designs with interim analysis, its efficiency strongly depends on the relation between recruitment rate and follow-up time required to measure the patients' outcomes. Usually, recruitment is postponed after the sample size of the first stage is achieved up until the outcomes of all patients are available. This may lead to a considerable increase of the trial length and with it to a delay in the drug development process. We propose a design where an intermediate endpoint is used in the interim analysis to decide whether or not the study is continued with a second stage. Optimal and minimax versions of this design are derived. The characteristics of the proposed design in terms of type I error rate, power, maximum and expected sample size as well as trial duration are investigated. Guidance is given on how to select the most appropriate design. Application is illustrated by a phase II oncology trial in patients with advanced angiosarcoma, which motivated this research.
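The operating characteristics of the standard two-stage single-arm design with a binary endpoint, as referred to above, can be computed exactly from binomial probabilities: stop after stage 1 if at most r1 of n1 patients respond, and reject the null at the end if more than r of n respond. The cutoffs and response rates below (p0 = 0.05 vs. p1 = 0.25, r1/n1 = 0/9, r/n = 2/17) are assumed for illustration and are not taken from the paper.

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def binom_cdf(k, n, p):
    return sum(binom_pmf(i, n, p) for i in range(k + 1))

def reject_prob(p, r1, n1, r, n):
    """P(pass stage 1 AND total responses exceed r) at response rate p.
    The trial stops early if at most r1 of the n1 stage-1 patients respond."""
    total = 0.0
    for x1 in range(r1 + 1, n1 + 1):       # stage-1 counts that continue
        # need more than r - x1 further responses among n - n1 stage-2 patients
        total += binom_pmf(x1, n1, p) * (1.0 - binom_cdf(r - x1, n - n1, p))
    return total

alpha = reject_prob(0.05, 0, 9, 2, 17)   # type I error, ~0.047
power = reject_prob(0.25, 0, 9, 2, 17)   # power at the promising rate, ~0.81
pet0  = binom_cdf(0, 9, 0.05)            # P(early termination) under p0, ~0.63
```

The design proposed in the paper replaces the stage-1 count with an intermediate endpoint, but the interim decision rule is evaluated with the same kind of exact probability calculation.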

  18. Deep-cascade: Cascading 3D Deep Neural Networks for Fast Anomaly Detection and Localization in Crowded Scenes.

    PubMed

    Sabokrou, Mohammad; Fayyaz, Mohsen; Fathy, Mahmood; Klette, Reinhard

    2017-02-17

This paper proposes a fast and reliable method for anomaly detection and localization in video data showing crowded scenes. Time-efficient anomaly localization is an ongoing challenge and the subject of this paper. We propose a cubic-patch-based method, characterised by a cascade of classifiers, which makes use of an advanced feature-learning approach. Our cascade of classifiers has two main stages. First, a light but deep 3D auto-encoder is used for early identification of "many" normal cubic patches. This deep network operates on small cubic patches in the first stage; the remaining candidates of interest are then carefully resized and evaluated at the second stage using a more complex and deeper 3D convolutional neural network (CNN). We divide the deep auto-encoder and the CNN into multiple sub-stages which operate as cascaded classifiers. Shallow layers of the cascaded deep networks (designed as Gaussian classifiers, acting as weak single-class classifiers) detect "simple" normal patches, such as background patches, while more complex normal patches are detected at deeper layers. It is shown that the proposed novel technique (a cascade of two cascaded classifiers) performs comparably to current top-performing detection and localization methods on standard benchmarks, but generally outperforms them with respect to required computation time.
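The cascade idea — a cheap stage clears obviously normal patches early so that only ambiguous candidates reach the expensive stage — can be sketched generically. The scoring functions, thresholds, and data below are placeholders, not the paper's networks.

```python
def cascade(patches, cheap_score, deep_score, t_cheap, t_deep):
    """Two-stage classifier cascade: a light stage rejects obviously
    normal patches early; only survivors reach the expensive stage."""
    anomalies, deep_calls = [], 0
    for p in patches:
        if cheap_score(p) < t_cheap:   # confidently normal -> stop early
            continue
        deep_calls += 1                # expensive classifier invoked
        if deep_score(p) >= t_deep:
            anomalies.append(p)
    return anomalies, deep_calls

# Toy stand-ins: scores are just magnitudes; large values are "anomalous".
patches = [0.05, 0.1, 0.2, 0.9, 1.5, 0.15]
found, deep_calls = cascade(patches, abs, abs, 0.5, 1.0)
# Only 2 of the 6 patches needed the expensive stage; one was flagged.
```

The time savings come from the early exits: in the paper's setting, most cubic patches are background and never reach the deep CNN.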

  19. A Two-Stage Multi-Agent Based Assessment Approach to Enhance Students' Learning Motivation through Negotiated Skills Assessment

    ERIC Educational Resources Information Center

    Chadli, Abdelhafid; Bendella, Fatima; Tranvouez, Erwan

    2015-01-01

    In this paper we present an Agent-based evaluation approach in a context of Multi-agent simulation learning systems. Our evaluation model is based on a two stage assessment approach: (1) a Distributed skill evaluation combining agents and fuzzy sets theory; and (2) a Negotiation based evaluation of students' performance during a training…

  20. Preliminary Axial Flow Turbine Design and Off-Design Performance Analysis Methods for Rotary Wing Aircraft Engines. Part 1; Validation

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng, S.

    2009-01-01

For the preliminary design and the off-design performance analysis of axial flow turbines, a pair of intermediate level-of-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction of modern high performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented in references 3 and 4 (for TD2-2) and in reference 5 (for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces for performing preliminary design and off-design analysis of modern aircraft engine turbines. Two validation cases for the design and the off-design prediction using TD2-2 and AXOD, conducted on two existing high efficiency turbines developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program — the High Pressure Turbine (HPT; two stages, air cooled) and the Low Pressure Turbine (LPT; five stages, un-cooled) — are provided in support of the analysis and discussion presented in this paper.

  1. Differentiating the persistency and permanency of some two stages DNA splicing language via Yusof-Goode (Y-G) approach

    NASA Astrophysics Data System (ADS)

    Mudaber, M. H.; Yusof, Y.; Mohamad, M. S.

    2017-09-01

Predicting the existence of restriction enzyme sequences on recombinant DNA fragments after the manipulation reaction is complete, via a mathematical approach, is a convenient route in DNA recombination. In mathematical terms, this characteristic of recombinant DNA strands involving the recognition sites of restriction enzymes is called persistence and permanence. Normally, differentiating the persistency and permanency of two-stage recombinant DNA strands using a wet-lab experiment is expensive and time-consuming, because the experiment must be run in two stages and more restriction enzymes must be added to the reaction. Therefore, in this research, the difference between persistent and permanent splicing languages of some two-stage systems is investigated using the Yusof-Goode (Y-G) model. Two theorems are provided, which show the persistency and non-permanency of two-stage DNA splicing languages.

  2. Progressive Sampling Technique for Efficient and Robust Uncertainty and Sensitivity Analysis of Environmental Systems Models: Stability and Convergence

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, R.; Hosseini, N.; Razavi, S.

    2016-12-01

Modern earth and environmental models are usually characterized by a large parameter space and high computational cost. These two features prevent effective implementation of sampling-based analyses such as sensitivity and uncertainty analysis, which require running these computationally expensive models many times to adequately explore the parameter/problem space. Therefore, developing efficient sampling techniques that scale with the size of the problem, the computational budget, and users' needs is essential. In this presentation, we propose an efficient sequential sampling strategy, called Progressive Latin Hypercube Sampling (PLHS), which provides an increasingly improved coverage of the parameter space while satisfying pre-defined requirements. The original Latin hypercube sampling (LHS) approach generates the entire sample set in one stage; on the contrary, PLHS generates a series of smaller sub-sets (also called 'slices') such that: (1) each sub-set is a Latin hypercube and achieves maximum stratification in any one-dimensional projection; (2) the progressive addition of sub-sets remains a Latin hypercube; and thus (3) the entire sample set is a Latin hypercube. Therefore, it has the capability to preserve the intended sampling properties throughout the sampling procedure. PLHS is deemed advantageous over the existing methods, particularly because it nearly avoids over- or under-sampling. Through different case studies, we show that PLHS has multiple advantages over one-stage sampling approaches, including improved convergence and stability of the analysis results with fewer model runs. In addition, PLHS can help to minimize the total simulation time by only running the simulations necessary to achieve the desired level of quality (e.g., accuracy and convergence rate).
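For reference, a minimal one-stage Latin hypercube sampler — the baseline that PLHS extends by generating the sample as a sequence of slices — can be written in a few lines of standard-library Python; the sample size and dimensionality below are arbitrary.

```python
import random

def latin_hypercube(n, d, rng):
    """One n-point Latin hypercube sample in [0, 1)^d: every 1-D
    projection places exactly one point in each of the n strata."""
    cols = []
    for _ in range(d):
        strata = list(range(n))
        rng.shuffle(strata)             # random stratum order per dimension
        cols.append(strata)
    return [[(cols[k][i] + rng.random()) / n for k in range(d)]
            for i in range(n)]

pts = latin_hypercube(10, 2, random.Random(42))
# Stratification check: each dimension hits every decile exactly once.
for k in range(2):
    assert sorted(int(p[k] * 10) for p in pts) == list(range(10))
```

The difficulty PLHS addresses is that naively concatenating two such samples does not preserve this stratification property; its slices are constructed so that every partial union remains a Latin hypercube.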

  3. Development, current applications and future roles of biorelevant two-stage in vitro testing in drug development.

    PubMed

    Fiolka, Tom; Dressman, Jennifer

    2018-03-01

    Various types of two stage in vitro testing have been used in a number of experimental settings. In addition to its application in quality control and for regulatory purposes, two-stage in vitro testing has also been shown to be a valuable technique to evaluate the supersaturation and precipitation behavior of poorly soluble drugs during drug development. The so-called 'transfer model', which is an example of two-stage testing, has provided valuable information about the in vivo performance of poorly soluble, weakly basic drugs by simulating the gastrointestinal drug transit from the stomach into the small intestine with a peristaltic pump. The evolution of the transfer model has resulted in various modifications of the experimental model set-up. Concomitantly, various research groups have developed simplified approaches to two-stage testing to investigate the supersaturation and precipitation behavior of weakly basic drugs without the necessity of using a transfer pump. Given the diversity among the various two-stage test methods available today, a more harmonized approach needs to be taken to optimize the use of two stage testing at different stages of drug development. © 2018 Royal Pharmaceutical Society.

  4. Numerical Study of a 10 K Two Stage Pulse Tube Cryocooler with Precooling Inside the Pulse Tube

    NASA Astrophysics Data System (ADS)

    Xiaomin, Pang; Xiaotao, Wang; Wei, Dai; Jianyin, Hu; Ercang, Luo

    2017-02-01

High efficiency cryocoolers working below 10 K have many applications, such as cryo-pumps, superconductor cooling and cryogenic electronics. This paper presents a thermally coupled two-stage pulse tube cryocooler system and its numerical analysis. The simulation results indicate that the temperature distribution in the pulse tube has a significant impact on system performance, so a precooling heat exchanger is placed inside the second-stage pulse tube for a deeper investigation of its influence on system performance. The influences of operating parameters, such as the precooling temperature and the location of the precooling heat exchanger, are discussed. A comparison of energy losses clearly shows the advantages of this configuration, which leads to an improvement in efficiency. Finally, the cryocooler is predicted to reach a relative Carnot efficiency of 10.7% at 10 K.
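The quoted 10.7% relative Carnot efficiency translates into an absolute coefficient of performance with one line of arithmetic; the 300 K heat-rejection temperature below is an assumption, not a figure from the abstract.

```python
T_c, T_h = 10.0, 300.0            # K; T_h = 300 K is an assumed ambient
cop_carnot = T_c / (T_h - T_c)    # ideal (Carnot) COP, ~0.0345
rel = 0.107                       # 10.7% of Carnot (from the abstract)
cop = rel * cop_carnot            # predicted actual COP, ~0.0037
w_per_w = 1.0 / cop               # ~271 W of input per W of cooling at 10 K
```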

  5. A hybrid framework for quantifying the influence of data in hydrological model calibration

    NASA Astrophysics Data System (ADS)

    Wright, David P.; Thyer, Mark; Westra, Seth; McInerney, David

    2018-06-01

Influence diagnostics aim to identify a small number of influential data points that have a disproportionate impact on the model parameters and/or predictions. The key issues with current influence diagnostic techniques are that the regression-theory approaches do not provide hydrologically relevant influence metrics, while the case-deletion approaches are computationally expensive to calculate. The main objective of this study is to introduce a new two-stage hybrid framework that overcomes these challenges by delivering hydrologically relevant influence metrics in a computationally efficient manner. Stage one uses computationally efficient regression-theory influence diagnostics to identify the most influential points based on Cook's distance. Stage two then uses case-deletion influence diagnostics to quantify the influence of points using hydrologically relevant metrics. To illustrate the application of the hybrid framework, we conducted three experiments on 11 hydro-climatologically diverse Australian catchments using the GR4J hydrological model. The first experiment investigated how many data points from stage one need to be retained in order to reliably identify those points that have the highest influence on hydrologically relevant metrics. We found that a choice of 30-50 is suitable for hydrological applications similar to those explored in this study (30 points identified the most influential data 98% of the time and reduced the required recalibrations by 99% for a 10 year calibration period). The second experiment found little evidence of a change in the magnitude of influence with increasing calibration period length from 1, 2, 5 to 10 years. Even for 10 years the impact of influential points can still be high (>30% influence on maximum predicted flows). The third experiment compared the standard least squares (SLS) objective function with the weighted least squares (WLS) objective function on a 10 year calibration period. In two out of three flow metrics there was evidence that SLS, with its assumption of homoscedastic residual errors, identified data points with higher influence (largest changes of 40%, 10%, and 44% for the maximum, mean, and low flows, respectively) than WLS, with its assumption of heteroscedastic residual errors (largest changes of 26%, 6%, and 6% for the maximum, mean, and low flows, respectively). The hybrid framework complements existing model diagnostic tools and can be applied to a wide range of hydrological modelling scenarios.
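Stage one's screening metric, Cook's distance, can be computed for a linear model directly from leverages and residuals, with no case deletion or refitting. The sketch below uses ordinary least squares on synthetic data; in the study's setting the hydrological model is non-linear, so these quantities would come from a linearized approximation.

```python
import numpy as np

def cooks_distance(X, y):
    """Cook's D for every point of an OLS fit, from leverages and
    residuals alone: D_i = e_i^2 / (p * s^2) * h_i / (1 - h_i)^2."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T     # hat matrix
    h = np.diag(H)                           # leverages
    e = y - H @ y                            # residuals
    s2 = e @ e / (n - p)                     # residual variance estimate
    return (e**2 / (p * s2)) * h / (1 - h)**2

# Synthetic linear data with one planted influential point.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 30)
X = np.c_[np.ones(30), x]
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.1, 30)
y[5] += 2.0                                  # corrupt one observation
D = cooks_distance(X, y)                     # D[5] dominates the rest
```

In the hybrid framework, only the top-ranked points by this screen (e.g., 30-50 of them) would then be passed to the expensive case-deletion stage.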

  6. Two-stage sorption type cryogenic refrigerator including heat regeneration system

    NASA Technical Reports Server (NTRS)

    Jones, Jack A. (Inventor); Wen, Liang-Chi (Inventor); Bard, Steven (Inventor)

    1989-01-01

    A lower stage chemisorption refrigeration system physically and functionally coupled to an upper stage physical adsorption refrigeration system is disclosed. Waste heat generated by the lower stage cycle is regenerated to fuel the upper stage cycle thereby greatly improving the energy efficiency of a two-stage sorption refrigerator. The two stages are joined by disposing a first pressurization chamber providing a high pressure flow of a first refrigerant for the lower stage refrigeration cycle within a second pressurization chamber providing a high pressure flow of a second refrigerant for the upper stage refrigeration cycle. The first pressurization chamber is separated from the second pressurization chamber by a gas-gap thermal switch which at times is filled with a thermoconductive fluid to allow conduction of heat from the first pressurization chamber to the second pressurization chamber.

  7. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    NASA Astrophysics Data System (ADS)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.

  8. Dynamic robustness of knowledge collaboration network of open source product development community

    NASA Astrophysics Data System (ADS)

    Zhou, Hong-Li; Zhang, Xiao-Dong

    2018-01-01

    As an emergent innovative design style, open source product development communities are characterized by a self-organizing, mass collaborative, networked structure. The robustness of the community is critical to its performance. Using the complex network modeling method, the knowledge collaboration network of the community is formulated, and the robustness of the network is systematically and dynamically studied. The characteristics of the network along the development period determine that its robustness should be studied from three time stages: the start-up, development and mature stages of the network. Five kinds of user-loss pattern are designed to assess the network's robustness under different situations in each of these three time stages. Two indexes - the largest connected component and the network efficiency - are used to evaluate the robustness of the community. The proposed approach is applied in an existing open source car design community. The results indicate that the knowledge collaboration network shows different levels of robustness in different stages and under different user-loss patterns. Such analysis can be applied to provide protection strategies for the key users involved in knowledge dissemination and knowledge contribution at different stages of the network, thereby promoting the sustainable and stable development of the open source community.
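The two robustness indexes named in this abstract are standard graph measures. A minimal sketch using networkx (an assumed dependency), with a toy star network standing in for the community data:

```python
import networkx as nx

def robustness_indexes(G):
    """The two indexes from the study: relative size of the largest connected
    component, and global network efficiency (mean inverse path length)."""
    lcc = max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()
    return lcc, nx.global_efficiency(G)

def apply_user_loss(G, lost_users):
    """One user-loss pattern: remove the given users and re-measure robustness."""
    H = G.copy()
    H.remove_nodes_from(lost_users)
    return robustness_indexes(H)

# Toy knowledge-collaboration network: one hub user linked to peripheral users
G = nx.star_graph(6)                  # node 0 is the hub
intact = robustness_indexes(G)
after_hub_loss = apply_user_loss(G, [0])   # losing the hub fragments the network
```

Removing the hub collapses both indexes, illustrating why key knowledge-dissemination users need protection strategies.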

  9. Design optimization of ultra-high concentrator photovoltaic system using two-stage non-imaging solar concentrator

    NASA Astrophysics Data System (ADS)

    Wong, C.-W.; Yew, T.-K.; Chong, K.-K.; Tan, W.-C.; Tan, M.-H.; Lim, B.-H.

    2017-11-01

    This paper presents a systematic approach for optimizing the design of ultra-high concentrator photovoltaic (UHCPV) system comprised of non-imaging dish concentrator (primary optical element) and crossed compound parabolic concentrator (secondary optical element). The optimization process includes the design of primary and secondary optics by considering the focal distance, spillage losses and rim angle of the dish concentrator. The imperfection factors, i.e. mirror reflectivity of 93%, lens’ optical efficiency of 85%, circumsolar ratio of 0.2 and mirror surface slope error of 2 mrad, were considered in the simulation to avoid the overestimation of output power. The proposed UHCPV system is capable of attaining effective ultra-high solar concentration ratio of 1475 suns and DC system efficiency of 31.8%.

  10. Cascade photonic integrated circuit architecture for electro-optic in-phase quadrature/single sideband modulation or frequency conversion.

    PubMed

    Hasan, Mehedi; Hall, Trevor

    2015-11-01

    A photonic integrated circuit architecture for implementing frequency upconversion is proposed. The circuit consists of a 1×2 splitter and a 2×1 combiner interconnected by two stages of differentially driven phase modulators, with a 2×2 multimode interference coupler between the stages. A transfer matrix approach is used to model the operation of the architecture. The predictions of the model are validated by simulations performed using an industry standard software tool. The intrinsic conversion efficiency of the proposed design is improved by 6 dB over the alternative functionally equivalent circuit based on dual parallel Mach-Zehnder modulators known in the prior art. A two-tone analysis is presented to study the linearity of the proposed circuit, and a comparison is provided over the alternative. The proposed circuit is suitable for integration in any platform that offers linear electro-optic phase modulation such as LiNbO3, silicon, III-V, or hybrid technology.
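The transfer-matrix modeling approach mentioned in this abstract can be illustrated with standard lossless component matrices. The split/combine vectors and MMI coupler matrix below are the usual textbook forms, not taken from the paper, and the drive scheme that yields SSB operation is omitted; this only shows how the circuit topology composes as matrix products.

```python
import numpy as np

j = 1j

def pm_pair(phi):
    """One stage of differentially driven phase modulators (+phi / -phi)."""
    return np.diag([np.exp(j * phi), np.exp(-j * phi)])

mmi = np.array([[1, j], [j, 1]]) / np.sqrt(2)   # 2x2 multimode interference coupler
split = np.array([1, 1]) / np.sqrt(2)           # 1x2 splitter (as a column vector)
comb = np.array([1, 1]) / np.sqrt(2)            # 2x1 combiner (as a row vector)

def output(phi1, phi2):
    """Field transfer: splitter -> PM stage 1 -> MMI -> PM stage 2 -> combiner."""
    return comb @ pm_pair(phi2) @ mmi @ pm_pair(phi1) @ split
```

Because every interior element is unitary, the output field magnitude never exceeds the input; with zero drive the chain is transparent (|output| = 1).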

  11. A hybrid optimization approach in non-isothermal glass molding

    NASA Astrophysics Data System (ADS)

    Vu, Anh-Tuan; Kreilkamp, Holger; Krishnamoorthi, Bharathwaj Janaki; Dambon, Olaf; Klocke, Fritz

    2016-10-01

    Intensively growing demands for complex yet low-cost precision glass optics in today's photonic market motivate the development of an efficient and economically viable manufacturing technology for complex shaped optics. Compared with state-of-the-art replication-based methods, Non-isothermal Glass Molding turns out to be a promising innovative technology for cost-efficient manufacturing because of increased mold lifetime, less energy consumption and high throughput from a fast process chain. However, the selection of parameters for the molding process usually requires a huge effort to satisfy the precise requirements of the molded optics and to avoid negative effects on the expensive tool molds. Therefore, to reduce experimental work at the beginning, a coupled CFD/FEM numerical model was developed to study the molding process. This research focuses on the development of a hybrid optimization approach in Non-isothermal glass molding. To this end, an optimal configuration with two optimization stages for multiple quality characteristics of the glass optics is addressed. The hybrid Back-Propagation Neural Network (BPNN)-Genetic Algorithm (GA) is first carried out to realize the optimal process parameters and the stability of the process. The second stage continues with the optimization of glass preform using those optimal parameters to guarantee the accuracy of the molded optics. Experiments are performed to evaluate the effectiveness and feasibility of the model for the process development in Non-isothermal glass molding.

  12. Statistical Analysis of Individual Participant Data Meta-Analyses: A Comparison of Methods and Recommendations for Practice

    PubMed Central

    Stewart, Gavin B.; Altman, Douglas G.; Askie, Lisa M.; Duley, Lelia; Simmonds, Mark C.; Stewart, Lesley A.

    2012-01-01

    Background Individual participant data (IPD) meta-analyses that obtain “raw” data from studies rather than summary data typically adopt a “two-stage” approach to analysis whereby IPD within trials generate summary measures, which are combined using standard meta-analytical methods. Recently, a range of “one-stage” approaches which combine all individual participant data in a single meta-analysis have been suggested as providing a more powerful and flexible approach. However, they are more complex to implement and require statistical support. This study uses a dataset to compare “two-stage” and “one-stage” models of varying complexity, to ascertain whether results obtained from the approaches differ in a clinically meaningful way. Methods and Findings We included data from 24 randomised controlled trials, evaluating antiplatelet agents, for the prevention of pre-eclampsia in pregnancy. We performed two-stage and one-stage IPD meta-analyses to estimate overall treatment effect and to explore potential treatment interactions whereby particular types of women and their babies might benefit differentially from receiving antiplatelets. Two-stage and one-stage approaches gave similar results, showing a benefit of using anti-platelets (Relative risk 0.90, 95% CI 0.84 to 0.97). Neither approach suggested that any particular type of women benefited more or less from antiplatelets. There were no material differences in results between different types of one-stage model. Conclusions For these data, two-stage and one-stage approaches to analysis produce similar results. Although one-stage models offer a flexible environment for exploring model structure and are useful where across study patterns relating to types of participant, intervention and outcome mask similar relationships within trials, the additional insights provided by their usage may not outweigh the costs of statistical support for routine application in syntheses of randomised controlled trials. 
Researchers considering undertaking an IPD meta-analysis should not necessarily be deterred by a perceived need for sophisticated statistical methods when combining information from large randomised trials. PMID:23056232

  13. Knowledge-Guided Docking of WW Domain Proteins and Flexible Ligands

    NASA Astrophysics Data System (ADS)

    Lu, Haiyun; Li, Hao; Banu Bte Sm Rashid, Shamima; Leow, Wee Kheng; Liou, Yih-Cherng

    Studies of interactions between protein domains and ligands are important in many aspects such as cellular signaling. We present a knowledge-guided approach for docking protein domains and flexible ligands. The approach is applied to the WW domain, a small protein module mediating signaling complexes which have been implicated in diseases such as muscular dystrophy and Liddle’s syndrome. The first stage of the approach employs a substring search for two binding grooves of WW domains and possible binding motifs of peptide ligands based on known features. The second stage aligns the ligand’s peptide backbone to the two binding grooves using a quasi-Newton constrained optimization algorithm. The backbone-aligned ligands produced serve as good starting points to the third stage which uses any flexible docking algorithm to perform the docking. The experimental results demonstrate that the backbone alignment method in the second stage performs better than conventional rigid superposition given two binding constraints. It is also shown that using the backbone-aligned ligands as initial configurations improves the flexible docking in the third stage. The presented approach can also be applied to other protein domains that involve binding of flexible ligand to two or more binding sites.

  14. Two stage sorption type cryogenic refrigerator including heat regeneration system

    NASA Technical Reports Server (NTRS)

    Jones, Jack A. (Inventor); Wen, Liang-Chi (Inventor); Bard, Steven (Inventor)

    1989-01-01

    A lower stage chemisorption refrigeration system physically and functionally coupled to an upper stage physical adsorption refrigeration system is disclosed. Waste heat generated by the lower stage cycle is regenerated to fuel the upper stage cycle thereby greatly improving the energy efficiency of a two-stage sorption refrigerator. The two stages are joined by disposing a first pressurization chamber providing a high pressure flow of a first refrigerant for the lower stage refrigeration cycle within a second pressurization chamber providing a high pressure flow of a second refrigerant for the upper stage refrigeration cycle. The first pressurization chamber is separated from the second pressurization chamber by a gas-gap thermal switch which at times is filled with a thermoconductive fluid to allow conduction of heat from the first pressurization chamber to the second pressurization chamber.

  15. Operationally efficient propulsion system study (OEPSS) data book. Volume 10; Air Augmented Rocket Afterburning

    NASA Technical Reports Server (NTRS)

    Farhangi, Shahram; Trent, Donnie (Editor)

    1992-01-01

    A study was directed towards assessing the viability and effectiveness of an air augmented ejector/rocket. Successful thrust augmentation could potentially reduce a multi-stage vehicle to a single stage-to-orbit vehicle (SSTO) and, thereby, eliminate the associated ground support facility infrastructure and ground processing required by the eliminated stage. The results of this preliminary study indicate that an air augmented ejector/rocket propulsion system is viable. However, uncertainties resulting from the simplified approach and assumptions must be resolved by further investigations.

  16. High GMS score hypospadias: Outcomes after one- and two-stage operations.

    PubMed

    Huang, Jonathan; Rayfield, Lael; Broecker, Bruce; Cerwinka, Wolfgang; Kirsch, Andrew; Scherz, Hal; Smith, Edwin; Elmore, James

    2017-06-01

    Established criteria to assist surgeons in deciding between a one- or two-stage operation for severe hypospadias are lacking. While anatomical features may preclude some surgical options, the decision to approach severe hypospadias in a one- or two-stage fashion is generally based on individual surgeon preference. This decision has been described as a dilemma as outcomes range widely and there is lack of evidence supporting the superiority of one approach over the other. The aim of this study is to determine whether the GMS hypospadias score may provide some guidance in choosing the surgical approach used for correction of severe hypospadias. GMS scores were preoperatively assigned to patients having primary surgery for hypospadias. Those patients having surgery for the most severe hypospadias were selected and formed the study cohort. The records of these patients were reviewed and pertinent data collected. Complications requiring further surgery were assessed and correlated with the GMS score and the surgical technique used for repair (one-stage vs. two-stage). Eighty-seven boys were identified with a GMS score (range 3-12) of 10 or higher. At a mean follow-up of 22 months the overall complication rate for the cohort after final planned surgery was 39%. For intended one-stage procedures (n = 48) an acceptable result was achieved with one surgery for 28 patients (58%), with two surgeries for 14 (29%), and with three to five surgeries for six (13%). For intended two-stage procedures (n = 39) an acceptable result was achieved with two surgeries for 26 patients (67%), three surgeries for eight (21%), and four surgeries for three (8%). Two other patients having two-stage surgery required seven surgeries to achieve an acceptable result. Complication rates are summarized in the Table. The complication rates for GMS 10 patients were similar (27% and 33%, p = 0.28) for one- and two-stage repairs, respectively. 
GMS 11 patients having a one-stage repair had a significantly higher complication rate (69%) than those having a two-stage repair (29%) (p = 0.04). GMS 12 patients had the highest complication rate with a one-stage repair (80%) compared with a complication rate of 37% when a two-stage repair was used (p = 0.12). Guidelines to help standardize the surgical approach to severe hypospadias are needed. Staged surgery for GMS 11 and 12 patients may result in a lower complication rate but may not reduce the number of surgeries required for an acceptable result. Although further study is needed, the GMS score may be helpful for establishing such criteria. Copyright © 2017 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  17. A Modified Penalty Parameter Approach for Optimal Estimation of UH with Simultaneous Estimation of Infiltration Parameters

    NASA Astrophysics Data System (ADS)

    Bhattacharjya, Rajib Kumar

    2018-05-01

    The unit hydrograph and the infiltration parameters of a watershed can be obtained from observed rainfall-runoff data by using inverse optimization technique. This is a two-stage optimization problem. In the first stage, the infiltration parameters are obtained and the unit hydrograph ordinates are estimated in the second stage. In order to combine this two-stage method into a single stage one, a modified penalty parameter approach is proposed for converting the constrained optimization problem to an unconstrained one. The proposed approach is designed in such a way that the model initially obtains the infiltration parameters and then searches the optimal unit hydrograph ordinates. The optimization model is solved using Genetic Algorithms. A reduction factor is used in the penalty parameter approach so that the obtained optimal infiltration parameters are not destroyed during subsequent generation of genetic algorithms, required for searching optimal unit hydrograph ordinates. The performance of the proposed methodology is evaluated by using two example problems. The evaluation shows that the model is superior, simple in concept and also has the potential for field application.
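The conversion of the constrained two-stage problem into a single unconstrained objective can be sketched as below. This is a minimal analogue, not the paper's formulation: a hypothetical phi-index infiltration model and a convolution-based unit hydrograph stand in for the actual rainfall-runoff setup, and the GA itself is omitted.

```python
import numpy as np

def penalized_objective(params, rain, q_obs, penalty):
    """Unconstrained objective: runoff SSE plus a penalty on constraint violation
    (non-negative UH ordinates that conserve mass). A GA would minimise this,
    scaling `penalty` down by a reduction factor in later generations so the
    infiltration estimate is not destroyed while the UH ordinates are refined."""
    phi = params[0]                        # phi-index infiltration rate (stand-in)
    uh = np.asarray(params[1:])            # unit hydrograph ordinates
    excess = np.clip(np.asarray(rain) - phi, 0.0, None)   # effective rainfall
    q_sim = np.convolve(excess, uh)[:len(q_obs)]          # UH convolution
    sse = float(np.sum((np.asarray(q_obs) - q_sim) ** 2))
    violation = float(np.sum(np.clip(-uh, 0.0, None)) + abs(uh.sum() - 1.0))
    return sse + penalty * violation

# At the true parameters the objective vanishes; a mass-balance violation is penalised.
rain = [2.0, 3.0, 1.0]
uh_true = [0.5, 0.3, 0.2]
q_obs = np.convolve(np.clip(np.array(rain) - 1.0, 0.0, None), uh_true)
good = penalized_objective([1.0] + uh_true, rain, q_obs, penalty=100.0)
bad = penalized_objective([1.0, 0.9, 0.3, 0.2], rain, q_obs, penalty=100.0)
```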

  18. On Designing Multicore-Aware Simulators for Systems Biology Endowed with OnLine Statistics

    PubMed Central

    Calcagno, Cristina; Coppo, Mario

    2014-01-01

    This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool to perform the modeling, the tuning, and the sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories turning into big data that should be analysed by statistic and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage that immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming that provide key features to the software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed. PMID:25050327

  19. On designing multicore-aware simulators for systems biology endowed with OnLine statistics.

    PubMed

    Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo

    2014-01-01

    This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool to perform the modeling, the tuning, and the sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories turning into big data that should be analysed by statistic and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage that immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming that provide key features to the software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.
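The pipelined simulation-analysis idea described in this abstract can be sketched with Python generators. FastFlow itself is a C++ framework; this is only a streaming analogue, with a seeded random walk standing in for a stochastic (e.g. Gillespie-style) trajectory, and online statistics so no full trajectory set is ever held in memory.

```python
import random

def simulate(n_traj, n_steps, seed=0):
    """Simulation stage: streams each trajectory out as soon as it completes."""
    rng = random.Random(seed)
    for _ in range(n_traj):
        x, traj = 0.0, []
        for _ in range(n_steps):
            x += rng.gauss(0, 1)     # stand-in for one stochastic simulation step
            traj.append(x)
        yield traj                   # streamed, never accumulated

def analyse(trajectories):
    """Analysis stage: consumes the stream, keeping only running statistics,
    and yields a partial result after every trajectory (online analysis)."""
    count, mean_final = 0, 0.0
    for traj in trajectories:
        count += 1
        mean_final += (traj[-1] - mean_final) / count   # online mean of endpoints
        yield count, mean_final

for n_done, running_mean in analyse(simulate(1000, 50)):
    pass  # each partial result is available immediately, not only at the end
```

Because the analysis stage starts as soon as the first trajectory arrives, simulation and statistics overlap in time, which is the point of pipelining the two stages on a multicore platform.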

  20. Intelligent Space Tube Optimization for speeding ground water remedial design.

    PubMed

    Kalwij, Ineke M; Peralta, Richard C

    2008-01-01

    An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with NSS. Stage 1 speeds evaluating assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include space tube radius and number of strategies used to train NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable for many heuristic optimization settings in which the numerical simulator is computationally intensive, and one would like to reduce that burden.

  1. AA9int: SNP Interaction Pattern Search Using Non-Hierarchical Additive Model Set.

    PubMed

    Lin, Hui-Yi; Huang, Po-Yu; Chen, Dung-Tsa; Tung, Heng-Yuan; Sellers, Thomas A; Pow-Sang, Julio; Eeles, Rosalind; Easton, Doug; Kote-Jarai, Zsofia; Amin Al Olama, Ali; Benlloch, Sara; Muir, Kenneth; Giles, Graham G; Wiklund, Fredrik; Gronberg, Henrik; Haiman, Christopher A; Schleutker, Johanna; Nordestgaard, Børge G; Travis, Ruth C; Hamdy, Freddie; Neal, David E; Pashayan, Nora; Khaw, Kay-Tee; Stanford, Janet L; Blot, William J; Thibodeau, Stephen N; Maier, Christiane; Kibel, Adam S; Cybulski, Cezary; Cannon-Albright, Lisa; Brenner, Hermann; Kaneva, Radka; Batra, Jyotsna; Teixeira, Manuel R; Pandha, Hardev; Lu, Yong-Jie; Park, Jong Y

    2018-06-07

    The use of single nucleotide polymorphism (SNP) interactions to predict complex diseases has received growing attention over the past decade, but related statistical methods are still immature. We previously proposed the SNP Interaction Pattern Identifier (SIPI) approach to evaluate 45 SNP interaction patterns. SIPI is statistically powerful but suffers from a large computation burden. For large-scale studies, it is necessary to use a powerful and computation-efficient method. The objective of this study is to develop an evidence-based mini-version of SIPI as a screening tool or for solitary use, and to evaluate the impact of inheritance mode and model structure on detecting SNP-SNP interactions. We tested two candidate approaches: the 'Five-Full' and 'AA9int' methods. The Five-Full approach is composed of the five full interaction models considering three inheritance modes (additive, dominant and recessive). The AA9int approach is composed of nine interaction models by considering non-hierarchical model structure and the additive mode. Our simulation results show that AA9int has similar statistical power compared to SIPI and is superior to the Five-Full approach, and the impact of the non-hierarchical model structure is greater than that of the inheritance mode in detecting SNP-SNP interactions. In summary, it is recommended that AA9int is a powerful tool to be used either alone or as the screening stage of a two-stage approach (AA9int+SIPI) for detecting SNP-SNP interactions in large-scale studies. The 'AA9int' and 'parAA9int' functions (standard and parallel computing version) are added in the SIPI R package, which is freely available at https://linhuiyi.github.io/LinHY_Software/. hlin1@lsuhsc.edu. Supplementary data are available at Bioinformatics online.

  2. Could the clinical interpretability of subgroups detected using clustering methods be improved by using a novel two-stage approach?

    PubMed

    Kent, Peter; Stochkendahl, Mette Jensen; Christensen, Henrik Wulff; Kongsted, Alice

    2015-01-01

    Recognition of homogeneous subgroups of patients can usefully improve prediction of their outcomes and the targeting of treatment. There are a number of research approaches that have been used to recognise homogeneity in such subgroups and to test their implications. One approach is to use statistical clustering techniques, such as Cluster Analysis or Latent Class Analysis, to detect latent relationships between patient characteristics. Influential patient characteristics can come from diverse domains of health, such as pain, activity limitation, physical impairment, social role participation, psychological factors, biomarkers and imaging. However, such 'whole person' research may result in data-driven subgroups that are complex, difficult to interpret and challenging to recognise clinically. This paper describes a novel approach to applying statistical clustering techniques that may improve the clinical interpretability of derived subgroups and reduce sample size requirements. This approach involves clustering in two sequential stages. The first stage involves clustering within health domains and therefore requires creating as many clustering models as there are health domains in the available data. This first stage produces scoring patterns within each domain. The second stage involves clustering using the scoring patterns from each health domain (from the first stage) to identify subgroups across all domains. We illustrate this using chest pain data from the baseline presentation of 580 patients. The new two-stage clustering resulted in two subgroups that approximated the classic textbook descriptions of musculoskeletal chest pain and atypical angina chest pain. The traditional single-stage clustering resulted in five clusters that were also clinically recognisable but displayed less distinct differences. In this paper, a new approach to using clustering techniques to identify clinically useful subgroups of patients is suggested. 
    This approach has potential benefits but requires broad testing, in multiple patient samples, to determine its clinical value; research designs, statistical methods and outcome metrics suitable for performing that testing are also described. The usefulness of the approach is likely to be context-specific, depending on the characteristics of the available data and the research question being asked of it.
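The two sequential clustering stages described in this abstract can be sketched with scikit-learn (an assumed dependency) on synthetic data. The two "health domains" and their variables below are hypothetical stand-ins for the chest-pain dataset, and k-means replaces whatever clustering technique a given study would choose.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n = 200
# Toy patient data in two health domains (e.g. pain items, psychological items),
# generated from two latent patient types.
latent = rng.integers(0, 2, n)
pain = rng.normal(latent[:, None] * 3.0, 1.0, (n, 3))
psych = rng.normal(latent[:, None] * -2.0, 1.0, (n, 2))

# Stage 1: cluster within each health domain separately.
domain_scores = []
for block in (pain, psych):
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(block)
    # Distances to the domain's cluster centres act as its "scoring pattern".
    domain_scores.append(km.transform(block))
stage1 = np.hstack(domain_scores)

# Stage 2: cluster across domains on the stage-1 scoring patterns.
subgroups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(stage1)
```

Stage two operates on a handful of domain-level scores rather than all raw variables, which is what makes the derived subgroups easier to interpret and reduces the effective dimensionality.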

  3. Conceptual designs of E × B multistage depressed collectors for gyrotrons

    NASA Astrophysics Data System (ADS)

    Wu, Chuanren; Pagonakis, Ioannis Gr.; Gantenbein, Gerd; Illy, Stefan; Thumm, Manfred; Jelonnek, John

    2017-04-01

    Multistage depressed collectors are challenges for high-power, high-frequency fusion gyrotrons. Two concepts exist in the literature: (1) unwinding the spent electron beam cyclotron motion utilizing non-adiabatic transitions of magnetic fields and (2) sorting and collecting the electrons using the E × B drift. To facilitate the collection by the drift, the hollow electron beam can be transformed to one or more thin beams before applying the sorting. There are many approaches, which can transform the hollow electron beam to thin beams; among them, two approaches similar to the tilted electric field collectors of traveling wave tubes are conceptually studied in this paper: the first one transforms the hollow circular electron beam to an elongated elliptic beam, and then the thin elliptic beam is collected by the E × B drift; the second one splits an elliptic or a circular electron beam into two arc-shaped sheet beams; these two parts are collected individually. The functionality of these concepts is proven by CST simulations. A model of a three-stage collector for a 170 GHz, 1 MW gyrotron using the latter approach shows 76% collector efficiency while taking secondary electrons and realistic electron beam characteristics into account.
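The drift used for sorting in this abstract is the standard E × B expression, v = (E × B)/|B|², which is independent of particle charge and energy. A one-line numerical check with hypothetical field values:

```python
import numpy as np

# E x B drift velocity: independent of electron charge and energy, which is
# what makes it usable for sorting spent-beam electrons between collector stages.
E = np.array([1.0e5, 0.0, 0.0])   # V/m  (hypothetical field strength)
B = np.array([0.0, 0.0, 0.2])     # T    (hypothetical field strength)
v_drift = np.cross(E, B) / np.dot(B, B)   # m/s
```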

  4. A two-stage adaptive stochastic collocation method on nested sparse grids for multiphase flow in randomly heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Liao, Qinzhuo; Zhang, Dongxiao; Tchelepi, Hamdi

    2017-02-01

    A new computational method is proposed for efficient uncertainty quantification of multiphase flow in porous media with stochastic permeability. For pressure estimation, it combines the dimension-adaptive stochastic collocation method on Smolyak sparse grids and the Kronrod-Patterson-Hermite nested quadrature formulas. For saturation estimation, an additional stage is developed, in which the pressure and velocity samples are first generated by the sparse grid interpolation and then substituted into the transport equation to solve for the saturation samples, to address the low regularity problem of the saturation. Numerical examples are presented for multiphase flow with stochastic permeability fields to demonstrate accuracy and efficiency of the proposed two-stage adaptive stochastic collocation method on nested sparse grids.

  5. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and high computation burden from extensive atlas collection, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is firstmore » roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model to relate the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy as the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with alternative one-stage cost-saving approach, the proposed scheme yielded superior performance with mean and medium DSC of (0.83, 0.85) compared to (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme to achieve significant cost reduction while guaranteeing high segmentation accuracy. 
The benefit in both complexity and performance is expected to be most pronounced with large-scale heterogeneous data.
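
    The two-step fusion-set selection described above can be illustrated with a small sketch (all names and the toy similarity metrics below are hypothetical stand-ins; the simple and full-fledged registrations are replaced by coarse and full vector comparisons):

```python
import numpy as np

def two_stage_select(atlases, target, cheap_score, exact_score, k_aug, k_fusion):
    """Stage 1: rank all atlases with a low-cost relevance proxy and keep an
    augmented subset. Stage 2: re-rank only that subset with the expensive
    metric and return the final fusion set."""
    prelim = sorted(atlases, key=lambda a: cheap_score(a, target), reverse=True)
    augmented = prelim[:k_aug]            # survivors of preliminary selection
    refined = sorted(augmented, key=lambda a: exact_score(a, target), reverse=True)
    return refined[:k_fusion]

# Toy demo: "atlases" are noisy copies of a target vector; the coarse metric
# compares a downsampled view (stand-in for simple registration), the exact
# metric compares the full vectors (stand-in for full-fledged registration).
rng = np.random.default_rng(0)
target = rng.normal(size=64)
atlases = [target + rng.normal(scale=s, size=64) for s in np.linspace(0.1, 2.0, 30)]
cheap = lambda a, t: -np.abs(a[::8] - t[::8]).sum()
exact = lambda a, t: -np.abs(a - t).sum()
fusion = two_stage_select(atlases, target, cheap, exact, k_aug=10, k_fusion=3)
print(len(fusion))  # 3: only 10 of 30 atlases ever see the expensive metric
```

    Only the k_aug survivors incur the expensive scoring, which mirrors the reported reduction of computation time to one third; the paper's inference model would additionally prescribe how large k_aug must be for the truly relevant atlases to survive stage one with high probability.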

  6. Cascade pulse-tube cryocooler using a displacer for efficient work recovery

    NASA Astrophysics Data System (ADS)

    Xu, Jingyuan; Hu, Jianying; Hu, Jiangfeng; Luo, Ercang; Zhang, Limin; Gao, Bo

    2017-09-01

    Expansion work is generally wasted as heat in a pulse-tube cryocooler and thus represents an obstacle to approaching Carnot efficiency. Recovery of this dissipated power is crucial to improving these cooling systems, particularly when the cooling temperature is not very low. In this paper, an efficient cascade cryocooler capable of recovering acoustic power is introduced. The cryocooler is composed of two coolers and a displacer unit. The displacer, which fulfills both phase-modulation and power-transmission roles, is sandwiched between the two coolers, so the expansion work from the first-stage cooler can be used by the second-stage cooler. The expansion work of the second-stage cooler is much lower than the total input work, so recovering it is unnecessary. Analyses and experiments were conducted to verify the proposed configuration. At an input power of 1249 W, the cascade cryocooler achieved its highest overall relative Carnot efficiency of 37.2% and a cooling power of 371 W at 130 K. Compared with the performance of a traditional pulse-tube cryocooler, the cooling efficiency was improved by 32%.

  7. Progress of High Efficiency Centrifugal Compressor Simulations Using TURBO

    NASA Technical Reports Server (NTRS)

    Kulkarni, Sameer; Beach, Timothy A.

    2017-01-01

    Three-dimensional, time-accurate, and phase-lagged computational fluid dynamics (CFD) simulations of the High Efficiency Centrifugal Compressor (HECC) stage were generated using the TURBO solver. Changes to the TURBO Parallel Version 4 source code were made in order to properly model the no-slip boundary condition along the spinning hub region for centrifugal impellers. A startup procedure was developed to generate a converged flow field in TURBO. This procedure initialized computations on a coarsened mesh generated by the Turbomachinery Gridding System (TGS) and relied on a method of systematically increasing wheel speed and backpressure. Baseline design-speed TURBO results generally overpredicted total pressure ratio, adiabatic efficiency, and the choking flow rate of the HECC stage as compared with the design-intent CFD results of Code Leo. Including diffuser fillet geometry in the TURBO computation resulted in a 0.6 percent reduction in the choking flow rate and led to a better match with design-intent CFD. Diffuser fillets reduced annulus cross-sectional area but also reduced corner separation, and thus blockage, in the diffuser passage. It was found that the TURBO computations are somewhat insensitive to inlet total pressure changing from the TURBO default inlet pressure of 14.7 pounds per square inch (101.35 kilopascals) down to 11.0 pounds per square inch (75.83 kilopascals), the inlet pressure of the component test. Off-design tip clearance was modeled in TURBO in two computations: one in which the blade tip geometry was trimmed by 12 mils (0.3048 millimeters), and another in which the hub flow path was moved to reflect a 12-mil axial shift in the impeller hub, creating a step at the hub. The one-dimensional results of these two computations indicate non-negligible differences between the two modeling approaches.

  8. Development of an efficient anaerobic co-digestion process for garbage, excreta, and septic tank sludge to create a resource recycling-oriented society.

    PubMed

    Sun, Zhao-Yong; Liu, Kai; Tan, Li; Tang, Yue-Qin; Kida, Kenji

    2017-03-01

    In order to develop a resource recycling-oriented society, an efficient anaerobic co-digestion process for garbage, excreta and septic tank sludge was studied, based on the quantity of each biomass waste type discharged in Ooki machi, Japan. The anaerobic digestion characteristics of garbage, excreta and 5-fold condensed septic tank sludge (hereafter called condensed sludge) were determined separately. In single-stage mesophilic digestion, the excreta, with their lower C/N ratio, yielded less biogas and accumulated more volatile fatty acids (VFAs). On the other hand, garbage achieved a significantly higher volatile total solid (VTS) digestion efficiency as well as biogas yield under thermophilic digestion. Thus, a two-stage anaerobic co-digestion process consisting of thermophilic liquefaction and mesophilic digestion phases was proposed. In the thermophilic liquefaction of mixed condensed sludge and household garbage (wet mass ratio of 2.2:1), a maximum VTS loading rate of 24 g/L/d was achieved. In the mesophilic digestion of mixed liquefied material and excreta (wet mass ratio of 1:1), biogas yield reached approximately 570 ml/g-VTS fed, with a methane content of 55% at a VTS loading rate of 1.0 g/L/d. The performance of the two-stage process was evaluated by comparing it with a single-stage process in which the biomass wastes were treated separately. Biogas production by the two-stage process was found to increase by approximately 22.9%. These results demonstrate the effectiveness of a two-stage anaerobic co-digestion process in enhancing biogas production. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. A Novel Design Approach for Self-Crack-Healing Structural Ceramics with 3D Networks of Healing Activator.

    PubMed

    Osada, Toshio; Kamoda, Kiichi; Mitome, Masanori; Hara, Toru; Abe, Taichi; Tamagawa, Yuki; Nakao, Wataru; Ohmura, Takahito

    2017-12-19

    Self-crack-healing by oxidation of a pre-incorporated healing agent is an essential property of high-temperature structural ceramics for components with stringent safety requirements, such as turbine blades in aircraft engines. Here, we report a new approach to self-healing design containing a 3D network of a healing activator, based on insight gained by clarifying the healing mechanism. We demonstrate that addition of a small amount of an activator, typically doped MnO localised on the fracture path and selected by appropriate thermodynamic calculation, accelerates healing by a factor of more than 6,000 and significantly lowers the required reaction temperature. The activator on the fracture path enables rapid filling of the fracture gap by generating mobile supercooled melts, thus delivering oxygen efficiently to the healing agent. Furthermore, the activator promotes crystallisation of the melts and forms a mechanically strong healing oxide. We also clarified that the healing mechanism can be divided into an initial oxidation stage followed by two further stages. By analogy with bone healing, we named these the inflammation, repair, and remodelling stages, respectively. Our design strategy can be applied to develop new lightweight, self-healing ceramics suitable for use in high- or low-pressure turbine blades in aircraft engines.

  10. A graphical user interface for infant ERP analysis.

    PubMed

    Kaatiala, Jussi; Yrttiaho, Santeri; Forssman, Linda; Perdue, Katherine; Leppänen, Jukka

    2014-09-01

    Recording of event-related potentials (ERPs) is one of the best-suited technologies for examining brain function in human infants. Yet the existing software packages are not optimized for the unique requirements of analyzing artifact-prone ERP data from infants. We developed a new graphical user interface that enables an efficient implementation of a two-stage approach to the analysis of infant ERPs. In the first stage, video records of infant behavior are synchronized with ERPs at the level of individual trials to reject epochs with noncompliant behavior and other artifacts. In the second stage, the interface calls MATLAB and EEGLAB (Delorme & Makeig, Journal of Neuroscience Methods 134(1):9-21, 2004) functions for further preprocessing of the ERP signal itself (i.e., filtering, artifact removal, interpolation, and rereferencing). Finally, methods are included for data visualization and analysis by using bootstrapped group averages. Analyses of simulated and real EEG data demonstrated that the proposed approach can be effectively used to establish task compliance, remove various types of artifacts, and perform representative visualizations and statistical comparisons of ERPs. The interface is available for download from http://www.uta.fi/med/icl/methods/eeg.html in a format that is widely applicable to ERP studies with special populations and open for further editing by users.
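
    The two analysis stages described above can be sketched on toy data (the behavior codes, amplitude threshold, and artifact model here are invented for illustration and are not the interface's actual criteria):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy epochs: 20 trials x 100 samples of EEG (microvolts); three trials
# receive a large simulated artifact burst.
epochs = rng.normal(0.0, 5.0, (20, 100))
epochs[[2, 7, 11], 40:60] += 300.0

# Stage 1: drop trials flagged during video coding (hypothetical codes,
# e.g. the infant looked away on trials 5 and 7).
compliant = np.ones(20, dtype=bool)
compliant[[5, 7]] = False

# Stage 2: signal-level artifact rejection via a peak-to-peak criterion.
peak_to_peak = epochs.max(axis=1) - epochs.min(axis=1)
clean = compliant & (peak_to_peak < 100.0)

erp = epochs[clean].mean(axis=0)   # average ERP over the surviving trials
print(int(clean.sum()))            # 16 trials survive both stages
```

    The real interface performs the behavioral stage interactively against synchronized video and delegates the signal stage to EEGLAB functions; the point of the sketch is only the two-gate structure.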

  11. Influence Function Learning in Information Diffusion Networks.

    PubMed

    Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le

    2014-06-01

    Can we learn the influence of a set of people in a social network from cascades of information diffusion? This question is often addressed by a two-stage approach: first learn a diffusion model, and then calculate the influence based on the learned model. Thus, the success of this approach relies heavily on the correctness of the diffusion model which is hard to verify for real world data. In this paper, we exploit the insight that the influence functions in many diffusion models are coverage functions, and propose a novel parameterization of such functions using a convex combination of random basis functions. Moreover, we propose an efficient maximum likelihood based algorithm to learn such functions directly from cascade data, and hence bypass the need to specify a particular diffusion model in advance. We provide both theoretical and empirical analysis for our approach, showing that the proposed approach can provably learn the influence function with low sample complexity, be robust to the unknown diffusion models, and significantly outperform existing approaches in both synthetic and real world data.
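
    The representational idea, influence expressed as a convex combination of random coverage (reachability) basis functions, can be sketched without the learning step (the basis construction below is an invented illustration; in the paper the weights would be fit by maximum likelihood from cascade data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_basis = 8, 50

# Each random basis function assigns every node a random reach set; its
# value on a seed set is the number of targets reached by at least one seed.
reach = rng.random((n_basis, n_nodes, n_nodes)) < 0.3   # reach[k, u, v]

def coverage(k, seeds):
    return int(reach[k][list(seeds)].any(axis=0).sum())

# Influence = convex combination of the basis coverage functions.
w = rng.random(n_basis)
w /= w.sum()

def influence(seeds):
    return sum(w[k] * coverage(k, seeds) for k in range(n_basis))

# Coverage functions are monotone and submodular, and convex combinations
# preserve both properties, so the parameterization remains a valid
# influence function regardless of the (unknown) diffusion model.
print(influence({0}) <= influence({0, 1}))  # True: monotone in the seed set
```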

  12. Novel 3-D free-form surface profilometry for reverse engineering

    NASA Astrophysics Data System (ADS)

    Chen, Liang-Chia; Huang, Zhi-Xue

    2005-01-01

    This article proposes an innovative 3-D surface contouring approach for automatic and accurate free-form surface reconstruction using a sensor integration concept. The study addresses a critical problem in the accurate measurement of free-form surfaces by developing an automatic reconstruction approach. Unacceptable measurement accuracy mainly results from inadequate measuring strategies, which yield inaccurate digitised data and costly post-processing in Reverse Engineering (RE). This article therefore aims to develop automatic digitising strategies that ensure surface reconstruction efficiency as well as accuracy. The developed approach consists of two main stages, namely rapid shape identification (RSI) and automated laser scanning (ALS), which together complete the 3-D surface profilometry. The approach effectively exploits on-line geometric information to evaluate the degree to which user-defined digitising accuracy is satisfied within a triangular topological patch. An industrial case study was used to demonstrate the feasibility of the approach.

  13. Treatment of Ammonia Nitrogen Wastewater in Low Concentration by Two-Stage Ozonization.

    PubMed

    Luo, Xianping; Yan, Qun; Wang, Chunying; Luo, Caigui; Zhou, Nana; Jian, Chensheng

    2015-09-23

    Ammonia nitrogen wastewater (about 100 mg/L) was treated by a two-stage ozone oxidation method. The effects of ozone flow rate and initial pH on ammonia removal were studied, and the mechanism of ammonia nitrogen removal by ozone oxidation was discussed. After the primary stage of ozone oxidation, the ammonia removal efficiency reached 59.32% and the pH decreased to 6.63 under conditions of 1 L/min ozone flow rate and initial pH 11. The removal efficiency then exceeded 85% after the second stage (the residual ammonia concentration was below 15 mg/L), meaning the treated wastewater met the national discharge standards of China. In addition, a mechanism of ammonia removal was proposed based on the detected oxidation products: ammonia is removed both by direct ozone oxidation and by ·OH oxidation, and is transformed mainly into NO₃(-)-N, to a lesser extent into NO₂(-)-N, and not into N₂.
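
    For stages in series, where the second stage treats the residual from the first, the overall removal is 1 − (1 − e₁)(1 − e₂). A quick consistency check against the figures above (59.32% after stage one, at least 85% overall from 100 mg/L) gives the removal the second ozonation stage must deliver:

```python
def overall_efficiency(e1, e2):
    """Overall removal for two stages in series: stage 2 only sees what
    stage 1 leaves behind."""
    return 1 - (1 - e1) * (1 - e2)

e1 = 0.5932                      # stage-one removal from the abstract
residual = 100 * (1 - e1)        # mg/L of ammonia entering the second stage
e2_needed = 1 - 15 / residual    # stage-two removal to end below 15 mg/L
print(round(residual, 2), round(e2_needed, 3))  # 40.68 0.631
```

    So the second stage needs to remove only about 63% of its (smaller) feed, which is why staging the ozonation is effective.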

  14. Design of a Two-stage High-capacity Stirling Cryocooler Operating below 30K

    NASA Astrophysics Data System (ADS)

    Wang, Xiaotao; Dai, Wei; Zhu, Jian; Chen, Shuai; Li, Haibing; Luo, Ercang

    High-capacity cryocoolers working below 30 K find many applications, such as superconducting motors, superconducting cables and cryopumps. Compared to the GM cryocooler, the Stirling cryocooler achieves higher efficiency and a more compact structure. Because of these obvious advantages, we have designed a two-stage free-piston Stirling cryocooler system driven by a moving-magnet linear compressor with an operating frequency of 40 Hz and a maximum input electric power of 5 kW. The first stage of the cryocooler is designed to operate at liquid-nitrogen temperature and deliver a cooling power of 100 W. The second stage is expected to simultaneously provide a cooling power of 50 W below 30 K. To achieve the best system efficiency, a numerical model based on thermoacoustic theory was developed to optimize the system's operating and structural parameters.

  15. Integration of treatment innovation planning and implementation: strategic process models and organizational challenges.

    PubMed

    Lehman, Wayne E K; Simpson, D Dwayne; Knight, Danica K; Flynn, Patrick M

    2011-06-01

    Sustained and effective use of evidence-based practices in substance abuse treatment services faces both clinical and contextual challenges. Implementation approaches are reviewed that rely on variations of plan-do-study-act (PDSA) cycles, but most emphasize conceptual identification of core components for system change strategies. A two-phase procedural approach is therefore presented based on the integration of Texas Christian University (TCU) models and related resources for improving treatment process and program change. Phase 1 focuses on the dynamics of clinical services, including stages of client recovery (cross-linked with targeted assessments and interventions), as the foundations for identifying and planning appropriate innovations to improve efficiency and effectiveness. Phase 2 shifts to the operational and organizational dynamics involved in implementing and sustaining innovations (including the stages of training, adoption, implementation, and practice). A comprehensive system of TCU assessments and interventions for client and program-level needs and functioning are summarized as well, with descriptions and guidelines for applications in practical settings. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  16. Efficiency and quality of care in nursing homes: an Italian case study.

    PubMed

    Garavaglia, Giulia; Lettieri, Emanuele; Agasisti, Tommaso; Lopez, Silvano

    2011-03-01

    This study investigates efficiency and quality of care in nursing homes. By means of Data Envelopment Analysis (DEA), the efficiency of 40 nursing homes delivering their services in the north-western area of the Lombardy Region was assessed over a 3-year period (2005-2007). Lombardy is a very particular setting, since it is the only region in Italy where the healthcare industry is organised as a quasi-market, in which the public authority buys health and nursing services from independent providers, establishing a reimbursement system for this purpose. The analysis is conducted by generating bootstrapped DEA efficiency scores for each nursing home (stage one) and then regressing those scores on explanatory variables (stage two). Our DEA model employed two input variables (costs for health and nursing services and costs for residential services) and three output variables (case mix, extra nursing hours and residential charges). In the second-stage analysis, Tobit regressions and Kruskal-Wallis tests were applied to the efficiency scores to identify the factors that affect efficiency: (a) ownership (private nursing homes outperform their public counterparts); and (b) the capability to implement strategies for labour and nursing cost containment, since efficiency depends heavily on the alignment of costs to the public reimbursement system. Lastly, even though the public institutions are less efficient than the private ones, the results suggest that public nursing homes are converging towards their private counterparts, and thus competition is benefiting efficiency.
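
    The two-stage logic (efficiency scores first, then a regression of those scores on explanatory variables) can be sketched on synthetic data. The DEA step is omitted and the paper's Tobit regression is replaced by ordinary least squares, so this is a structural sketch only, with invented variable names and effect sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40                                      # one score per nursing home

# Hypothetical explanatory variables for the second stage.
ownership = rng.integers(0, 2, n)           # 1 = private, 0 = public
cost_alignment = rng.random(n)              # proxy for cost-containment capability

# Synthetic stage-one efficiency scores (in reality: bootstrapped DEA).
scores = 0.6 + 0.1 * ownership + 0.2 * cost_alignment + rng.normal(0, 0.02, n)
scores = np.clip(scores, 0.0, 1.0)          # efficiency scores live in [0, 1]

# Stage two: regress the scores on the explanatory variables (OLS stand-in).
X = np.column_stack([np.ones(n), ownership, cost_alignment])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(np.round(beta, 2))   # [intercept, ownership effect, alignment effect]
```

    The bounded support of the scores is precisely why the authors use Tobit rather than OLS in the real second stage.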

  17. Nanoengineered CIGS thin films for low cost photovoltaics

    NASA Astrophysics Data System (ADS)

    Eldada, Louay; Taylor, Matthew; Sang, Baosheng; McWilliams, Scott; Oswald, Robert; Stanbery, Billy J.

    2008-08-01

    Low cost manufacturing of Cu(In,Ga)Se2 (CIGS) films for high efficiency photovoltaic devices by the innovative Field-Assisted Simultaneous Synthesis and Transfer (FASST®) process is reported. The FASST® process is a two-stage reactive transfer printing method relying on a chemical reaction between two separate precursor films to form CIGS, one deposited on the substrate and the other on a printing plate in the first stage. In the second stage these precursors are brought into intimate contact and rapidly reacted under pressure in the presence of an applied electrostatic field. The method utilizes physical mechanisms characteristic of anodic wafer bonding and rapid thermal annealing, effectively creating a sealed micro-reactor that ensures high material utilization efficiency, direct control of reaction pressure, and a low thermal budget. The use of two independent ink-based or PVD-based nanoengineered precursor thin films provides the benefits of independent composition and flexible deposition-technique optimization, and eliminates pre-reaction prior to the second-stage FASST® synthesis of CIGS. Compositional and structural analysis by XRF, SIMS, SEM and XRD shows that high quality CIGS with large grains on the order of several microns is formed in just a few minutes. Cell efficiencies of 12.2% have been achieved using this method.

  18. CFD modeling of two-stage ignition in a rapid compression machine: Assessment of zero-dimensional approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Gaurav; Raju, Mandhapati P.; Sung, Chih-Jen

    2010-07-15

    In modeling rapid compression machine (RCM) experiments, a zero-dimensional approach is commonly used along with an associated heat loss model. The adequacy of such an approach has not been validated for hydrocarbon fuels. Multi-dimensional effects inside an RCM due to the boundary layer, roll-up vortex, non-uniform heat release, and piston crevice could cause deviation from the zero-dimensional assumption, particularly for hydrocarbons exhibiting two-stage ignition and strong thermokinetic interactions. The objective of this investigation is to assess the adequacy of the zero-dimensional approach in modeling RCM experiments under conditions of two-stage ignition and negative temperature coefficient (NTC) response. Computational fluid dynamics simulations are conducted for n-heptane ignition in an RCM, and the validity of the zero-dimensional approach is assessed through comparisons over the entire NTC region. Results show that the zero-dimensional model based on the approach of 'adiabatic volume expansion' predicts the first-stage ignition delays very well, although quantitative discrepancy in the prediction of the total ignition delays and the pressure rise in the first-stage ignition is noted even when the roll-up vortex is suppressed and a well-defined homogeneous core is retained within the RCM. Furthermore, the discrepancy is pressure dependent and decreases as the compressed pressure is increased. Also, as the ignition response becomes single-stage at higher compressed temperatures, the discrepancy from the zero-dimensional simulations is reduced. Despite some quantitative discrepancy, the zero-dimensional modeling approach is deemed satisfactory from the viewpoint of ignition delay simulation. (author)

  19. Regeneration of pilot-scale ion exchange columns for hexavalent chromium removal.

    PubMed

    Korak, Julie A; Huggins, Richard; Arias-Paic, Miguel

    2017-07-01

    Due to stricter regulations, some drinking water utilities must implement additional treatment processes to meet potable water standards for hexavalent chromium (Cr(VI)), such as the California limit of 10 μg/L. Strong base anion exchange is effective for Cr(VI) removal, but efficient resin regeneration and waste minimization are important for operational, economic and environmental considerations. This study compared multiple regeneration methods on pilot-scale columns on the basis of regeneration efficiency, waste production and salt usage. A conventional 1-Stage regeneration using 2 N sodium chloride (NaCl) was compared to 1) a 2-Stage process with 0.2 N NaCl followed by 2 N NaCl and 2) a mixed regenerant solution with 2 N NaCl and 0.2 N sodium bicarbonate. All methods eluted similar cumulative amounts of chromium with 2 N NaCl. The 2-Stage process eluted an additional 20-30% of chromium in the 0.2 N fraction, but total resin capacity is unaffected if this fraction is recycled to the ion exchange headworks. The 2-Stage approach selectively eluted bicarbonate and sulfate with 0.2 N NaCl before regeneration using 2 N NaCl. Regeneration approach impacted the elution efficiency of both uranium and vanadium. Regeneration without co-eluting sulfate and bicarbonate led to incomplete uranium elution and potential formation of insoluble uranium hydroxides that could lead to long-term resin fouling, decreased capacity and render the resin a low-level radioactive solid waste. Partial vanadium elution occurred during regeneration due to co-eluting sulfate suppressing vanadium release. Waste production and salt usage were comparable for the 1- and 2-Stage regeneration processes with similar operational setpoints with respect to chromium or nitrate elution. Published by Elsevier Ltd.

  20. Design and optimization of a single stage centrifugal compressor for a solar dish-Brayton system

    NASA Astrophysics Data System (ADS)

    Wang, Yongsheng; Wang, Kai; Tong, Zhiting; Lin, Feng; Nie, Chaoqun; Engeda, Abraham

    2013-10-01

    According to the requirements of a solar dish-Brayton system, a centrifugal compressor stage with a minimum total pressure ratio of 5, an adiabatic efficiency above 75% and a surge margin of more than 12% needed to be designed. A single stage, consisting of an impeller, a radial vaned diffuser, a 90° crossover and two rows of axial stators, was chosen to satisfy this system. To achieve the stage performance, an impeller with a 6:1 total pressure ratio and an adiabatic efficiency of 90% was designed, with its preliminary geometry derived from an in-house one-dimensional program. A radial vaned diffuser was applied downstream of the impeller, and two rows of axial stators after the 90° crossover were added to guide the flow into the axial direction. Since jet-wake flow, shock waves and boundary layer separation coexist in the impeller-diffuser region, the radius ratio of the radial diffuser vane inlet to the impeller exit, the diffuser vane inlet blade angle and the number of diffuser vanes were optimized at the design point. The optimized centrifugal compressor stage met the requirements: numerical simulation showed that at the design point the stage adiabatic efficiency was 79.93%, the total pressure ratio was 5.6 and the surge margin was 15%. The performance map including 80%, 90% and 100% design speed is also presented.
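
    The reported design-point figures can be cross-checked with the standard isentropic (adiabatic) efficiency relation η = (π^((γ−1)/γ) − 1)/(T₂/T₁ − 1), assuming air with γ = 1.4 (an assumption, not stated in the abstract):

```python
gamma = 1.4   # assumed ratio of specific heats for air

def adiabatic_efficiency(pr, t_ratio):
    """Isentropic total temperature rise over actual total temperature rise."""
    ideal_rise = pr ** ((gamma - 1) / gamma) - 1
    return ideal_rise / (t_ratio - 1)

# Invert the relation for the reported design point: total pressure
# ratio 5.6 at 79.93% adiabatic efficiency.
pr, eta = 5.6, 0.7993
t_ratio = 1 + (pr ** ((gamma - 1) / gamma) - 1) / eta
print(round(t_ratio, 3))   # implied total temperature ratio across the stage
```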

  1. A Unified Approach to the Study of Chemical Reactions in Freshman Chemistry.

    ERIC Educational Resources Information Center

    Cassen, T.; DuBois, Thomas D.

    1982-01-01

    Provides rationale and objectives for presenting chemical reactions in a unified, logical six-stage approach rather than a piecemeal approach. Stages discussed include: introduction, stable electronic configurations and stable oxidation states, reactions between two free elements, ion transfer/proton transfer reactions, double displacement…

  2. Biobased alkylphenols from lignins via a two-step pyrolysis - Hydrodeoxygenation approach.

    PubMed

    de Wild, P J; Huijgen, W J J; Kloekhorst, A; Chowdari, R K; Heeres, H J

    2017-04-01

    Five technical lignins (three organosolv, Kraft and soda lignin) were depolymerised to produce monomeric biobased aromatics, particularly alkylphenols, by a new two-stage thermochemical approach consisting of dedicated pyrolysis followed by catalytic hydrodeoxygenation (HDO) of the resulting pyrolysis oils. Pyrolysis yielded a mixture of guaiacols, catechols and, optionally, syringols in addition to alkylphenols. HDO with heterogeneous catalysts (Ru/C, CoMo/alumina, phosphided NiMo/C) effectively directed the product mixture towards alkylphenols by, among other routes, demethoxylation. Up to 15 wt% monomeric aromatics, of which 11 wt% were alkylphenols, was obtained (on the lignin intake) with limited solid formation (<3 wt% on lignin oil intake). For comparison, solid Kraft lignin was also directly hydrotreated for simultaneous depolymerisation and deoxygenation, resulting in twice as many alkylphenols. However, the alkylphenol concentration in the product oil is higher for the two-stage approach. Future research should compare direct hydrotreatment and the two-stage approach in more detail through techno-economic assessments. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. International comparisons of the technical efficiency of the hospital sector: panel data analysis of OECD countries using parametric and non-parametric approaches.

    PubMed

    Varabyova, Yauheniya; Schreyögg, Jonas

    2013-09-01

    There is a growing interest in the cross-country comparisons of the performance of national health care systems. The present work provides a comparison of the technical efficiency of the hospital sector using unbalanced panel data from OECD countries over the period 2000-2009. The estimation of the technical efficiency of the hospital sector is performed using nonparametric data envelopment analysis (DEA) and parametric stochastic frontier analysis (SFA). Internal and external validity of findings is assessed by estimating the Spearman rank correlations between the results obtained in different model specifications. The panel-data analyses using two-step DEA and one-stage SFA show that countries, which have higher health care expenditure per capita, tend to have a more technically efficient hospital sector. Whether the expenditure is financed through private or public sources is not related to the technical efficiency of the hospital sector. On the other hand, the hospital sector in countries with higher income inequality and longer average hospital length of stay is less technically efficient. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.
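
    The validity check described above, Spearman rank correlations between the scores obtained from different model specifications, is straightforward to compute; for tie-free scores it is simply the Pearson correlation of the ranks (the toy scores below are invented):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation for tie-free data: Pearson on the ranks."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

# Two "specifications" scoring the same five hospital sectors: the second
# is a monotone transform of the first, so the rankings agree perfectly.
dea = np.array([0.91, 0.74, 0.88, 0.65, 0.80])
sfa = dea ** 0.5
print(spearman(dea, sfa))   # 1.0: identical rankings
```

    A rank correlation near 1 between DEA and SFA scores is what supports the internal validity claim, since the two frontiers need only agree on the ordering of countries, not on the absolute scores.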

  4. Unsupervised algorithms for intrusion detection and identification in wireless ad hoc sensor networks

    NASA Astrophysics Data System (ADS)

    Hortos, William S.

    2009-05-01

    In previous work by the author, parameters across network protocol layers were selected as features in supervised algorithms that detect and identify certain intrusion attacks on wireless ad hoc sensor networks (WSNs) carrying multisensor data. The algorithms improved the residual performance of the intrusion prevention measures provided by any dynamic key-management schemes and trust models implemented among network nodes. The approach of this paper does not train algorithms on the signature of known attack traffic, but, instead, the approach is based on unsupervised anomaly detection techniques that learn the signature of normal network traffic. Unsupervised learning does not require the data to be labeled or to be purely of one type, i.e., normal or attack traffic. The approach can be augmented to add any security attributes and quantified trust levels, established during data exchanges among nodes, to the set of cross-layer features from the WSN protocols. A two-stage framework is introduced for the security algorithms to overcome the problems of input size and resource constraints. The first stage is an unsupervised clustering algorithm which reduces the payload of network data packets to a tractable size. The second stage is a traditional anomaly detection algorithm based on a variation of support vector machines (SVMs), whose efficiency is improved by the availability of data in the packet payload. In the first stage, selected algorithms are adapted to WSN platforms to meet system requirements for simple parallel distributed computation, distributed storage and data robustness. A set of mobile software agents, acting like an ant colony in securing the WSN, are distributed at the nodes to implement the algorithms. The agents move among the layers involved in the network response to the intrusions at each active node and trustworthy neighborhood, collecting parametric values and executing assigned decision tasks. 
This minimizes the need to move large amounts of audit-log data through resource-limited nodes and locates routines closer to that data. Performance of the unsupervised algorithms is evaluated against the network intrusions of black hole, flooding, Sybil and other denial-of-service attacks in simulations of published scenarios. Results for scenarios with intentionally malfunctioning sensors show the robustness of the two-stage approach to intrusion anomalies.
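
    The two-stage framework, payload reduction by unsupervised clustering followed by anomaly detection on the reduced representation, can be sketched with a tiny k-means and a distance threshold standing in for the paper's SVM-based detector (all data, dimensions and thresholds below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Stage 1: cluster normal-traffic feature vectors into a few centroids,
# a minimal k-means standing in for the payload-reduction clustering.
def kmeans(X, k, iters=20):
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - C) ** 2).sum(-1).argmin(axis=1)
        C = np.array([X[labels == j].mean(axis=0) if (labels == j).any() else C[j]
                      for j in range(k)])
    return C

normal = rng.normal(0.0, 1.0, (200, 4))     # features of normal traffic only
centroids = kmeans(normal, k=4)

# Stage 2: anomaly scoring on the reduced representation; a simple
# distance-to-nearest-centroid threshold replaces the one-class SVM.
dists = np.sqrt(((normal[:, None] - centroids) ** 2).sum(-1).min(axis=1))
threshold = np.percentile(dists, 99)

def is_attack(x):
    return np.sqrt(((x - centroids) ** 2).sum(-1).min()) > threshold

print(is_attack(np.full(4, 8.0)))   # a crude flooding-like outlier: True
```

    Because only the signature of normal traffic is learned, anything far from every centroid is flagged, which is the essence of the unsupervised approach described above.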

  5. Gas pollutants removal in a single- and two-stage ejector-venturi scrubber.

    PubMed

    Gamisans, Xavier; Sarrà, Montserrrat; Lafuente, F Javier

    2002-03-29

    The absorption of SO(2) and NH(3) from flue gas into NaOH and H(2)SO(4) solutions, respectively, has been studied using an industrial-scale ejector-venturi scrubber. A statistical methodology is presented to characterise the performance of the scrubber by varying several factors, such as gas pollutant concentration, air flowrate and absorbing solution flowrate. Several venturi tube configurations were assessed, including a two-stage venturi tube. The results showed a strong influence of the liquid scrubbing flowrate on pollutant removal efficiency, whereas the initial pollutant concentration and the gas flowrate had only a slight influence. The use of a two-stage venturi tube considerably improved the absorption efficiency, although it increased energy consumption. The results of this study are applicable to the optimal design of venturi-based absorbers for gaseous pollution control and chemical reactors.

  6. One-Stage versus Two-Stage Repair of Asymmetric Bilateral Cleft Lip: A 20-Year Retrospective Study of Clinical Outcome.

    PubMed

    Chung, Kyung Hoon; Lo, Lun-Jou

    2018-05-01

    Both one- and two-stage approaches have been widely used for patients with asymmetric bilateral cleft lip. There are insufficient long-term outcome data for comparison of these two methods. The purpose of this retrospective study was to compare the clinical outcome over the past 20 years. The senior author's (L.J.L.) database was searched for patients with asymmetric bilateral cleft lip from 1995 to 2015. Qualified patients were divided into two groups: one-stage and two-stage. The postoperative photographs of patients were evaluated subjectively by surgical professionals and laypersons. Ratios of the nasolabial region were calculated for objective analysis. Finally, the revision procedures in the nasolabial area were reviewed. Statistical analyses were performed. A total of 95 consecutive patients were qualified for evaluation. Average follow-up was 13.1 years. A two-stage method was used in 35 percent of the patients, and a one-stage approach was used in 65 percent. All underwent primary nasal reconstruction. Among the satisfaction rating scores, the one-stage repair was rated significantly higher than two-stage reconstruction (p = 0.0001). Long-term outcomes of the two-stage patients and the unrepaired mini-microform deformities were unsatisfactory according to both professional and nonprofessional evaluators. The revision rate was higher in patients with a greater-side complete cleft lip and palate as compared with those without palatal involvement. The results suggested that one-stage repair provided better results with regard to achieving a more symmetric and smooth lip and nose after primary reconstruction. The revision rate was slightly higher in the two-stage patient group. Therapeutic, III.

  7. Two stage sorption of sulfur compounds

    DOEpatents

    Moore, William E.

    1992-01-01

    A two stage method for reducing the sulfur content of exhaust gases is disclosed. Alkali- or alkaline-earth-based sorbent is totally or partially vaporized and introduced into a sulfur-containing gas stream. The activated sorbent can be introduced in the reaction zone or the exhaust gases of a combustor or a gasifier. High efficiencies of sulfur removal can be achieved.

  8. Multipass OPCPA system at 100 kHz pumped by a CPA-free solid-state amplifier.

    PubMed

    Ahrens, J; Prochnow, O; Binhammer, T; Lang, T; Schulz, B; Frede, M; Morgner, U

    2016-04-18

    We present a compact few-cycle 100 kHz OPCPA system pumped by a CPA-free picosecond Nd:YVO4 solid-state amplifier with all-optical synchronization to an ultra-broadband Ti:sapphire oscillator. This pump approach shows an exceptional conversion efficiency into the second harmonic of almost 78%. Efficient parametric amplification was realized in a two-stage double-pass scheme followed by a chirped-mirror compressor. The amount of superfluorescence was measured by optical cross-correlation. Pulses with a duration of 8.7 fs at energies of 18 µJ are demonstrated. Owing to its peak power of 1.26 GW, this simple OPCPA approach forms an ideal high-repetition-rate driving source for high-order harmonic generation.

  9. Asymmetric Spread of SRBSDV between Rice and Corn Plants by the Vector Sogatella furcifera (Hemiptera: Delphacidae).

    PubMed

    Li, Pei; Li, Fei; Han, Yongqiang; Yang, Lang; Liao, Xiaolan; Hou, Maolin

    2016-01-01

    Plant viruses are mostly transmitted by sucking insects via their piercing behaviors, which may differ with host plant species and developmental stage. We characterized the transmission of a fijivirus, southern rice black-streaked dwarf virus (SRBSDV), by the planthopper vector Sogatella furcifera Horváth (Hemiptera: Delphacidae), between rice and corn plants of varying developmental stages. SRBSDV was transmitted from infected rice to uninfected corn plants as efficiently as between rice plants, whereas it was acquired by S. furcifera nymphs at a much lower rate from infected corn plants than from infected rice plants. We also recorded high mortality of S. furcifera nymphs on corn plants. It is evident that young stages of both the virus donor and recipient plants increased the transmission efficiency of SRBSDV from rice to corn plants. Feeding behaviors of the vector recorded by electrical penetration graph showed that phloem sap ingestion, the behavioral event linked with plant virus acquisition, was impaired on corn plants, which accounts for the high mortality of, and low virus acquisition by, S. furcifera nymphs on corn plants. Our results reveal an asymmetric spread of SRBSDV between its two host plants and the underlying behavioral mechanism, which is of significance for assessing SRBSDV transmission risks and field epidemiology, and for developing integrated management approaches for SRBSDV disease.

  10. Work Optimization Predicts Accretionary Faulting: An Integration of Physical and Numerical Experiments

    NASA Astrophysics Data System (ADS)

    McBeck, Jessica A.; Cooke, Michele L.; Herbert, Justin W.; Maillot, Bertrand; Souloumiac, Pauline

    2017-09-01

    We employ work optimization to predict the geometry of frontal thrusts at two stages of an evolving physical accretion experiment. Faults that produce the largest gains in efficiency, or change in external work per new fault area, ΔWext/ΔA, are considered most likely to develop. The predicted thrust geometry matches within 1 mm of the observed position and within a few degrees of the observed fault dip, for both the first forethrust and backthrust when the observed forethrust is active. The positions of the second backthrust and forethrust that produce >90% of the maximum ΔWext/ΔA also overlap the observed thrusts. The work optimal fault dips are within a few degrees of the fault dips that maximize the average Coulomb stress. Slip gradients along the detachment produce local elevated shear stresses and high strain energy density regions that promote thrust initiation near the detachment. The mechanical efficiency (Wext) of the system decreases at each of the two simulated stages of faulting and resembles the evolution of experimental force. The higher ΔWext/ΔA due to the development of the first pair relative to the second pair indicates that the development of new thrusts may lead to diminishing efficiency gains as the wedge evolves. The numerical estimates of work consumed by fault propagation overlap the range calculated from experimental force data and crustal faults. The integration of numerical and physical experiments provides a powerful approach that demonstrates the utility of work optimization to predict the development of faults.
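
    The selection criterion above reduces to ranking candidate faults by ΔWext/ΔA, the drop in external work per unit of new fault area; a minimal sketch with hypothetical candidate geometries and numbers (not values from the experiments):

```python
def best_fault(candidates):
    """Return the candidate fault with the largest efficiency gain,
    dWext/dA: the drop in external work per unit of new fault area.
    Per the abstract, this candidate is most likely to develop."""
    return max(candidates,
               key=lambda c: (c["Wext_before"] - c["Wext_after"]) / c["area"])

# Hypothetical candidate geometries (work in J, new fault area in m^2):
candidates = [
    {"name": "forethrust_30deg", "Wext_before": 5.0, "Wext_after": 4.2, "area": 0.010},
    {"name": "forethrust_25deg", "Wext_before": 5.0, "Wext_after": 4.4, "area": 0.006},
]
print(best_fault(candidates)["name"])  # gain 100 J/m^2 beats 80 J/m^2
```

    In the study itself, ΔWext is evaluated numerically for many trial fault positions and dips; the dictionary fields here only stand in for those model outputs.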

  11. Temperature-assisted solute focusing with sequential trap/release zones in isocratic and gradient capillary liquid chromatography: Simulation and experiment

    PubMed Central

    Groskreutz, Stephen R.; Weber, Stephen G.

    2016-01-01

    In this work we characterize the development of a method to enhance temperature-assisted on-column solute focusing (TASF) called two-stage TASF. A new instrument was built to implement two-stage TASF, consisting of a linear array of three independent, electronically controlled Peltier devices (thermoelectric coolers, TECs). Samples are loaded onto the chromatographic column with the first two TECs, TEC A and TEC B, cold. In the two-stage TASF approach, TECs A and B are cooled during injection. TEC A is heated following sample loading. At some time following TEC A's temperature rise, TEC B's temperature is increased from the focusing temperature to a temperature matching that of TEC A. Injection bands are thus focused twice on-column: first on the initial TEC, as in single-stage TASF, then refocused on the second, cold TEC. Our goal is to understand the two-stage TASF approach in detail. We have developed a simple yet powerful digital simulation procedure to model the effect of changing temperature in the two focusing zones on retention, band shape, and band spreading. The simulation can predict experimental chromatograms resulting from spatial and temporal temperature programs in combination with isocratic and solvent gradient elution. To assess the two-stage TASF method and the accuracy of the simulation, well-characterized solutes are needed. Thus, retention factors were measured at six temperatures (25–75 °C) at each of twelve mobile phase compositions (0.05–0.60 acetonitrile/water) for homologs of n-alkyl hydroxylbenzoate esters and n-alkyl p-hydroxyphenones. Simulations accurately reflect experimental results in showing that the two-stage approach improves separation quality. For example, two-stage TASF increased sensitivity for a low-retention solute by a factor of 2.2 relative to single-stage TASF and 8.8 relative to isothermal conditions using isocratic elution. Gradient elution results for two-stage TASF were more encouraging. Application of two-stage TASF increased peak height for the least retained solute in the test mixture by a factor of 3.2 relative to single-stage TASF and 22.3 compared to isothermal conditions for an injection four times the column volume. TASF improved resolution and increased peak capacity; for a 12-minute separation, peak capacity increased from 75 under isothermal conditions to 146 using single-stage TASF, and to 185 for two-stage TASF. PMID:27836226
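
    The temperature dependence the simulation exploits can be sketched with a van't Hoff-style retention model, ln k = a + b/T: a cold zone retains (focuses) a solute strongly, and heating releases it. The coefficients and temperatures below are hypothetical placeholders, not the measured values:

```python
import math

def retention_factor(T_kelvin, a=-4.0, b=2000.0):
    """Illustrative van't Hoff retention model, ln k = a + b/T.
    The coefficients a and b are hypothetical stand-ins for the values
    measured over 25-75 C and twelve mobile phase compositions."""
    return math.exp(a + b / T_kelvin)

def zone_velocity(u_mobile, k):
    """Velocity of a solute zone given mobile phase velocity u and
    retention factor k."""
    return u_mobile / (1.0 + k)

# A cold trap zone retains strongly (large k, slow zone velocity) and
# focuses the band; heating the zone shrinks k and releases the band.
k_cold = retention_factor(278.0)   # hypothetical ~5 C focusing zone
k_hot = retention_factor(348.0)    # hypothetical ~75 C release temperature
print(k_cold > k_hot, zone_velocity(1.0, k_cold) < zone_velocity(1.0, k_hot))
```

    Two-stage TASF chains two such trap/release zones in series, so a band released from the first heated zone is caught and refocused by the second, still-cold zone.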

  12. Temperature-assisted solute focusing with sequential trap/release zones in isocratic and gradient capillary liquid chromatography: Simulation and experiment.

    PubMed

    Groskreutz, Stephen R; Weber, Stephen G

    2016-11-25

    In this work we characterize the development of a method to enhance temperature-assisted on-column solute focusing (TASF) called two-stage TASF. A new instrument was built to implement two-stage TASF, consisting of a linear array of three independent, electronically controlled Peltier devices (thermoelectric coolers, TECs). Samples are loaded onto the chromatographic column with the first two TECs, TEC A and TEC B, cold. In the two-stage TASF approach, TECs A and B are cooled during injection. TEC A is heated following sample loading. At some time following TEC A's temperature rise, TEC B's temperature is increased from the focusing temperature to a temperature matching that of TEC A. Injection bands are thus focused twice on-column: first on the initial TEC, as in single-stage TASF, then refocused on the second, cold TEC. Our goal is to understand the two-stage TASF approach in detail. We have developed a simple yet powerful digital simulation procedure to model the effect of changing temperature in the two focusing zones on retention, band shape, and band spreading. The simulation can predict experimental chromatograms resulting from spatial and temporal temperature programs in combination with isocratic and solvent gradient elution. To assess the two-stage TASF method and the accuracy of the simulation, well-characterized solutes are needed. Thus, retention factors were measured at six temperatures (25-75°C) at each of twelve mobile phase compositions (0.05-0.60 acetonitrile/water) for homologs of n-alkyl hydroxylbenzoate esters and n-alkyl p-hydroxyphenones. Simulations accurately reflect experimental results in showing that the two-stage approach improves separation quality. For example, two-stage TASF increased sensitivity for a low-retention solute by a factor of 2.2 relative to single-stage TASF and 8.8 relative to isothermal conditions using isocratic elution. Gradient elution results for two-stage TASF were more encouraging. Application of two-stage TASF increased peak height for the least retained solute in the test mixture by a factor of 3.2 relative to single-stage TASF and 22.3 compared to isothermal conditions for an injection four times the column volume. TASF improved resolution and increased peak capacity; for a 12-min separation, peak capacity increased from 75 under isothermal conditions to 146 using single-stage TASF, and to 185 for two-stage TASF. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. [Technical efficiency of traditional hospitals and public enterprises in Andalusia (Spain)].

    PubMed

    Herrero Tabanera, Luis; Martín Martín, José Jesús; López del Amo González, Ma del Puerto

    2015-01-01

    To assess the technical efficiency of traditional public hospitals, without their own legal identity and subject to administrative law, and that of public enterprise hospitals, with their own legal identities and partly governed by private law, all belonging to the taxpayer-funded health system of Andalusia during the period 2005-2008. The study included the 32 publicly owned hospitals in Andalusia during the period 2005-2008. The method consisted of two stages. In the first stage, the indices of technical efficiency of the hospitals were calculated using Data Envelopment Analysis, and the change in total factor productivity was estimated using the Malmquist index. The results were compared according to perceived quality, and a sensitivity analysis was conducted through an auxiliary model and bootstrapping. In the second stage, a bivariate analysis was performed between hospital efficiency and organization type. Public enterprises were more efficient than traditional hospitals (on average by over 10%) in each of the study years. Nevertheless, a process of convergence was observed between the two types of organizations because, while the efficiency of traditional hospitals increased slightly (by 0.50%) over the study period, the performance of public enterprises declined by over 2%. The possible reasons for the greater efficiency of public enterprises include their greater budgetary and employment flexibility. However, the convergence process observed points to a process of mutual learning that is not necessarily efficient. Copyright © 2014 SESPAS. Published by Elsevier Espana. All rights reserved.
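
    The Malmquist index used in the first stage can be computed from four distance-function values per hospital; a minimal sketch with the standard decomposition into catch-up (efficiency change) and frontier-shift (technical change) terms, using illustrative numbers:

```python
import math

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Malmquist productivity index between periods t and t+1.
    d_a_b is the distance-function (efficiency) value of period-b data
    measured against the period-a frontier. Returns (index, efficiency
    change, technical change); an index above 1 indicates productivity
    growth, and the index factors as ec * tc."""
    ec = d_t1_t1 / d_t_t                                    # catch-up term
    tc = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))   # frontier shift
    return ec * tc, ec, tc
```

    The distance-function inputs would come from the DEA stage (one linear program per hospital per period, plus the two cross-period evaluations); the decomposition then separates each hospital's own efficiency change from movement of the best-practice frontier.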

  14. Solvent system selectivities in countercurrent chromatography using Salicornia gaudichaudiana metabolites as practical example with off-line electrospray mass-spectrometry injection profiling.

    PubMed

    Costa, Fernanda das Neves; Jerz, Gerold; Figueiredo, Fabiana de Souza; Winterhalter, Peter; Leitão, Gilda Guimarães

    2015-03-13

    For the development of an efficient two-stage isolation process for high-speed countercurrent chromatography (HSCCC) with focus on principal metabolites from the ethyl acetate extract of the halophyte plant Salicornia gaudichaudiana, separation selectivities of two different biphasic solvent systems with similar polarities were evaluated using the elution and extrusion approach. Efficiency in isolation of target compounds is determined by the solvent system selectivity and their chronological use in multiple separation steps. The system n-hexane-ethyl acetate-methanol-water (0.5:6:0.5:6, v/v/v/v) resulted in a comprehensive separation of polyphenolic glycosides. The system n-hexane-n-butanol-water (1:1:2, v/v/v) was less universal but was highly efficient in the fractionation of positional isomers such as di-substituted cinnamic acid quinic acid derivatives. Multiple metabolite detection performed on recovered HSCCC tube fractions was done with rapid mass-spectrometry profiling by sequential off-line injections to electrospray mass-spectrometry (ESI-MS/MS). Selective ion traces of metabolites delivered reconstituted preparative HSCCC runs. Molecular weight distribution of target compounds in single HSCCC tube fractions and MS/MS fragment data were available. Chromatographic areas with strong co-elution effects and fractions of pure recoverable compounds were visualized. In total 11 metabolites have been identified and monitored. Result of this approach was a fast isolation protocol for S. gaudichaudiana metabolites using two solvent systems in a strategic sequence. The process could easily be scaled-up to larger lab-scale or industrial recovery. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. [Mechanisms for the increased fertilizer nitrogen use efficiency of rice in wheat-rice rotation system under combined application of inorganic and organic fertilizers].

    PubMed

    Liu, Yi-Ren; Li, Xiang; Yu, Jie; Shen, Qi-Rong; Xu, Yang-Chun

    2012-01-01

    A pot experiment was conducted to study the effects of combined application of organic and inorganic fertilizers on nitrogen uptake by rice and nitrogen supply by soil in a wheat-rice rotation system, and to explore, from a microbiological viewpoint, the mechanisms for the increased fertilizer nitrogen use efficiency of rice under combined fertilization. Compared with applying inorganic fertilizers alone, combined application of organic and inorganic fertilizers decreased soil microbial biomass carbon and nitrogen and soil mineral nitrogen contents before the tillering stage, but increased them significantly from the heading to the filling stage. Under combined fertilization, the dynamics of soil nitrogen supply best matched the dynamics of rice nitrogen uptake and utilization, which promoted nitrogen accumulation in the rice plant, increased rice yield and biomass, and significantly increased the fertilizer nitrogen use efficiency of rice. Combined application of inorganic and organic fertilizers also promoted the propagation of soil microbes; consequently, more mineral nitrogen in soil was immobilized by the microbes at the early growth stage of rice, and the immobilized nitrogen was gradually released at the mid and late growth stages, better satisfying the nitrogen demand of rice across its growth and development stages.

  16. A Two-Stage Probabilistic Approach to Manage Personal Worklist in Workflow Management Systems

    NASA Astrophysics Data System (ADS)

    Han, Rui; Liu, Yingbo; Wen, Lijie; Wang, Jianmin

    The application of workflow scheduling to the management of an individual actor's personal worklist is one area that can bring great improvement to business processes. However, existing deterministic approaches cannot adapt to the dynamics and uncertainties involved in managing personal worklists. To address this issue, this paper proposes a two-stage probabilistic approach that aims to assist actors in flexibly managing their personal worklists. Specifically, in the first stage the approach analyzes each activity instance's continuous probability of satisfying its deadline. Based on this stochastic analysis, in the second stage an innovative scheduling strategy is proposed to minimize the overall deadline-violation cost for an actor's personal worklist. Simultaneously, the strategy recommends to the actor a feasible worklist of activity instances that meet the required bottom line of successful execution. The effectiveness of the approach is evaluated in a real-world workflow management system and with large-scale simulation experiments.
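
    The two stages can be illustrated with a toy model, assuming normally distributed activity durations and back-to-back execution (the paper's probability model and scheduling strategy are richer than this sketch): stage one estimates each instance's probability of meeting its deadline; stage two searches for the ordering that minimizes expected deadline-violation cost.

```python
import math
from itertools import permutations

def p_meet_deadline(start, mu, sigma, deadline):
    """Stage 1: probability that a task started at `start`, with normally
    distributed duration (mean mu, std sigma), finishes by `deadline`."""
    z = (deadline - start - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_violation_cost(order, tasks):
    """Run tasks back-to-back in `order`; each task is (mu, sigma,
    deadline, cost). Start times are approximated by summing mean
    durations, so this is a sketch, not an exact convolution."""
    t = total = 0.0
    for i in order:
        mu, sigma, deadline, cost = tasks[i]
        total += cost * (1.0 - p_meet_deadline(t, mu, sigma, deadline))
        t += mu
    return total

def schedule(tasks):
    """Stage 2: pick the ordering minimizing expected deadline-violation
    cost (brute force over permutations; fine for small worklists)."""
    return min(permutations(range(len(tasks))),
               key=lambda o: expected_violation_cost(o, tasks))
```

    With two equal-cost tasks where one has a tight deadline, the tight-deadline task is scheduled first, as expected.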

  17. Super Boiler: Packed Media/Transport Membrane Boiler Development and Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liss, William E; Cygan, David F

    2013-04-17

    Gas Technology Institute (GTI) and Cleaver-Brooks developed a new gas-fired steam generation system, the Super Boiler, for increased energy efficiency, reduced equipment size, and reduced emissions. The system consists of a firetube boiler with a unique staged furnace design, a two-stage burner system with engineered internal recirculation and inter-stage cooling integral to the boiler, a unique convective pass design with extended internal surfaces for enhanced heat transfer, and a novel integrated heat recovery system to extract maximum energy from the flue gas. With these combined innovations, the Super Boiler technical goals were set at 94% HHV fuel efficiency, operation on natural gas with <5 ppmv NOx (referenced to 3% O2), and a size 50% smaller than conventional boilers of similar steam output. To demonstrate these technical goals, the project culminated in the industrial demonstration of this new high-efficiency technology on a 300 HP boiler at Clement Pappas, a juice bottler located in Ontario, California. The Super Boiler combustion system is based on two-stage combustion, which combines air staging, internal flue gas recirculation, inter-stage cooling, and unique fuel-air mixing technology to achieve low emissions, rather than the external flue gas recirculation most commonly used today. The two-stage combustion provides lower emissions because the integrated design of the boiler and combustion system permits precise control of peak flame temperatures in both primary and secondary stages of combustion. To reduce equipment size, the Super Boiler's dual furnace design increases radiant heat transfer to the furnace walls, allowing shorter overall furnace length, and also employs convective tubes with extended surfaces that increase heat transfer by up to 18-fold compared to conventional bare tubes. In this way, a two-pass boiler can achieve the same efficiency as a traditional three- or four-pass firetube boiler design.
The Super Boiler is consequently up to 50% smaller in footprint, has a smaller diameter, and is up to 50% lower in weight, resulting in a very compact design with reduced material and labor costs, while requiring less boiler room floor space. For enhanced energy efficiency, the heat recovery system uses a transport membrane condenser (TMC), a humidifying air heater (HAH), and a split-stage economizer to extract maximum energy from the flue gas. The TMC is a new innovation that pulls a major portion of the water vapor produced by the combustion process from the flue gases, along with its sensible and latent heat. This results in nearly 100% transfer of heat to the boiler feed water. The HAH improves the effectiveness of the TMC, particularly in steam systems that do not have a large amount of cold makeup water. In addition, the HAH humidifies the combustion air to reduce NOx formation. The split-stage economizer preheats boiler feed water in the same way as a conventional economizer, but extracts more heat by working in tandem with the TMC and HAH to reduce flue gas temperature. These components are designed to work synergistically to achieve energy efficiencies of 92-94%, which is 10-15% higher than today's typical firetube boilers.

  18. Advanced Manned Launch System (AMLS) study

    NASA Technical Reports Server (NTRS)

    Ehrlich, Carl F., Jr.; Potts, Jack; Brown, Jerry; Schell, Ken; Manley, Mary; Chen, Irving; Earhart, Richard; Urrutia, Chuck; Randolph, Ray; Morris, Jim

    1992-01-01

    To assure national leadership in space operations and exploration in the future, NASA must be able to provide cost-effective and operationally efficient space transportation. Several NASA studies and the joint NASA/DoD Space Transportation Architecture Studies (STAS) have shown the need for a multi-vehicle space transportation system with designs driven by enhanced operations and low costs. NASA is currently studying an advanced manned launch system (AMLS) approach to transport crew and cargo to Space Station Freedom. Several single- and multiple-stage systems, from air-breathing to all-rocket concepts, are being examined in a series of studies as potential replacements for the Space Shuttle launch system in the 2000-2010 time frame. Rockwell International Corporation, under contract to the NASA Langley Research Center, has analyzed a two-stage all-rocket concept to determine whether this class of vehicles is appropriate for the AMLS function. The results of the pre-phase A study are discussed.

  19. High-Payoff Space Transportation Design Approach with a Technology Integration Strategy

    NASA Technical Reports Server (NTRS)

    McCleskey, C. M.; Rhodes, R. E.; Chen, T.; Robinson, J.

    2011-01-01

    A general architectural design sequence is described to create a highly efficient, operable, and supportable design that achieves an affordable, repeatable, and sustainable transportation function. The paper covers the following aspects of this approach in more detail: (1) vehicle architectural concept considerations (including important strategies for greater reusability); (2) vehicle element propulsion system packaging considerations; (3) vehicle element functional definition; (4) external ground servicing and access considerations; and, (5) simplified guidance, navigation, flight control and avionics communications considerations. Additionally, a technology integration strategy is forwarded that includes: (a) ground and flight test prior to production commitments; (b) parallel stage propellant storage, such as concentric-nested tanks; (c) high thrust, LOX-rich, LOX-cooled first stage earth-to-orbit main engine; (d) non-toxic, day-of-launch-loaded propellants for upper stages and in-space propulsion; (e) electric propulsion and aero stage control.

  20. The role of environmental variables on the efficiency of water and sewerage companies: a case study of Chile.

    PubMed

    Molinos-Senante, María; Sala-Garrido, Ramón; Lafuente, Matilde

    2015-07-01

    This paper evaluates the efficiency of water and sewerage companies (WaSCs) by introducing the lack of service quality as undesirable outputs. It also investigates whether the production frontier of WaSCs overall exhibits constant returns to scale (CRS) or variable returns to scale (VRS) by using two different data envelopment analysis models. In a second-stage analysis, we study the influence of exogenous and endogenous variables on WaSC performance by applying non-parametric hypothesis tests. In a pioneering approach, the analysis covers 18 WaSCs from Chile, representing about 90% of the Chilean urban population. The results show that the technology of the sample studied is overall characterized by CRS. Peak water demand, the percentage of external workers, and the percentage of unbilled water are the factors affecting the efficiency of WaSCs. From a policy perspective, the integration of undesirable outputs into the assessment of WaSC performance is crucial so as not to penalize companies that provide high service quality to customers.

  1. Economic savings and costs of periodic mammographic screening in the workplace.

    PubMed

    Griffiths, R I; McGrath, M M; Vogel, V G

    1996-03-01

    This article discusses the costs and benefits of mammographic screening in the workplace. The cost of mammography itself and of diagnostic work-up are two of the largest costs involved. Therefore, the most efficient approach to providing mammography depends on the number of employees receiving mammography; and the diagnostic accuracy of mammography and underlying incidence of breast cancer in the screened population strongly influence the number of suspicious mammograms that are not associated with breast cancer. The health benefit of mammographic screening is due to reduced mortality and morbidity through early detection and more effective treatment, which may also result in economic savings if early-stage cancer is less expensive to treat. However, the total lifetime cost of treating early-stage cancer may be greater than treating late-stage cancer because of improved survival of early-stage patients. Thus, although periodic mammographic screening is not likely to result in overall economic savings, in many populations of working-age women, especially those with identifiable risk factors, screening is cost-effective because the expenditure required to save a year of life through early detection of breast cancer is low compared to other types of health services for which employers commonly pay.

  2. Hierarchical animal movement models for population-level inference

    USGS Publications Warehouse

    Hooten, Mevin B.; Buderman, Frances E.; Brost, Brian M.; Hanks, Ephraim M.; Ivans, Jacob S.

    2016-01-01

    New methods for modeling animal movement based on telemetry data are developed regularly. With advances in telemetry capabilities, animal movement models are becoming increasingly sophisticated. Despite a need for population-level inference, animal movement models are still predominantly developed for individual-level inference. Most efforts to upscale the inference to the population level are either post hoc or complicated enough that only the developer can implement the model. Hierarchical Bayesian models provide an ideal platform for the development of population-level animal movement models but can be challenging to fit due to computational limitations or extensive tuning required. We propose a two-stage procedure for fitting hierarchical animal movement models to telemetry data. The two-stage approach is statistically rigorous and allows one to fit individual-level movement models separately, then resample them using a secondary MCMC algorithm. The primary advantages of the two-stage approach are that the first stage is easily parallelizable and the second stage is completely unsupervised, allowing for an automated fitting procedure in many cases. We demonstrate the two-stage procedure with two applications of animal movement models. The first application involves a spatial point process approach to modeling telemetry data, and the second involves a more complicated continuous-time discrete-space animal movement model. We fit these models to simulated data and real telemetry data arising from a population of monitored Canada lynx in Colorado, USA.
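
    A toy sketch of the two-stage logic, not the authors' implementation: stage one fits each individual separately (embarrassingly parallel), storing posterior draws; stage two is an unsupervised secondary sampler that resamples those stored draws while updating a population-level mean. The normal approximations here are illustrative assumptions.

```python
import random
import statistics

def stage1_fit(observations, n_draws=2000, rng=None):
    """Stage 1 (parallelizable): fit one individual's movement parameter
    separately. Here a per-animal mean gets a normal posterior
    approximation; a real model would run MCMC per animal."""
    rng = rng or random.Random(0)
    mu = statistics.fmean(observations)
    se = statistics.stdev(observations) / len(observations) ** 0.5
    return [rng.gauss(mu, se) for _ in range(n_draws)]

def stage2_population(individual_draws, n_iter=2000, rng=None):
    """Stage 2 (unsupervised): a secondary sampler that resamples the
    stored stage-1 draws and updates a population-level mean."""
    rng = rng or random.Random(1)
    pop = []
    for _ in range(n_iter):
        # one stored draw per individual, then a conjugate-style update
        theta = [rng.choice(draws) for draws in individual_draws]
        pop.append(rng.gauss(statistics.fmean(theta),
                             statistics.stdev(theta) / len(theta) ** 0.5))
    return pop
```

    Because stage two only touches the saved draws, the expensive individual-level fits never need to be rerun when the population model changes.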

  3. Parallel algorithms for large-scale biological sequence alignment on Xeon-Phi based clusters.

    PubMed

    Lan, Haidong; Chan, Yuandong; Xu, Kai; Schmidt, Bertil; Peng, Shaoliang; Liu, Weiguo

    2016-07-19

    Computing alignments between two or more sequences is a common operation frequently performed in computational molecular biology. The continuing growth of biological sequence databases establishes the need for their efficient parallel implementation on modern accelerators. This paper presents new approaches to high-performance biological sequence database scanning with the Smith-Waterman algorithm and to the first stage of progressive multiple sequence alignment based on the ClustalW heuristic on a Xeon Phi-based compute cluster. Our approach uses a three-level parallelization scheme to take full advantage of the compute power available on this type of architecture; i.e. cluster-level data parallelism, thread-level coarse-grained parallelism, and vector-level fine-grained parallelism. Furthermore, we re-organize the sequence datasets and use Xeon Phi shuffle operations to improve I/O efficiency. Evaluations show that our method achieves a peak overall performance of up to 220 GCUPS for scanning real protein sequence databanks on a single node consisting of two Intel E5-2620 CPUs and two Intel Xeon Phi 7110P cards. It also exhibits good scalability in terms of sequence length and size, and number of compute nodes for both database scanning and multiple sequence alignment. Furthermore, the achieved performance is highly competitive in comparison to optimized Xeon Phi and GPU implementations. Our implementation is available at https://github.com/turbo0628/LSDBS-mpi .
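
    For reference, the recurrence at the core of the database scan is compact: a plain serial Smith-Waterman local-alignment score with a linear gap penalty (the paper's contribution is the three-level parallelization wrapped around this kernel, and the scoring parameters here are arbitrary):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment score with a linear gap penalty.
    H[i][j] is the best score of any local alignment ending at a[i-1],
    b[j-1]; the 0 in the max lets alignments restart anywhere."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # match/mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best
```

    A database scan evaluates this kernel for one query against millions of subject sequences, which is why GCUPS (billions of cell updates per second) is the standard throughput metric.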

  4. Staged combustion with piston engine and turbine engine supercharger

    DOEpatents

    Fischer, Larry E [Los Gatos, CA; Anderson, Brian L [Lodi, CA; O'Brien, Kevin C [San Ramon, CA

    2006-05-09

    A combustion engine method and system provides increased fuel efficiency and reduces polluting exhaust emissions by burning fuel in a two-stage combustion system. Fuel is combusted in a piston engine in a first stage producing piston engine exhaust gases. Fuel contained in the piston engine exhaust gases is combusted in a second stage turbine engine. Turbine engine exhaust gases are used to supercharge the piston engine.

  5. Staged combustion with piston engine and turbine engine supercharger

    DOEpatents

    Fischer, Larry E [Los Gatos, CA; Anderson, Brian L [Lodi, CA; O'Brien, Kevin C [San Ramon, CA

    2011-11-01

    A combustion engine method and system provides increased fuel efficiency and reduces polluting exhaust emissions by burning fuel in a two-stage combustion system. Fuel is combusted in a piston engine in a first stage producing piston engine exhaust gases. Fuel contained in the piston engine exhaust gases is combusted in a second stage turbine engine. Turbine engine exhaust gases are used to supercharge the piston engine.

  6. Advancing Early Detection of Autism Spectrum Disorder by Applying an Integrated Two-Stage Screening Approach

    ERIC Educational Resources Information Center

    Oosterling, Iris J.; Wensing, Michel; Swinkels, Sophie H.; van der Gaag, Rutger Jan; Visser, Janne C.; Woudenberg, Tim; Minderaa, Ruud; Steenhuis, Mark-Peter; Buitelaar, Jan K.

    2010-01-01

    Background: Few field trials exist on the impact of implementing guidelines for the early detection of autism spectrum disorders (ASD). The aims of the present study were to develop and evaluate a clinically relevant integrated early detection programme based on the two-stage screening approach of Filipek et al. (1999), and to expand the evidence…

  7. Investigation of a 4.5-Inch-Mean-Diameter Two-Stage Axial-Flow Turbine Suitable for Auxiliary Power Drives

    NASA Technical Reports Server (NTRS)

    Wong, Robert Y.; Monroe, Daniel E.

    1959-01-01

    The design and experimental investigation of a 4.5-inch-mean-diameter two-stage turbine are presented herein and used to study the effect of size on the efficiency of turbines in the auxiliary power drive class. The results of the experimental investigation indicated that design specific work was obtained at design speed at a total-to-static efficiency of 0.639. At design pressure ratio, design static-pressure distribution through the turbine was obtained with an equivalent specific work output of 33.2 Btu per pound and an efficiency of 0.656. It was found that, in the design of turbines in the auxiliary power drive class, Reynolds number plays an important part in the selection of the design efficiency. Comparisons with theoretical efficiencies based on a loss coefficient and velocity diagrams are presented. Close agreement was obtained between theory and experiment when the loss coefficient was adjusted for changes in Reynolds number to the -1/5 power.
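
    The Reynolds-number correction mentioned in the last sentence amounts to scaling the loss coefficient by Re to the -1/5 power; a one-line sketch with hypothetical reference values:

```python
def scaled_loss_coefficient(K_ref, Re_ref, Re):
    """Scale a loss coefficient from a reference Reynolds number,
    assuming losses vary as Re**(-1/5) as stated in the abstract.
    K_ref and Re_ref below are hypothetical, not the report's data."""
    return K_ref * (Re_ref / Re) ** 0.2

# Small auxiliary-power turbines run at low Reynolds number, so the
# loss coefficient (and the design-efficiency penalty) grows:
print(round(scaled_loss_coefficient(0.08, 1e6, 1e5), 4))  # → 0.1268
```

    This is why the abstract notes that Reynolds number must enter the selection of design efficiency for turbines in this small size class.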

  8. Orbital Transfer Vehicle Engine Technology High Velocity Ratio Diffusing Crossover

    NASA Technical Reports Server (NTRS)

    Lariviere, Brian W.

    1992-01-01

High-speed, high-efficiency, high-head-rise multistage pumps require continuous-passage diffusing crossovers to effectively convey the pumped fluid from the exit of one impeller to the inlet of the next. On Rocketdyne's Orbital Transfer Vehicle (OTV), the MK49-F, a three-stage high-pressure liquid hydrogen turbopump, utilizes a 6.23 velocity-ratio diffusing crossover. This velocity ratio approaches the diffusion limits for stable and efficient flow over the operating conditions required by the OTV system. The design of the high-velocity-ratio diffusing crossover was based on advanced analytical techniques anchored by previous tests of stationary two-dimensional diffusers with steady flow. To validate the design and the analytical techniques, tests were required with the unsteady whirling characteristics produced by an impeller. A tester was designed and fabricated using a 2.85-times scale model of the MK49-F turbopump's first stage, including the inducer, impeller, and diffusing crossover. Water and air tests were completed to evaluate the effects of large-scale turbulence, non-uniform velocity, and non-steady velocity on the pump and crossover head and efficiency. Suction performance tests from 80 percent to 124 percent of design flow were completed in water to assess these pump characteristics. Pump and diffuser performance from the water and air tests were compared with actual MK49-F test data in liquid hydrogen.

  9. A study of power generation from a low-cost hydrokinetic energy system

    NASA Astrophysics Data System (ADS)

    Davila Vilchis, Juana Mariel

    The kinetic energy in river streams, tidal currents, or other artificial water channels has been used as a feasible source of renewable power through different conversion systems. Thus, hydrokinetic energy conversion systems are attracting worldwide interest as another form of distributed alternative energy. Because these systems are still in early stages of development, the basic approaches need significant research. The main challenges are not only to have efficient systems, but also to convert energy more economically so that the cost-benefit analysis drives the growth of this alternative energy form. One way to view this analysis is in terms of the energy conversion efficiency per unit cost. This study presents a detailed assessment of a prototype hydrokinetic energy system along with power output costs. This experimental study was performed using commercial low-cost blades of 20 in diameter inside a tank with water flow speed up to 1.3 m/s. The work was divided into two stages: (a) a fixed-pitch blade configuration, using a radial permanent magnet generator (PMG), and (b) the same hydrokinetic turbine, with a variable-pitch blade and an axial-flux PMG. The results indicate that even though the efficiency of a simple blade configuration is not high, the power coefficient is in the range of other, more complicated designs/prototypes. Additionally, the low manufacturing and operation costs of this system offer an option for low-cost distributed power applications.
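The power-coefficient comparison in this record follows from the standard definition Cp = P / (½ρAv³). A minimal sketch using the abstract's rotor size (20 in ≈ 0.508 m) and flow speed (1.3 m/s); the 50 W output and fresh-water density are assumed figures for illustration only:

```python
import math

def power_coefficient(p_out, rho, diameter, v):
    """Cp: extracted power over the kinetic power passing through
    the rotor disk, 0.5 * rho * A * v^3."""
    area = math.pi * (diameter / 2.0) ** 2
    p_available = 0.5 * rho * area * v ** 3
    return p_out / p_available

# 0.508 m rotor in fresh water (1000 kg/m^3) at 1.3 m/s; the rotor
# size and flow speed are from the abstract, the 50 W is assumed.
cp = power_coefficient(50.0, 1000.0, 0.508, 1.3)
```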

  10. High-Speed 3D Printing of High-Performance Thermosetting Polymers via Two-Stage Curing.

    PubMed

    Kuang, Xiao; Zhao, Zeang; Chen, Kaijuan; Fang, Daining; Kang, Guozheng; Qi, Hang Jerry

    2018-04-01

Design and direct fabrication of high-performance thermosets and composites via 3D printing are highly desirable in engineering applications. Most 3D printed thermosetting polymers to date suffer from poor mechanical properties and low printing speed. Here, a novel ink for high-speed 3D printing of high-performance epoxy thermosets via a two-stage curing approach is presented. The ink containing photocurable resin and thermally curable epoxy resin is used for digital light processing (DLP) 3D printing. After printing, the part is thermally cured at elevated temperature to yield an interpenetrating polymer network epoxy composite, whose mechanical properties are comparable to engineering epoxy. The printing speed is accelerated by the continuous liquid interface production assisted DLP 3D printing method, achieving a printing speed as high as 216 mm h(-1). It is also demonstrated that 3D printed structural electronics can be achieved by combining the 3D printed epoxy composites with infilled silver ink in the hollow channels. The new 3D printing method via two-stage curing combines the attributes of outstanding printing speed, high resolution, low volume shrinkage, and excellent mechanical properties, and provides a new avenue to fabricate 3D thermosetting composites with excellent mechanical properties and high efficiency toward high-performance and functional applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Flexible sequential designs for multi-arm clinical trials.

    PubMed

    Magirr, D; Stallard, N; Jaki, T

    2014-08-30

Adaptive designs that are based on group-sequential approaches have the benefit of being efficient as stopping boundaries can be found that lead to good operating characteristics with test decisions based solely on sufficient statistics. The drawback of these so-called 'pre-planned adaptive' designs is that unexpected design changes are not possible without impacting the error rates. 'Flexible adaptive designs' on the other hand can cope with a large number of contingencies at the cost of reduced efficiency. In this work, we focus on two different approaches for multi-arm multi-stage trials, which are based on group-sequential ideas, and discuss how these 'pre-planned adaptive designs' can be modified to allow for flexibility. We then show how the added flexibility can be used for treatment selection and sample size reassessment and evaluate the impact on the error rates in a simulation study. The results show that an impressive overall procedure can be found by combining a well-chosen pre-planned design with an application of the conditional error principle to allow flexible treatment selection. Copyright © 2014 John Wiley & Sons, Ltd.

  12. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    PubMed

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
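The "two-stage" procedure this record discusses reduces, in its simplest fixed-effect form, to inverse-variance weighting of per-study estimates. A minimal sketch with hypothetical study results (the estimates and standard errors below are illustrative, not from any cited meta-analysis):

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Stage two of a two-stage meta-analysis: pool per-study
    estimates (stage one's output) by inverse-variance weighting."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * b for w, b in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical studies; the most precise study dominates.
pooled, pooled_se = fixed_effect_meta([0.30, 0.10, 0.25], [0.10, 0.05, 0.20])
```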

  13. Three-stage sorption type cryogenic refrigeration systems and methods employing heat regeneration

    NASA Technical Reports Server (NTRS)

    Bard, Steven (Inventor); Jones, Jack A. (Inventor)

    1992-01-01

    A three-stage sorption type cryogenic refrigeration system, each stage containing a fluid having a respectively different boiling point, is presented. Each stage includes a compressor in which a respective fluid is heated to be placed in a high pressure gaseous state. The compressor for that fluid which is heated to the highest temperature is enclosed by the other two compressors to permit heat to be transferred from the inner compressor to the surrounding compressors. The system may include two sets of compressors, each having the structure described above, with the interior compressors of the two sets coupled together to permit selective heat transfer therebetween, resulting in more efficient utilization of input power.

  14. Plasma gasification of refuse derived fuel in a single-stage system using different gasifying agents.

    PubMed

    Agon, N; Hrabovský, M; Chumak, O; Hlína, M; Kopecký, V; Masláni, A; Bosmans, A; Helsen, L; Skoblja, S; Van Oost, G; Vierendeels, J

    2016-01-01

The renewable evolution in the energy industry and the depletion of natural resources are putting pressure on the waste industry to shift towards flexible treatment technologies with efficient materials and/or energy recovery. In this context, a thermochemical conversion method of recent interest is plasma gasification, which is capable of producing syngas from a wide variety of waste streams. The produced syngas can be valorized for both energetic (heat and/or electricity) and chemical (ammonia, hydrogen or liquid hydrocarbons) end-purposes. This paper evaluates the performance of experiments on a single-stage plasma gasification system for the treatment of refuse-derived fuel (RDF) from excavated waste. A comparative analysis of the syngas characteristics and process yields was done for seven cases with different types of gasifying agents (CO2+O2, H2O, CO2+H2O and O2+H2O). The syngas compositions were compared to the thermodynamic equilibrium compositions, and the performance of the single-stage plasma gasification of RDF was compared to that of similar experiments with biomass and to the performance of a two-stage plasma gasification process with RDF. The temperature range of the experiment was from 1400 to 1600 K, and in all cases a medium calorific value syngas was produced with lower heating values up to 10.9 MJ/Nm(3), low levels of tar, high levels of CO and H2, and a composition in good agreement with the equilibrium composition. The carbon conversion efficiency ranged from 80% to 100%, and maximum cold gas and mechanical gasification efficiencies of 56% and 95%, respectively, were registered. Overall, the treatment of RDF proved to be less performant than that of biomass in the same system. Compared to a two-stage plasma gasification system, the syngas produced by the single-stage reactor showed more favourable characteristics, while the recovery of the solid residue as a vitrified slag is an advantage of the two-stage set-up. Copyright © 2015 Elsevier Ltd. All rights reserved.
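The cold gas efficiency this record reports is a simple energy ratio. In the sketch below, the 10.9 MJ/Nm³ syngas heating value comes from the abstract; the feed rate, feed heating value, syngas flow, and plasma power are assumed figures, and the bookkeeping is a simplified stand-in for the paper's exact definition.

```python
def cold_gas_efficiency(syngas_lhv, syngas_flow, feed_lhv, feed_rate,
                        plasma_power):
    """Cold gas efficiency: chemical energy leaving in the syngas
    over chemical energy in the feed plus plasma torch input."""
    return (syngas_lhv * syngas_flow) / (feed_lhv * feed_rate + plasma_power)

# Units: MJ/Nm^3, Nm^3/h, MJ/kg, kg/h, MJ/h. Only the 10.9 MJ/Nm^3
# syngas LHV is from the abstract; the rest are assumptions.
cge = cold_gas_efficiency(10.9, 25.0, 20.0, 20.0, 100.0)
```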

  15. Enhancement of ultrasonic disintegration of sewage sludge by aeration.

    PubMed

    Zhao, He; Zhang, Panyue; Zhang, Guangming; Cheng, Rong

    2016-04-01

Sonication is an effective method for sludge disintegration that can significantly improve the efficiency of anaerobic digestion for sludge reduction and reuse, but high energy consumption limits its wide application. In order to improve ultrasonic sludge disintegration efficiency and reduce energy consumption, aeration was introduced. Results showed that sludge disintegration efficiency was improved significantly by combining aeration with ultrasound. The aeration flow rate, gas bubble size, ultrasonic density and aeration timing all affected sludge disintegration efficiency. Aeration applied in the later stage of ultrasonic irradiation, with a low aeration flow rate and small gas bubbles, significantly improved ultrasonic sludge disintegration efficiency. At the optimal conditions of 0.4 W/mL ultrasonic irradiation density, 30 mL/min aeration flow rate, 5 min of aeration in the later stage, and small gas bubbles, ultrasonic sludge disintegration efficiency was increased by 45% and one third of the ultrasonic energy was saved. This approach will greatly benefit the application of ultrasonic sludge disintegration and strongly promote the treatment and recycling of wastewater sludge. Copyright © 2015. Published by Elsevier B.V.

  16. Flame tube parametric studies for control of fuel bound nitrogen using rich-lean two-stage combustion

    NASA Technical Reports Server (NTRS)

    Schultz, D. F.; Wolfbrandt, G.

    1980-01-01

An experimental parametric study of rich-lean two-stage combustion in a flame tube is described and approaches for minimizing the conversion of fuel-bound nitrogen to nitrogen oxides in a premixed, homogeneous combustion system are evaluated. Air at 672 K and 0.48 MPa was premixed with fuel blends of propane, toluene, and pyridine at primary equivalence ratios ranging from 0.5 to 2.0 and secondary equivalence ratios of 0.5 to 0.7. Distillates of SRC-II, a coal syncrude, were also tested. The blended fuels were proportioned to vary fuel hydrogen composition from 9.0 to 18.3 weight percent and fuel nitrogen composition from zero to 1.5 weight percent. Rich-lean combustion proved effective in reducing fuel nitrogen to NOx conversion; conversion rates up to 10 times lower than those normally produced by single-stage combustion were achieved. The optimum primary equivalence ratio, where the least NOx was produced and combustion efficiency was acceptable, shifted between 1.4 and 1.7 with changes in fuel nitrogen content and fuel hydrogen content. Increasing levels of fuel nitrogen content lowered the conversion rate, but not enough to avoid higher NOx emissions as fuel nitrogen increased.

  17. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts.

    PubMed

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-27

The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. We recruited 99 conscripted soldiers with educational levels of senior high school or lower as participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Compared with two-stage positive screening, two-stage window screening increased the area under the curve and the positive predictive value, and its cost was lower by 59%. The two-stage window screening is therefore more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future.
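The window screening in this record can be read as a three-way decision rule on the CLR index. The cut-off points 49 and 66 come from the abstract, but the decision logic below is an assumed interpretation (scores inside the window go on to the full WAIS-R), not the authors' published rule.

```python
def window_screen(clr, low=49, high=66):
    """Two-stage window screening on the WCST conceptual level
    responses (CLR) index; cut-offs from the abstract, decision
    logic an assumed interpretation."""
    if clr <= low:
        return "screen positive"       # likely ID, refer directly
    if clr > high:
        return "screen negative"       # ruled out at stage one
    return "second stage: WAIS-R"      # ambiguous window: full test

decisions = [window_screen(c) for c in (40, 55, 80)]
```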

  18. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts

    PubMed Central

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-01

Objective: The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. Methods: We recruited 99 conscripted soldiers with educational levels of senior high school or lower as participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Results: Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimum single cut-off point of CLR was 66; the two cut-off points were 49 and 66. Compared with two-stage positive screening, two-stage window screening increased the area under the curve and the positive predictive value, and its cost was lower by 59%. Conclusion: The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and the possibility of the WCST replacing the WAIS-R in large-scale screenings for ID in the future. PMID:21197345

  19. Embryos aggregation improves development and imprinting gene expression in mouse parthenogenesis.

    PubMed

    Bai, Guang-Yu; Song, Si-Hang; Wang, Zhen-Dong; Shan, Zhi-Yan; Sun, Rui-Zhen; Liu, Chun-Jia; Wu, Yan-Shuang; Li, Tong; Lei, Lei

    2016-04-01

Mouse parthenogenetic embryonic stem cells (PgESCs) can be applied to study imprinting genes and are used in cell therapy. Our previous study found that stem cells established by aggregation of two parthenogenetic embryos at the 8-cell stage (named a2 PgESCs) had a higher establishment efficiency than PgESCs, and that paternally expressed imprinting genes were observably upregulated. Therefore, we proposed that increasing the number of parthenogenetic embryos in aggregation may improve the development of parthenogenetic mice and the imprinting gene expression of PgESCs. To verify this hypothesis, we aggregated four embryos together at the 4-cell stage and cultured them to the blastocyst stage (named 4aPgB). qPCR detection showed that the expression of the imprinting genes Igf2, Mest, Snrpn, Igf2r, H19, and Gtl2 in 4aPgB was more similar to that of fertilized blastocysts (fB) than in 2aPgB (derived from aggregation of two 4-cell stage parthenogenetic embryos) or PgB (single parthenogenetic blastocysts). Post-implantation development of 4aPgB extended to 11 days of gestation. The establishment efficiency of GFP-a4 PgESCs, derived from GFP-4aPgB, was 62.5%. Moreover, expression of the imprinting genes Igf2, Mest, and Snrpn was notably downregulated and approached the level in fertilized embryonic stem cells (fESCs). In addition, we acquired a 13.5-day fetus derived entirely from GFP-a4 PgESCs with germline contribution by 8-cell under-zona pellucida (ZP) injection. In conclusion, four-embryo aggregation improves parthenogenetic development and compensates imprinting gene expression in PgESCs. This implies that a4 PgESCs could serve as a better scientific model in translational medicine and imprinting gene studies. © 2016 Japanese Society of Developmental Biologists.

  20. Tackling regional health inequalities in France by resource allocation: a case for complementary instrumental and process-based approaches?

    PubMed

    Bellanger, Martine M; Jourdain, Alain

    2004-01-01

This article aims to evaluate the results of two different approaches underlying the attempts to reduce health inequalities in France. In the 'instrumental' approach, resource allocation is based on an indicator to assess the well-being or the quality of life associated with healthcare provision, the argument being that additional resources would respond to needs that could then be treated quickly and efficiently. This governs the distribution of regional hospital budgets. In the second approach, health professionals and users in a given region are involved in a consensus process to define those priorities to be included in programme formulation. This 'procedural' approach is employed in the case of the regional health programmes. In this second approach, the evaluation of the results runs parallel with an analysis of the process using Rawlsian principles, whereas the first approach is based on the classical economic model. At this stage, a pragmatic analysis based on both the comparison of regional hospital budgets during the period 1992-2003 (calculated using a 'RAWP [resource allocation working party]-like' formula) and the evolution of regional health policies through the evaluation of programmes for the prevention of suicide, alcohol-related diseases and cancers provides a partial assessment of the impact of the two types of approaches, the second having a greater effect on the reduction of regional inequalities.

  1. Two stage to orbit design

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A preliminary design of a two-stage to orbit vehicle was conducted with the requirements to carry a 10,000 pound payload into a 300 mile low-earth orbit using an airbreathing first stage, and to take off and land unassisted on a 15,000 foot runway. The goal of the design analysis was to produce the most efficient vehicle in size and weight which could accomplish the mission requirements. Initial parametric analysis indicated that the weight of the orbiter and the transonic performance of the system were the two parameters that had the largest impact on the design. The resulting system uses a turbofan ramjet powered first stage to propel a scramjet and rocket powered orbiter to the stage point of Mach 6 to 6.5 at an altitude of 90,000 ft.

  2. Treatment of Ammonia Nitrogen Wastewater in Low Concentration by Two-Stage Ozonization

    PubMed Central

    Luo, Xianping; Yan, Qun; Wang, Chunying; Luo, Caigui; Zhou, Nana; Jian, Chensheng

    2015-01-01

Ammonia nitrogen wastewater (about 100 mg/L) was treated by a two-stage ozone oxidation method. The effects of ozone flow rate and initial pH on ammonia removal were studied, and the mechanism of ammonia nitrogen removal by ozone oxidation was discussed. After the primary stage of ozone oxidation, the ammonia removal efficiency reached 59.32% and the pH decreased to 6.63 under conditions of 1 L/min ozone flow rate and initial pH 11. After the second stage, the removal efficiency exceeded 85% (the residual ammonia concentration was lower than 15 mg/L), meaning the wastewater could meet the national discharge standards of China. In addition, a mechanism of ammonia removal was proposed based on the detected oxidation products: direct ozone oxidation and ·OH oxidation; ammonia was mainly transformed into NO3−-N, less into NO2−-N, and not into N2. PMID:26404353
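The staged removal figures in this record compose multiplicatively: each stage removes a fraction of the ammonia entering it. A short sketch using the abstract's 59.32% first-stage removal and 85% overall target (the implied second-stage efficiency is derived, not reported):

```python
def overall_removal(e1, e2):
    """Overall removal across two sequential stages, each removing
    a fraction of the ammonia entering it."""
    return 1.0 - (1.0 - e1) * (1.0 - e2)

# With 59.32% removed in stage one, stage two must remove about 63%
# of the remaining ammonia to reach 85% overall.
needed_e2 = 1.0 - (1.0 - 0.85) / (1.0 - 0.5932)
total = overall_removal(0.5932, needed_e2)
```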

  3. Effects of reset stators and a rotating, grooved stator hub on performance of a 1.92-pressure-ratio compressor stage

    NASA Technical Reports Server (NTRS)

    Lewis, G. W., Jr.; Urasek, D. C.; Reid, L.

    1977-01-01

    The overall performance and blade-element performance of a transonic fan stage are presented for two modified test configurations and are compared with the unmodified stage. Tests were conducted with reset stators 2 deg open and reset stators with a rotating grooved stator hub. Detailed radial and circumferential (behind stator) surveys of the flow conditions were made over the stable operating range at rotative speeds of 70, 90, and 100 percent of design speed. Reset stator blade tests indicated a small increase in stage efficiency, pressure ratio, and maximum weight flow at each speed. Performance with reset stators and a rotating, grooved stator hub resulted in an additional increase in stage efficiency and pressure ratio at all speeds. The rotating grooved stator hub reduced hub losses considerably.

  4. Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.

    PubMed

    Li, Xianwei; Gao, Huijun

    2015-10-01

    Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.

  5. Development of a 4 K Separate Two-Stage Pulse Tube Refrigerator with High Efficiency

    NASA Astrophysics Data System (ADS)

    Qiu, L. M.; He, Y. L.; Gan, Z. H.; Chen, G. B.

    2006-04-01

Compared to traditional 4 K cryocoolers, the separate 4 K pulse tube refrigerator (PTR) consists of two independent PTRs, which are thermally connected between the cold end of the first stage and a middle position of the second-stage regenerator. It is possible to use a different frequency, valve timing, phase shifter, and even compressor for each stage for better cooling performance. A 4 K separate two-stage PTR was designed and manufactured. The first stage was separately optimized. A minimum temperature of 12.6 K and a cooling capacity of 59.0 W at 40 K were achieved for the first stage by adding some Er3Ni at the cold part of the regenerator. An experimental investigation of valve-timing effects on the cooling performance of the 4 K separate two-stage PTR is reported. The experiments show that optimization of the valve timing can considerably improve the cooling performance of the PTR. Cooling capacities of 0.59 W at 4.2 K and 15.4 W at 37.0 K were achieved with an actual input power of 6.6 kW. The effect of frequency on the performance of the separate two-stage PTR is also presented.
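The reported performance can be put on a Carnot scale with a short calculation. The 0.59 W at 4.2 K and 6.6 kW input are from the abstract; the 300 K heat-rejection temperature is an assumed ambient value.

```python
def percent_of_carnot(q_cold, t_cold, t_warm, input_power):
    """Cooling performance as a percentage of the Carnot limit:
    (actual COP) / (Carnot COP) * 100."""
    carnot_cop = t_cold / (t_warm - t_cold)
    actual_cop = q_cold / input_power
    return 100.0 * actual_cop / carnot_cop

# 0.59 W at 4.2 K with 6.6 kW input (abstract figures), 300 K
# rejection temperature assumed: roughly 0.6% of Carnot.
pct = percent_of_carnot(0.59, 4.2, 300.0, 6600.0)
```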

  6. Kilauea volcano: the degassing of a hot spot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerlach, T.M.

    1986-03-01

Hot spots such as Kilauea volcano can degas by a one-stage eruptive process or a two-stage process involving eruptive and noneruptive degassing. One-stage degassing occurs during sustained summit eruptions and causes a direct environmental impact. Although generally less efficient than the one-stage degassing process, two-stage degassing can cause 1 to 2 orders of magnitude greater impact in just a few hours during flank eruptions. Hot spot volcanoes with resupplied crustal magma chambers may be capable of maintaining an equivalent impact from CO2 and S outgassing during both eruptive and noneruptive periods. On average, a hot spot volcano such as Kilauea is a minor polluter compared to man.

  7. A two-stage rule-constrained seedless region growing approach for mandibular body segmentation in MRI.

    PubMed

    Ji, Dong Xu; Foong, Kelvin Weng Chiong; Ong, Sim Heng

    2013-09-01

Extraction of the mandible from 3D volumetric images is frequently required for surgical planning and evaluation. Segmentation from MRI is more complex than from CT due to the lower bony signal-to-noise ratio. An automated method to extract the human mandible body shape from magnetic resonance (MR) images of the head was developed and tested. Anonymized MR image data sets of the head from 12 subjects were subjected to a two-stage rule-constrained region growing approach to derive the shape of the body of the human mandible. An initial thresholding technique was applied, followed by a 3D seedless region growing algorithm to detect a large portion of the trabecular bone (TB) regions of the mandible. This stage is followed by a rule-constrained 2D segmentation of each MR axial slice to merge the remaining portions of the TB regions with lower intensity levels. The two-stage approach was replicated to detect the cortical bone (CB) regions of the mandibular body. The TB and CB regions detected in the preceding steps were merged and subjected to a series of morphological processes to complete the definition of the mandibular body region. The accuracy of segmentation of the two-stage approach, the conventional region growing (CRG) method, the 3D level set method, and manual segmentation were compared using the Jaccard index, Dice index, and mean surface distance (MSD). The mean accuracy of the proposed method is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The mean accuracy of CRG is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The mean accuracy of the 3D level set method is [Formula: see text] for the Jaccard index, [Formula: see text] for the Dice index, and [Formula: see text] mm for MSD. The proposed method shows improvement in accuracy over CRG and the 3D level set method.
Accurate segmentation of the body of the human mandible from MR images is achieved with the proposed two-stage rule-constrained seedless region growing approach. The accuracy achieved with the two-stage approach is higher than CRG and 3D level set.
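The Jaccard and Dice indices used in this record have straightforward definitions over binary masks; a minimal sketch with a toy pair of segmentations (the coordinates are illustrative):

```python
def jaccard_dice(a, b):
    """Jaccard and Dice overlap indices for two segmentations given
    as sets of voxel coordinates."""
    inter = len(a & b)
    jaccard = inter / len(a | b)
    dice = 2.0 * inter / (len(a) + len(b))
    return jaccard, dice

segmented = {(0, 0), (0, 1), (1, 0), (1, 1)}
reference = {(0, 1), (1, 0), (1, 1), (2, 1)}
j, d = jaccard_dice(segmented, reference)
```

Note that Dice is always at least as large as Jaccard for the same pair of masks (d = 2j / (1 + j)).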

  8. Cost inefficiency in Washington hospitals: a stochastic frontier approach using panel data.

    PubMed

    Li, T; Rosenman, R

    2001-06-01

    We analyze a sample of Washington State hospitals with a stochastic frontier panel data model, specifying the cost function as a generalized Leontief function which, according to a Hausman test, performs better in this case than the translog form. A one-stage FGLS estimation procedure which directly models the inefficiency effects improves the efficiency of our estimates. We find that hospitals with higher casemix indices or more beds are less efficient while for-profit hospitals and those with higher proportion of Medicare patient days are more efficient. Relative to the most efficient hospital, the average hospital is only about 67% efficient.

  9. Effective surveillance strategies following a potential classical Swine Fever incursion in a remote wild pig population in North-Western Australia.

    PubMed

    Leslie, E; Cowled, B; Graeme Garner, M; Toribio, J-A L M L; Ward, M P

    2014-10-01

    Early disease detection and efficient methods of proving disease freedom can substantially improve the response to incursions of important transboundary animal diseases in previously free regions. We used a spatially explicit, stochastic disease spread model to simulate the spread of classical swine fever in wild pigs in a remote region of northern Australia and to assess the performance of disease surveillance strategies to detect infection at different time points and to delineate the size of the resulting outbreak. Although disease would likely be detected, simple random sampling was suboptimal. Radial and leapfrog sampling improved the effectiveness of surveillance at various stages of the simulated disease incursion. This work indicates that at earlier stages, radial sampling can reduce epidemic length and achieve faster outbreak delineation and control, but at later stages leapfrog sampling will outperform radial sampling in relation to supporting faster disease control with a less-extensive outbreak area. Due to the complexity of wildlife population dynamics and group behaviour, a targeted approach to surveillance needs to be implemented for the efficient use of resources and time. Using a more situation-based surveillance approach and accounting for disease distribution and the time period over which an epidemic has occurred is the best way to approach the selection of an appropriate surveillance strategy. © 2013 Blackwell Verlag GmbH.

  10. Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.

    PubMed

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-05-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on the manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary drawn from image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, the initial result is improved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to that in the first, but is able to effectively highlight the salient objects uniformly against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and the graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms state-of-the-art methods in terms of precision, recall, and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
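The two-stage reconstruction-error idea can be sketched numerically. The sketch below is a simplification, not the paper's method: ridge-regularized least squares on plain feature vectors stands in for kernelized sparse coding of covariance matrices (the Log-Euclidean kernel machinery is omitted), and the data and the 0.5 selection threshold are illustrative. It only shows the background-then-foreground dictionary structure.

```python
import numpy as np

def reconstruction_saliency(features, dictionary, lam=0.1):
    """Saliency of each region as its reconstruction error on a dictionary.
    Ridge-regularized least squares stands in for kernelized sparse coding.
    features:   (n_regions, d) one descriptor per superpixel
    dictionary: (n_atoms, d) descriptors of the dictionary regions"""
    D = dictionary.T                                   # (d, n_atoms)
    G = D.T @ D + lam * np.eye(D.shape[1])
    codes = np.linalg.solve(G, D.T @ features.T)       # (n_atoms, n_regions)
    err = np.linalg.norm(features - (D @ codes).T, axis=1)
    return (err - err.min()) / (np.ptp(err) + 1e-12)   # min-max normalize

rng = np.random.default_rng(0)
border = rng.normal(0, 1, (20, 8))                 # border (background) regions
regions = np.vstack([border[:5] + 0.01,            # background-like regions
                     rng.normal(5, 1, (5, 8))])    # outliers = salient object

# Stage 1: high reconstruction error on the background dictionary = salient.
s1 = reconstruction_saliency(regions, border)
# Stage 2: build a foreground dictionary from stage-1 salient regions; now a
# LOW error on it (i.e. resembling the foreground) marks a region salient.
fg_dict = regions[s1 > 0.5]
s2 = 1.0 - reconstruction_saliency(regions, fg_dict)
```

With this toy data, the five outlier regions score high in both stages, while the border-like regions are suppressed.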

  11. Structural damage continuous monitoring by using a data driven approach based on principal component analysis and cross-correlation analysis

    NASA Astrophysics Data System (ADS)

    Camacho-Navarro, Jhonatan; Ruiz, Magda; Villamizar, Rodolfo; Mujica, Luis; Moreno-Beltrán, Gustavo; Quiroga, Jabid

    2017-05-01

    Continuous monitoring for damage detection in structural assessment calls for low-cost equipment and efficient algorithms. This work describes the stages involved in the design of a methodology with high feasibility for continuous damage assessment. Specifically, an algorithm based on a data-driven approach is discussed, which applies principal component analysis to acquired signals pre-processed by means of cross-correlation functions. A carbon steel pipe section and a laboratory tower were used as test structures to demonstrate the feasibility of the methodology to detect abrupt changes in the structural response when damage occurs. Two damage cases are studied: a crack and a leak, one for each structure. Experimental results show that the methodology is promising for the continuous monitoring of real structures.
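A minimal sketch of such a pipeline, under the assumption that cross-correlation features feed a healthy-state PCA model and that the Q-statistic (squared prediction error) flags departures from it. The signals, frequencies, and the 99th-percentile threshold are illustrative choices, not the authors' settings.

```python
import numpy as np

def crosscorr_features(signals, ref):
    """Pre-process raw responses: normalized full cross-correlation
    of each signal with a fixed reference excitation."""
    feats = []
    for s in signals:
        c = np.correlate(s - s.mean(), ref - ref.mean(), mode="full")
        feats.append(c / (np.linalg.norm(s) * np.linalg.norm(ref) + 1e-12))
    return np.asarray(feats)

def fit_pca(X, n_comp=2):
    """Baseline (healthy-state) PCA model: mean and leading components."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_comp]

def q_statistic(X, mu, P):
    """Squared prediction error: energy outside the baseline subspace."""
    Xc = X - mu
    resid = Xc - Xc @ P.T @ P
    return np.sum(resid**2, axis=1)

# Illustrative data: healthy responses are a 12 Hz mode with small phase
# jitter; "damage" shifts the mode to 9 Hz (an abrupt structural change).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
ref = np.sin(2 * np.pi * 12 * t)
healthy = [np.sin(2 * np.pi * 12 * t + rng.normal(0, 0.05))
           + rng.normal(0, 0.02, t.size) for _ in range(30)]
damaged = [np.sin(2 * np.pi * 9 * t) + rng.normal(0, 0.02, t.size)
           for _ in range(5)]

Xh = crosscorr_features(healthy, ref)
mu, P = fit_pca(Xh)
threshold = np.percentile(q_statistic(Xh, mu, P), 99)   # baseline limit
alarms = q_statistic(crosscorr_features(damaged, ref), mu, P) > threshold
```

The frequency shift pushes the damaged features outside the baseline PCA subspace, so their Q-statistics exceed the healthy threshold.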

  12. Experimental Results of the First Two Stages of an Advanced Transonic Core Compressor Under Isolated and Multi-Stage Conditions

    NASA Technical Reports Server (NTRS)

    Prahst, Patricia S.; Kulkarni, Sameer; Sohn, Ki H.

    2015-01-01

    NASA's Environmentally Responsible Aviation (ERA) Program calls for investigation of the technology barriers associated with improved fuel efficiency of large gas turbine engines. Under ERA, the task for a High Pressure Ratio Core Technology program calls for a higher overall pressure ratio of 60 to 70. This means that the HPC would have to almost double in pressure ratio while keeping its high level of efficiency. The challenge is how to match the corrected mass flow rate of the front two supersonic, high-reaction, high-corrected-tip-speed stages with a total pressure ratio of 3.5. NASA and GE teamed to address this challenge by using the initial geometry of an advanced GE compressor design to meet the requirements of the first two stages of the very high pressure ratio core compressor. The rig was configured to run as a two-stage machine: the strut and IGV, Rotor 1, and Stator 1 were first run as independent tests, after which the second stage was added. The goal is to fully understand the stage performances under isolated and multi-stage conditions, to explain any differences, and to provide a detailed aerodynamic data set for CFD validation. Full use was made of steady and unsteady measurement methods to isolate fluid dynamic loss source mechanisms due to interaction and endwalls. The paper presents the description of the compressor test article, its predicted performance and operability, and the experimental results for both the single-stage and two-stage configurations. Detailed measurements focus on 97% and 100% of design speed at three vane setting angles.

  13. The evaluation of clinical and cost outcomes associated with earlier initiation of insulin in patients with type 2 diabetes mellitus.

    PubMed

    Smolen, Harry J; Murphy, Daniel R; Gahn, James C; Yu, Xueting; Curtis, Bradley H

    2014-09-01

    The treatment for patients with type 2 diabetes mellitus (T2DM) follows a stepwise progression. As a treatment loses its effectiveness, it is typically replaced with a more complex and frequently more costly treatment. Eventually, this progression leads to the use of basal insulin, typically with concomitant treatments (e.g., metformin, a GLP-1 RA [glucagon-like peptide-1 receptor agonist], a TZD [thiazolidinedione] or a DPP-4i [dipeptidyl peptidase 4 inhibitor]) and, ultimately, to basal-bolus insulin in some form. As the costs of oral antidiabetics (OADs) and noninsulin injectables have approached, and in some cases exceeded, the cost of insulin, we reexamined the placement of insulin in T2DM treatment progression. Our hypothesis was that earlier use of insulin produces clinical and cost benefits due to its superior efficacy and treatment scalability at an acceptable cost when considered over a 5-year period. The objectives were to (a) estimate clinical and payer cost outcomes of initiating insulin treatment for patients with T2DM earlier in their treatment progression and (b) estimate clinical and payer cost outcomes resulting from delays in escalating treatment for T2DM when indicated by patient hemoglobin A1c levels. We developed a Monte Carlo microsimulation model to estimate patients reaching target A1c, diabetes-related complications, mortality, and associated costs under various treatment strategies for newly diagnosed patients with T2DM. Treatment efficacies were modeled from results of randomized clinical trials, including the time and rate of A1c drift. A typical treatment progression was selected based on the American Diabetes Association and the European Association for the Study of Diabetes guidelines as the standard of care (SOC). Two treatment approaches were evaluated: two-stage insulin (basal plus antidiabetics followed by biphasic plus metformin) and single-stage insulin (biphasic plus metformin). For each approach, we analyzed multiple strategies. 
For each analysis, treatment steps were sequentially and cumulatively removed from the SOC until only the insulin steps remained. Delays in escalating treatment were evaluated by increasing the minimum time on a treatment within each strategy. The analysis time frame was 5 years. Relative to SOC, the two-stage insulin approach resulted in 0.10% to 1.79% more patients achieving target A1c (<7.0%), at incremental costs of $95 to $3,267. (The ranges are due to the different strategies within the approach.) With the single-stage approach, 0.50% to 2.63% more patients achieved the target A1c compared with SOC at an incremental cost of -$1,642 to $1,177. Major diabetes-related complications were reduced by 0.38% to 17.46% using the two-stage approach and 0.72% to 25.92% using the single-stage approach. Severe hypoglycemia increased by 17.97% to 60.43% using the two-stage approach and 6.44% to 68.87% using the single-stage approach. In the base case scenario, the minimum time on a specific treatment was 3 months. When the minimum time on each treatment was increased to 12 months (i.e., delayed), patients reaching A1c targets were reduced by 57%, complications increased by 13% to 76%, and mortality increased by 8% over 5 years when compared with the base case for the SOC. However, severe hypoglycemic events were reduced by 83%. As insulin was advanced earlier in therapy in the two-stage and single-stage approaches, patients reaching their A1c targets increased, severe hypoglycemic events increased, and diabetes-related complications and mortality decreased. Cost savings were estimated for 3 (of 4) strategies in the single-stage approach. Delays in treatment escalation substantially reduced patients reaching target A1c levels and increased the occurrence of major nonhypoglycemic diabetic complications. With the exception of substantial increases in severe hypoglycemic events, earlier use of insulin mitigates the clinical consequences of these delays.
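The escalation logic can be illustrated with a toy Monte Carlo microsimulation. This is a deliberately crude sketch: the drift rate, step efficacies, A1c floor, and population parameters below are invented, and unlike the paper's model it tracks no complications, mortality, hypoglycemia, or costs. It only reproduces the escalate-when-above-target rule and the effect of a longer minimum time on treatment.

```python
import numpy as np

def simulate(reductions, n_patients=2000, months=60, min_months=3, seed=3):
    """Toy Monte Carlo microsimulation of A1c control.  `reductions` lists
    the cumulative A1c drop of successive treatment steps; patients escalate
    to the next step once A1c > 7.0% and the minimum time on the current
    treatment has elapsed.  All dynamics and effect sizes are invented."""
    strat = np.asarray(reductions, dtype=float)
    rng = np.random.default_rng(seed)
    a1c = rng.normal(8.5, 1.0, n_patients) - strat[0]  # all start on step 0
    step = np.zeros(n_patients, dtype=int)
    on_tx = np.zeros(n_patients, dtype=int)            # months on current step
    for _ in range(months):
        a1c += 0.02                                    # slow upward A1c drift
        on_tx += 1
        esc = (a1c > 7.0) & (on_tx >= min_months) & (step + 1 < strat.size)
        a1c[esc] -= strat[step[esc] + 1] - strat[step[esc]]
        step[esc] += 1
        on_tx[esc] = 0
        a1c = np.maximum(a1c, 5.0)                     # crude lower bound
    return float((a1c < 7.0).mean())                   # share at target A1c

soc = simulate([1.0, 1.5, 2.0, 2.8])                   # SOC-like progression
early_insulin = simulate([1.0, 2.8])                   # insulin moved earlier
delayed = simulate([1.0, 1.5, 2.0, 2.8], min_months=12)  # delayed escalation
```

Even in this stripped-down form, moving the largest-effect step earlier raises the share of patients at target, and lengthening the minimum time on each treatment lowers it, mirroring the qualitative direction of the abstract's findings.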

  14. Simultaneous above and below approach to giant pituitary adenomas: surgical strategies and long-term follow-up

    PubMed Central

    D’Ambrosio, Anthony L.; Grobelny, Bartosz T.; Freda, Pamela U.; Wardlaw, Sharon; Bruce, Jeffrey N.

    2012-01-01

    Introduction Giant pituitary adenomas of excessive size, fibrous consistency or unfavorable geometric configuration may be unresectable through conventional operative approaches. We present our select case series for operative resection and long-term follow-up for these unusual tumors, employing both a staged procedure and a combined transsphenoidal-transcranial above and below approach. Method A retrospective chart review was performed on patients operated on via the staged and combined approaches by the senior author (J.N.B.). Pre-operative characteristics and postoperative outcomes were reviewed. A detailed description of the operative technique and perioperative management is provided. Results Between 1993 and 1996, two patients harboring giant pituitary adenomas underwent an intentionally staged resection, and between 1997 and 2006, nine patients harboring giant pituitary adenomas underwent surgery via a single-stage above and below approach. Nine patients (82%) presented with non-secreting adenomas and two patients (18%) presented with prolactinomas refractory to medical management. Gross total resection was achieved in six patients (55%), near-total resection in one (9%), and subtotal removal in four (36%). Seven patients (64%) experienced visual improvement postoperatively and no major complications occurred. Long-term follow-up averaged 51.6 months. Panhypopituitarism was observed in four patients, partial hypopituitarism in four, persistent DI in two, and persistent SIADH in one. Conclusions The addition of a transcranial component to the transsphenoidal approach offers additional visualization of critical neurovascular structures during giant pituitary adenoma resection. Complication rates are similar to other series in which complex pituitary adenomas are resected by other means. 
The above and below approach is both safe and effective, and the immediate and long-term advantages of a single-stage approach justify its utility in this select group of patients. PMID:19242807

  15. Computational Fluid Dynamics (CFD) Analysis for the Reduction of Impeller Discharge Flow Distortion

    NASA Technical Reports Server (NTRS)

    Garcia, R.; McConnaughey, P. K.; Eastland, A.

    1993-01-01

    The use of Computational Fluid Dynamics (CFD) in the design and analysis of high performance rocket engine pumps has increased in recent years. This increase has been aided by the activities of the Marshall Space Flight Center (MSFC) Pump Stage Technology Team (PSTT). The team's goals include assessing the accuracy and efficiency of several methodologies and then applying the appropriate methodology(s) to understand and improve the flow inside a pump. The PSTT's objectives, team membership, and past activities are discussed in Garcia1 and Garcia2. The PSTT is one of three teams that form the NASA/MSFC CFD Consortium for Applications in Propulsion Technology (McConnaughey3). The PSTT first applied CFD in the design of the baseline consortium impeller. This impeller was designed for the Space Transportation Main Engine's (STME) fuel turbopump. The STME fuel pump was designed with three impeller stages because a two-stage design was deemed to pose a high developmental risk. The PSTT used CFD to design an impeller whose performance allowed for a two-stage STME fuel pump design. The availability of this design would have led to a reduction in parts, weight, and cost had the STME reached production. One sample of the baseline consortium impeller was manufactured and tested in a water rig. The test data showed that the impeller performance was as predicted and that a two-stage design for the STME fuel pump was possible with minimal risk. The test data also verified another CFD-predicted characteristic of the design that was not desirable. The classical 'jet-wake' pattern at the impeller discharge was strengthened by two aspects of the design: by the high head coefficient necessary for the required pressure rise and by the relatively few impeller exit blades, 12, necessary to reduce manufacturing cost. This 'jet-wake' pattern produces an unsteady loading on the diffuser vanes and has, in past rocket engine programs, led to diffuser structural failure. 
In industrial applications, this problem is typically avoided by increasing the space between the impeller and the diffuser to allow the dissipation of this pattern and, hence, the reduction of diffuser vane unsteady loading. This approach leads to small performance losses and, more importantly in rocket engine applications, to significant increases in the pump's size and weight. This latter consideration typically makes this approach unacceptable in high performance rocket engines.

  16. Identifying key performance indicators for nursing and midwifery care using a consensus approach.

    PubMed

    McCance, Tanya; Telford, Lorna; Wilson, Julie; Macleod, Olive; Dowd, Audrey

    2012-04-01

    The aim of this study was to gain consensus on key performance indicators that are appropriate and relevant for nursing and midwifery practice in the current policy context. There is continuing demand to demonstrate effectiveness and efficiency in health and social care and to communicate this at boardroom level. Whilst there is substantial literature on the use of clinical indicators and nursing metrics, there is less evidence relating to indicators that reflect the patient experience. A consensus approach was used to identify relevant key performance indicators. A nominal group technique was used comprising two stages: a workshop involving all grades of nursing and midwifery staff in two HSC trusts in Northern Ireland (n = 50); followed by a regional Consensus Conference (n = 80). During the workshop, potential key performance indicators were identified. This was used as the basis for the Consensus Conference, which involved two rounds of consensus. Analysis was based on aggregated scores that were then ranked. Stage one identified 38 potential indicators and stage two prioritised the eight top-ranked indicators as a core set for nursing and midwifery. The relevance and appropriateness of these indicators were confirmed with nurses and midwives working in a range of settings and from the perspective of service users. The eight indicators identified do not conform to the majority of other nursing metrics generally reported in the literature. Furthermore, they are strategically aligned to work on the patient experience and are reflective of the fundamentals of nursing and midwifery practice, with the focus on person-centred care. Nurses and midwives have a significant contribution to make in determining the extent to which these indicators are achieved in practice. Furthermore, measurement of such indicators provides an opportunity to evidence the unique impact of nursing/midwifery care on the patient experience. © 2011 Blackwell Publishing Ltd.

  17. Rules and mechanisms for efficient two-stage learning in neural circuits.

    PubMed

    Teşileanu, Tiberiu; Ölveczky, Bence; Balasubramanian, Vijay

    2017-04-04

    Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in 'tutor' circuits (e.g., LMAN) should match plasticity mechanisms in 'student' circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning.

  18. A two-stage design for multiple testing in large-scale association studies.

    PubMed

    Wen, Shu-Hui; Tzeng, Jung-Ying; Kao, Jau-Tsuen; Hsiao, Chuhsing Kate

    2006-01-01

    Modern association studies often involve a large number of markers and hence may encounter the problem of testing multiple hypotheses. Traditional procedures are usually over-conservative and with low power to detect mild genetic effects. From the design perspective, we propose a two-stage selection procedure to address this concern. Our main principle is to reduce the total number of tests by removing clearly unassociated markers in the first-stage test. Next, conditional on the findings of the first stage, which uses a less stringent nominal level, a more conservative test is conducted in the second stage using the augmented data and the data from the first stage. Previous studies have suggested using independent samples to avoid inflated errors. However, we found that, after accounting for the dependence between these two samples, the true discovery rate increases substantially. In addition, the cost of genotyping can be greatly reduced via this approach. Results from a study of hypertriglyceridemia and simulations suggest the two-stage method has a higher overall true positive rate (TPR) with a controlled overall false positive rate (FPR) when compared with single-stage approaches. We also report the analytical form of its overall FPR, which may be useful in guiding study design to achieve a high TPR while retaining the desired FPR.
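The screening-then-augmentation structure can be sketched as follows. This is a simplification, not the authors' procedure: it uses a large-sample normal approximation in place of exact tests, and it pools the two samples without the stage-1/stage-2 dependence correction that the paper derives; the effect sizes, sample sizes, and nominal levels are illustrative.

```python
import numpy as np
from math import erfc, sqrt

def z_pvalues(X):
    """Two-sided p-values for H0: column mean = 0, via a large-sample
    normal approximation (a stand-in for exact tests)."""
    n = X.shape[0]
    z = X.mean(axis=0) / (X.std(axis=0, ddof=1) / np.sqrt(n))
    return np.array([erfc(abs(v) / sqrt(2)) for v in z])

def two_stage_test(stage1, stage2, alpha1=0.1, alpha2=0.05):
    """Stage 1: screen markers at the liberal level alpha1 on stage-1 data.
    Stage 2: retest survivors on the augmented (pooled) data, Bonferroni-
    corrected for the reduced number of tests.  NOTE: this sketch ignores
    the dependence between the two stages that the paper accounts for."""
    p1 = z_pvalues(stage1)
    survivors = np.flatnonzero(p1 < alpha1)
    if survivors.size == 0:
        return survivors, np.array([], dtype=bool)
    p2 = z_pvalues(np.vstack([stage1, stage2])[:, survivors])
    return survivors, p2 < alpha2 / survivors.size

rng = np.random.default_rng(2)
n_markers = 200
effect = np.zeros(n_markers)
effect[:5] = 0.8                          # five truly associated markers
s1_data = rng.normal(effect, 1, (50, n_markers))    # first-stage sample
s2_data = rng.normal(effect, 1, (100, n_markers))   # augmentation sample
survivors, hits = two_stage_test(s1_data, s2_data)
detected = survivors[hits]                # markers declared associated
```

Because stage 2 corrects only for the markers that survive the liberal first-stage screen, the Bonferroni penalty is paid over roughly 25 tests instead of 200, which is where the power gain comes from.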

  19. Landmark Estimation of Survival and Treatment Effect in a Randomized Clinical Trial

    PubMed Central

    Parast, Layla; Tian, Lu; Cai, Tianxi

    2013-01-01

    Summary In many studies with a survival outcome, it is often not feasible to fully observe the primary event of interest. This often leads to heavy censoring and thus, difficulty in efficiently estimating survival or comparing survival rates between two groups. In certain diseases, baseline covariates and the event time of non-fatal intermediate events may be associated with overall survival. In these settings, incorporating such additional information may lead to gains in efficiency in estimation of survival and testing for a difference in survival between two treatment groups. If gains in efficiency can be achieved, it may then be possible to decrease the sample size of patients required for a study to achieve a particular power level or decrease the duration of the study. Most existing methods for incorporating intermediate events and covariates to predict survival focus on estimation of relative risk parameters and/or the joint distribution of events under semiparametric models. However, in practice, these model assumptions may not hold and hence may lead to biased estimates of the marginal survival. In this paper, we propose a semi-nonparametric two-stage procedure to estimate and compare t-year survival rates by incorporating intermediate event information observed before some landmark time, which serves as a useful approach to overcome semi-competing risks issues. In a randomized clinical trial setting, we further improve efficiency through an additional calibration step. Simulation studies demonstrate substantial potential gains in efficiency in terms of estimation and power. We illustrate our proposed procedures using an AIDS Clinical Trial Protocol 175 dataset by estimating survival and examining the difference in survival between two treatment groups: zidovudine and zidovudine plus zalcitabine. PMID:24659838

  20. Assignment of the Stereochemistry and Anomeric Configuration of Sugars within Oligosaccharides Via Overlapping Disaccharide Ladders Using MSn

    NASA Astrophysics Data System (ADS)

    Konda, Chiharu; Londry, Frank A.; Bendiak, Brad; Xia, Yu

    2014-08-01

    A systematic approach is described that can pinpoint the stereo-structures (sugar identity, anomeric configuration, and location) of individual sugar units within linear oligosaccharides. Using a highly modified mass spectrometer, dissociation of linear oligosaccharides in the gas phase was optimized along multiple-stage tandem dissociation pathways (MSn, n = 4 or 5). The instrument was a hybrid triple quadrupole/linear ion trap mass spectrometer capable of high-efficiency bidirectional ion transfer between quadrupole arrays. Different types of collision-induced dissociation (CID), either on-resonance ion trap or beam-type CID, could be utilized at any given stage of dissociation, enabling either glycosidic bond cleavages or cross-ring cleavages to be maximized when wanted. The approach first involves optimizing the isolation of disaccharide units as an ordered set of overlapping substructures via glycosidic bond cleavages during early stages of MSn, with explicit intent to minimize cross-ring cleavages. Subsequently, cross-ring cleavages were optimized for individual disaccharides to yield key diagnostic product ions (m/z 221). Finally, fingerprint patterns that establish stereochemistry and anomeric configuration were obtained from the diagnostic ions via CID. Model linear oligosaccharides were derivatized at the reducing end, allowing overlapping ladders of disaccharides to be isolated from MSn. High confidence stereo-structural determination was achieved by matching MSn CID of the diagnostic ions to synthetic standards via a spectral matching algorithm. Using this MSn (n = 4 or 5) approach, the stereo-structures, anomeric configurations, and locations of three individual sugar units within two pentasaccharides were successfully determined.

  1. Model Based Optimization of Integrated Low Voltage DC-DC Converter for Energy Harvesting Applications

    NASA Astrophysics Data System (ADS)

    Jayaweera, H. M. P. C.; Muhtaroğlu, Ali

    2016-11-01

    A novel model-based methodology is presented to determine optimal device parameters for a fully integrated ultra-low-voltage DC-DC converter for energy harvesting applications. The proposed model determines the most efficient number of charge pump stages that fulfills the voltage requirement of the energy harvesting application. The DC-DC converter power consumption model enables the analytical derivation of the charge pump efficiency when used together with the known LC tank oscillator behavior under resonant conditions and the voltage step-up characteristics of the cross-coupled charge pump topology. The model has been verified using a circuit simulator. The system optimized through the established model achieves more than 40% maximum efficiency, yielding a 0.45 V output with a single stage, 0.75 V with two stages, and 0.9 V with three stages for 2.5 kΩ, 3.5 kΩ and 5 kΩ loads, respectively, from a 0.2 V input.
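The stage-count selection can be illustrated with a first-order textbook model of a cross-coupled charge pump, not the paper's full power-consumption model: each stage adds roughly one clock swing to the output, minus the droop the load current causes on the stage capacitor. The 0.2 V input matches the abstract; the clock swing, load current, switching frequency, and capacitance below are illustrative guesses chosen so the 0.45/0.75/0.9 V targets map to one, two, and three stages.

```python
def charge_pump_vout(v_in, v_clk, n_stages, i_load, f_sw, c_stage):
    """First-order output estimate for an n-stage cross-coupled charge pump:
    the input plus, per stage, the clock swing minus the per-cycle droop
    i_load / (f_sw * c_stage) on each stage capacitor."""
    return v_in + n_stages * (v_clk - i_load / (f_sw * c_stage))

def min_stages(v_req, v_in, v_clk, i_load, f_sw, c_stage, n_max=10):
    """Smallest stage count whose predicted output meets the requirement."""
    for n in range(1, n_max + 1):
        if charge_pump_vout(v_in, v_clk, n, i_load, f_sw, c_stage) >= v_req:
            return n
    return None   # requirement unreachable within n_max stages

# Hypothetical operating point: 0.2 V input, 0.3 V resonant clock swing,
# 100 uA load, 50 MHz switching, 100 pF stage capacitors.
params = dict(v_in=0.2, v_clk=0.3, i_load=100e-6, f_sw=50e6, c_stage=100e-12)
stages = [min_stages(v, **params) for v in (0.45, 0.75, 0.9)]
```

Under these assumed parameters each stage contributes 0.28 V net, so the three voltage requirements are met with one, two, and three stages, mirroring the staging pattern reported in the abstract.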

  2. Influence Function Learning in Information Diffusion Networks

    PubMed Central

    Du, Nan; Liang, Yingyu; Balcan, Maria-Florina; Song, Le

    2015-01-01

    Can we learn the influence of a set of people in a social network from cascades of information diffusion? This question is often addressed by a two-stage approach: first learn a diffusion model, and then calculate the influence based on the learned model. Thus, the success of this approach relies heavily on the correctness of the diffusion model which is hard to verify for real world data. In this paper, we exploit the insight that the influence functions in many diffusion models are coverage functions, and propose a novel parameterization of such functions using a convex combination of random basis functions. Moreover, we propose an efficient maximum likelihood based algorithm to learn such functions directly from cascade data, and hence bypass the need to specify a particular diffusion model in advance. We provide both theoretical and empirical analysis for our approach, showing that the proposed approach can provably learn the influence function with low sample complexity, be robust to the unknown diffusion models, and significantly outperform existing approaches in both synthetic and real world data. PMID:25973445

  3. [Characteristics of dry matter production and nitrogen accumulation in barley genotypes with high nitrogen utilization efficiency].

    PubMed

    Huang, Yi; Li, Ting-Xuan; Zhang, Xi-Zhou; Ji, Lin

    2014-07-01

    A pot experiment was conducted under low (125 mg·kg(-1)) and normal (250 mg·kg(-1)) nitrogen treatments. The nitrogen uptake and utilization efficiency of 22 barley cultivars were investigated, and the characteristics of dry matter production and nitrogen accumulation in barley were analyzed. The results showed that nitrogen uptake and utilization efficiency differed among cultivars under the two nitrogen levels. The maximal values of grain yield, nitrogen utilization efficiency for grain, and nitrogen harvest index were 2.87, 2.91 and 2.47 times the lowest values, respectively, under the low nitrogen treatment. Grain yield, nitrogen utilization efficiency for grain, and nitrogen harvest index of the high nitrogen utilization efficiency genotypes were significantly greater than those of the low-efficiency genotypes, being 82.1%, 61.5% and 50.5% higher, respectively, under the low nitrogen treatment. Dry matter mass and nitrogen utilization of the high-efficiency genotypes were significantly higher than those of the low-efficiency genotypes. A peak of dry matter accumulation in the high-efficiency genotypes occurred from jointing to heading stage, while the peak of nitrogen accumulation appeared before jointing. Under the low nitrogen treatment, dry matter mass of DH61 and DH121+ was 34.4% and 38.3% higher, and nitrogen accumulation 54.8% and 58.0% higher, than that of DH80, respectively. Dry matter mass and nitrogen accumulation before the jointing stage strongly affected yield, with contribution rates of 47.9% and 54.7%, respectively, under the low nitrogen treatment. The effect of dry matter and nitrogen accumulation on nitrogen utilization efficiency for grain was largest from heading to maturity, followed by sowing to jointing, with contribution rates of 29.5% and 48.7%, and 29.0% and 15.8%, respectively. 
In conclusion, the barley genotypes with high nitrogen utilization efficiency had a strong ability for dry matter production and nitrogen accumulation. Yield and nitrogen utilization efficiency could be synergistically improved by enhancing nitrogen uptake and dry matter formation before the jointing stage in barley.

  4. Multi-Stage Open Peer Review: Scientific Evaluation Integrating the Strengths of Traditional Peer Review with the Virtues of Transparency and Self-Regulation

    PubMed Central

    Pöschl, Ulrich

    2012-01-01

    The traditional forms of scientific publishing and peer review do not live up to all demands of efficient communication and quality assurance in today’s highly diverse and rapidly evolving world of science. They need to be advanced and complemented by interactive and transparent forms of review, publication, and discussion that are open to the scientific community and to the public. The advantages of open access, public peer review, and interactive discussion can be efficiently and flexibly combined with the strengths of traditional scientific peer review. The benefits and viability of this approach have been clearly demonstrated since 2001 by the highly successful interactive open access journal Atmospheric Chemistry and Physics (ACP, www.atmos-chem-phys.net) and a growing number of sister journals launched and operated by the European Geosciences Union (EGU, www.egu.eu) and the open access publisher Copernicus (www.copernicus.org). The interactive open access journals practice an integrative multi-stage process of publication and peer review combined with interactive public discussion, which effectively resolves the dilemma between rapid scientific exchange and thorough quality assurance. Key features and achievements of this approach are: top quality and impact, efficient self-regulation and low rejection rates, high attractiveness and rapid growth, low costs, and financial sustainability. In fact, ACP and the EGU interactive open access sister journals are by most if not all standards more successful than comparable scientific journals with traditional or alternative forms of peer review (editorial statistics, publication statistics, citation statistics, economic costs, and sustainability). 
The high efficiency and predictive validity of multi-stage open peer review have been confirmed in a series of dedicated studies by evaluation experts from the social sciences, and the same or similar concepts have recently also been adopted in other disciplines, including the life sciences and economics. Multi-stage open peer review can be flexibly adjusted to the needs and peculiarities of different scientific communities. Due to the flexibility and compatibility with traditional structures of scientific publishing and peer review, the multi-stage open peer review concept enables efficient evolution in scientific communication and quality assurance. It has the potential for swift replacement of hidden peer review as the standard of scientific quality assurance, and it provides a basis for open evaluation in science. PMID:22783183

  5. Wind turbine extraction from high spatial resolution remote sensing images based on saliency detection

    NASA Astrophysics Data System (ADS)

    Chen, Jingbo; Yue, Anzhi; Wang, Chengyi; Huang, Qingqing; Chen, Jiansheng; Meng, Yu; He, Dongxu

    2018-01-01

    The wind turbine is a device that converts the wind's kinetic energy into electrical power. Accurate and automatic extraction of wind turbines is instructive for government departments planning wind power plant projects. A hybrid and practical framework based on saliency detection for wind turbine extraction, using Google Earth imagery at a spatial resolution of 1 m, is proposed. It can be viewed as a two-phase procedure: coarse detection and fine extraction. In the first stage, we introduced a frequency-tuned saliency detection approach for initially detecting the areas of interest of the wind turbines. This method exploited features of color and luminance, was simple to implement, and was computationally efficient. Taking into account the complexity of remote sensing images, in the second stage we proposed a fast method for fine-tuning results in the frequency domain and then extracted wind turbines from these salient objects by removing the irrelevant salient areas according to the special properties of the wind turbines. Experiments demonstrated that our approach consistently obtains higher precision and better recall rates. Our method was also compared with other techniques from the literature and shown to be more applicable and robust.
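The first-stage idea can be sketched with the classic frequency-tuned saliency formulation (in the spirit of Achanta et al., which the abstract's coarse-detection stage resembles): per-pixel distance between the image's mean color vector and a Gaussian-blurred copy of the image. The synthetic scene, blur sigma, and the mean-plus-two-sigma threshold below are illustrative choices, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_tuned_saliency(img):
    """Frequency-tuned saliency: distance between the global mean color
    vector and a Gaussian-blurred version of the image.
    img: (H, W, C) float array; returns an (H, W) saliency map."""
    blurred = gaussian_filter(img, sigma=(2, 2, 0))    # blur spatial axes only
    mean_vec = img.reshape(-1, img.shape[-1]).mean(axis=0)
    return np.linalg.norm(blurred - mean_vec, axis=-1)

# Synthetic scene: a uniform background with one small bright blob standing
# in for a turbine against terrain.
img = np.full((64, 64, 3), 0.2)
img[28:36, 28:36] = (0.9, 0.9, 0.8)
sal = frequency_tuned_saliency(img)
mask = sal > sal.mean() + 2 * sal.std()    # coarse detection of areas of interest
```

The blob dominates the saliency map, while the uniform background falls below the threshold; a second, finer stage would then prune salient regions that lack turbine-like properties.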

  6. On the role of environmental corruption in healthcare infrastructures: An empirical assessment for Italy using DEA with truncated regression approach.

    PubMed

    Cavalieri, Marina; Guccio, Calogero; Rizzo, Ilde

    2017-05-01

This paper investigates empirically whether the institutional features of the contracting authority, as well as the level of 'environmental' corruption in the area where the work is located, affect the efficient execution of public contracts for healthcare infrastructures. A two-stage Data Envelopment Analysis (DEA) is carried out on a sample of Italian public contracts for healthcare infrastructures during the period 2000-2005. First, a smoothed bootstrapped DEA estimator is used to assess the relative efficiency in the implementation of each single infrastructure contract. Second, the determinants of the variability in the efficiency scores are considered, paying special attention to the effect exerted by 'environmental' corruption on different types of contracting authorities. Our results show that the performance of contracts for healthcare infrastructures is significantly affected by 'environmental' corruption. Furthermore, healthcare contracting authorities are, on average, less efficient, and the negative effect of corruption on efficiency is greater for this type of public procurer. The policy recommendation coming out of the study is to rely on 'qualified' contracting authorities, since not all public bodies have the necessary expertise to carry out public contracts for healthcare infrastructures efficiently. Copyright © 2017. Published by Elsevier B.V.
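The first stage of such a two-stage analysis scores each decision-making unit with a DEA linear program. A plain input-oriented CCR model solved with `scipy.optimize.linprog` illustrates the idea (the paper applies a smoothed bootstrap on top of such scores and then a truncated regression; the two-DMU data set below is made up):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j):
    """Input-oriented CCR efficiency of DMU j.
    X: (n, m) inputs, Y: (n, s) outputs for n DMUs.
    Variables [theta, lambda_1..lambda_n]; minimise theta subject to
    sum(lambda * x) <= theta * x_j and sum(lambda * y) >= y_j."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                     # minimise theta
    # input rows:  -theta * x_j + X^T lambda <= 0
    A_in = np.hstack([-X[j].reshape(m, 1), X.T])
    # output rows: -Y^T lambda <= -y_j  (i.e. Y^T lambda >= y_j)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[j]]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# two DMUs with equal output; the second consumes twice the input
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
scores = [dea_ccr_input(X, Y, j) for j in range(2)]
```

The frontier unit scores 1.0 and the wasteful unit 0.5; the second stage would then regress such scores on corruption measures.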

  7. A High-Density, High-Efficiency, Isolated On-Board Vehicle Battery Charger Utilizing Silicon Carbide Power Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Whitaker, B; Barkley, A; Cole, Z

    2014-05-01

This paper presents an isolated on-board vehicular battery charger that utilizes silicon carbide (SiC) power devices to achieve high density and high efficiency for application in electric vehicles (EVs) and plug-in hybrid EVs (PHEVs). The proposed level 2 charger has a two-stage architecture in which the first stage is a bridgeless boost ac-dc converter and the second stage is a phase-shifted full-bridge isolated dc-dc converter. The operation of both topologies is presented and the specific advantages gained through the use of SiC power devices are discussed. The design of the power stage components, the packaging of the multichip power module, and the system-level packaging are presented, with a primary focus on system density and a secondary focus on system efficiency. In this work, a hardware prototype is developed and a peak system efficiency of 95% is measured while operating both power stages at a switching frequency of 200 kHz. A maximum output power of 6.1 kW results in a volumetric power density of 5.0 kW/L and a gravimetric power density of 3.8 kW/kg when considering the volume and mass of the system including its case.
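A quick sanity check on the reported figures (arithmetic only; the output power and densities are taken from the abstract, while the equal-per-stage efficiency split is an assumption for illustration):

```python
# implied volume and mass from the reported 6.1 kW peak output
p_out = 6.1          # kW
vol_density = 5.0    # kW/L
grav_density = 3.8   # kW/kg
volume_l = p_out / vol_density     # implied system volume, ~1.2 L
mass_kg = p_out / grav_density     # implied system mass, ~1.6 kg

# a 95% two-stage system needs each cascaded stage to average roughly
# sqrt(0.95) ~ 97.5% (assuming the two stages contribute equally)
per_stage = 0.95 ** 0.5
```

The implied ~1.2 L, ~1.6 kg package and ~97.5% per-stage requirement show why wide-bandgap SiC devices, which cut switching losses at 200 kHz, matter for this design.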

  8. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255
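The GLLA step of such two-stage pipelines can be sketched as: time-delay embed the series, then project each embedded window onto a local polynomial basis to read off derivative estimates. This is a minimal reconstruction of the general procedure (the embedding dimension, polynomial order, and test signal are illustrative choices, not the study's settings):

```python
import numpy as np
from math import factorial

def glla(x, dt, embed=5, order=2):
    """Generalized local linear approximation: estimate derivatives up to
    `order` at each window centre from a time-delay embedding of x."""
    n = len(x) - embed + 1
    # embedded data matrix: each row is one sliding window of the series
    X = np.column_stack([x[i:i + n] for i in range(embed)])
    offsets = (np.arange(embed) - (embed - 1) / 2) * dt   # centred time offsets
    # polynomial basis: column p is offsets**p / p!, so fitted coefficient
    # p directly estimates the p-th derivative
    L = np.column_stack([offsets ** p / factorial(p) for p in range(order + 1)])
    W = L @ np.linalg.inv(L.T @ L)
    return X @ W   # column p holds the p-th derivative estimates

t = np.arange(0, 2 * np.pi, 0.05)
est = glla(np.sin(t), dt=0.05)
# est[:, 1] should track cos(t) at the window centres t[2:-2]
```

In a two-stage ODE fit, these derivative estimates (stage 1) become the outcome variables of a mixed-effects regression (stage 2).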

  9. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.

  10. Meta‐analysis using individual participant data: one‐stage and two‐stage approaches, and why they may differ

    PubMed Central

    Ensor, Joie; Riley, Richard D.

    2016-01-01

    Meta‐analysis using individual participant data (IPD) obtains and synthesises the raw, participant‐level data from a set of relevant studies. The IPD approach is becoming an increasingly popular tool as an alternative to traditional aggregate data meta‐analysis, especially as it avoids reliance on published results and provides an opportunity to investigate individual‐level interactions, such as treatment‐effect modifiers. There are two statistical approaches for conducting an IPD meta‐analysis: one‐stage and two‐stage. The one‐stage approach analyses the IPD from all studies simultaneously, for example, in a hierarchical regression model with random effects. The two‐stage approach derives aggregate data (such as effect estimates) in each study separately and then combines these in a traditional meta‐analysis model. There have been numerous comparisons of the one‐stage and two‐stage approaches via theoretical consideration, simulation and empirical examples, yet there remains confusion regarding when each approach should be adopted, and indeed why they may differ. In this tutorial paper, we outline the key statistical methods for one‐stage and two‐stage IPD meta‐analyses, and provide 10 key reasons why they may produce different summary results. We explain that most differences arise because of different modelling assumptions, rather than the choice of one‐stage or two‐stage itself. We illustrate the concepts with recently published IPD meta‐analyses, summarise key statistical software and provide recommendations for future IPD meta‐analyses. © 2016 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:27747915
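The two-stage route described above can be sketched in a few lines: stage 1 reduces each study's raw participant data to an effect estimate and its variance, and stage 2 pools those with inverse-variance weights. This is a fixed-effect sketch on simulated data (real IPD meta-analyses typically fit regression models per study and often use random effects in stage 2):

```python
import numpy as np

def two_stage_meta(studies):
    """Two-stage IPD meta-analysis sketch.
    Stage 1: per-study mean difference and its variance.
    Stage 2: fixed-effect inverse-variance pooling."""
    est, var = [], []
    for treat, ctrl in studies:
        treat, ctrl = np.asarray(treat, float), np.asarray(ctrl, float)
        est.append(treat.mean() - ctrl.mean())
        var.append(treat.var(ddof=1) / len(treat) + ctrl.var(ddof=1) / len(ctrl))
    w = 1.0 / np.array(var)                  # inverse-variance weights
    pooled = np.sum(w * np.array(est)) / w.sum()
    se = np.sqrt(1.0 / w.sum())              # standard error of pooled effect
    return pooled, se

# four simulated trials with a true mean difference of 1.0
rng = np.random.default_rng(0)
studies = [(rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 50)) for _ in range(4)]
pooled, se = two_stage_meta(studies)
```

A one-stage analysis would instead stack all 400 participants into a single hierarchical model; as the tutorial explains, differences between the two usually trace back to modelling assumptions rather than the staging itself.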

  11. Municipal waste liquor treatment via bioelectrochemical and fermentation (H2 + CH4) processes: Assessment of various technological sequences.

    PubMed

    Rózsenberszki, Tamás; Koók, László; Bakonyi, Péter; Nemestóthy, Nándor; Logroño, Washington; Pérez, Mario; Urquizo, Gladys; Recalde, Celso; Kurdi, Róbert; Sarkady, Attila

    2017-03-01

In this paper, the anaerobic treatment of a high organic-strength wastewater-type feedstock, referred to as the liquid fraction of pressed municipal solid waste (LPW), was studied for energy recovery and organic matter removal. The processes investigated were (i) dark fermentation to produce biohydrogen, (ii) anaerobic digestion for biogas formation and (iii) microbial fuel cells for electrical energy generation. To find a feasible alternative for LPW treatment (meeting the two-fold aims given above), various one- as well as multi-stage processes were tested. The applications were evaluated based on (i) their COD removal efficiencies and (ii) their specific energy gain. Considering the former aspect, the single-stage processes could be ranked as: microbial fuel cell (92.4%) > anaerobic digestion (50.2%) > hydrogen fermentation (8.8%). From the latter standpoint, an order of hydrogen fermentation (2277 J g⁻¹ COD removed d⁻¹) > anaerobic digestion (205 J g⁻¹ COD removed d⁻¹) > microbial fuel cell (0.43 J g⁻¹ COD removed d⁻¹) was attained. The assessment showed that combined, multi-step treatment was necessary to simultaneously achieve efficient organic matter removal and energy recovery from LPW. Therefore, a three-stage system (hydrogen fermentation, biomethanation and bioelectrochemical cell in sequence) was suggested. The different approaches were also characterized via the estimation of COD balance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Lanthanide-alkali double sulfate precipitation from strong sulfuric acid NiMH battery waste leachate.

    PubMed

    Porvali, Antti; Wilson, Benjamin P; Lundström, Mari

    2018-01-01

In NiMH battery leaching, rare earth element (REE) precipitation from sulfate media is often reported as a result of increasing the pH of the pregnant leach solution (PLS). Here we demonstrate that this precipitation is a phenomenon that depends on both Na⁺ and SO₄²⁻ concentrations and not solely on pH. A two-stage leaching of industrially crushed NiMH waste is performed: the first stage consists of H₂SO₄ leaching (2 M H₂SO₄, L/S = 10.4, V = 104 mL, T = 30 °C) and the second stage of H₂O leaching (V = 100 mL, T = 25 °C). Moreover, precipitation experiments are performed separately as a function of added Na₂SO₄ and H₂SO₄. During precipitation, higher than stoichiometric quantities of Na relative to REE are utilized, and this increase in both precipitation reagent concentrations results in improved double sulfate precipitation efficiency. The best REE precipitation efficiencies (98-99%) - achieved by increasing the concentrations of H₂SO₄ and Na₂SO₄ by 1.59 M and 0.35 M, respectively - correspond to a 21.8-fold Na (as Na₂SO₄) and 58.3-fold SO₄ excess in stoichiometric ratio to REE. The results strongly indicate a straightforward approach for REE recovery from NiMH battery waste without the need to increase the pH of the PLS. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Theoretical and experimental study of a gas-coupled two-stage pulse tube cooler with stepped warm displacer as the phase shifter

    NASA Astrophysics Data System (ADS)

    Pang, Xiaomin; Wang, Xiaotao; Dai, Wei; Li, Haibing; Wu, Yinong; Luo, Ercang

    2018-06-01

A compact, high-efficiency cooler working at liquid hydrogen temperature has many important applications, such as cooling superconductors and mid-infrared sensors. This paper presents a two-stage gas-coupled pulse tube cooler system with a completely co-axial configuration. A stepped warm displacer, working as the phase shifter for both stages, is studied theoretically and experimentally. Comparisons with a traditional phase shifter (double inlet) are also made. Compared with the double inlet type, the stepped warm displacer has the advantages of recovering the expansion work from the pulse tube hot end (especially from the first stage) and of easily realizing an appropriate phase relationship between the pressure wave and volume flow rate at the pulse tube hot end. Experiments are then carried out to investigate the performance. With the pressure ratio at the compression space maintained at 1.37, the double inlet system obtains 1.1 W of cooling power at 20 K with 390 W of acoustic power input, for a relative Carnot efficiency of only 3.85%; the stepped warm displacer system obtains 1.06 W of cooling power at 20 K with only 224 W of acoustic power input, and its relative Carnot efficiency reaches 6.5%.
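The relative Carnot efficiencies quoted above are the ratio of the measured COP to the Carnot COP. The abstract does not state the warm-end temperature, so the ~294 K used below is an assumption chosen to be consistent with the reported percentages:

```python
def relative_carnot(q_cold_w, w_in_w, t_cold_k, t_warm_k=294.0):
    """Relative Carnot efficiency of a cryocooler: the measured COP
    (cooling power / input power) divided by the Carnot COP
    T_c / (T_h - T_c). t_warm_k is an assumed ambient temperature."""
    cop = q_cold_w / w_in_w
    cop_carnot = t_cold_k / (t_warm_k - t_cold_k)
    return cop / cop_carnot

double_inlet = relative_carnot(1.1, 390.0, 20.0)    # ~3.9%
stepped_disp = relative_carnot(1.06, 224.0, 20.0)   # ~6.5%
```

The gain comes almost entirely from the smaller input power: the stepped displacer recovers expansion work that the double inlet simply dissipates.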

  14. Performance Evaluation of Staged Bosch Process for CO2 Reduction to Produce Life Support Consumables

    NASA Technical Reports Server (NTRS)

    Vilekar, Saurabh A.; Hawley, Kyle; Junaedi, Christian; Walsh, Dennis; Roychoudhury, Subir; Abney, Morgan B.; Mansell, James M.

    2012-01-01

Utilizing carbon dioxide to produce water, and hence oxygen, is critical for sustained manned missions in space and supports both NASA's cabin Atmosphere Revitalization System (ARS) and In-Situ Resource Utilization (ISRU) concepts. For long-term missions beyond low Earth orbit, where resupply is significantly more difficult and costly, open-loop ARS processes, like Sabatier, consume inputs such as hydrogen. The Bosch process, on the other hand, has the potential to achieve complete loop closure and is hence a preferred choice. However, current single-stage Bosch reactor designs suffer from a large recycle penalty due to slow reaction rates and the inherent limitation in approaching thermodynamic equilibrium. Developmental efforts are seeking to improve upon the efficiency (and hence reduce the recycle penalty) of current single-stage Bosch reactors, which employ traditional steel wool catalysts. Precision Combustion, Inc. (PCI), with support from NASA, has investigated the potential of catalysts supported on short-contact-time Microlith substrates for the Bosch reaction to achieve faster reaction rates, higher conversions, and reduced recycle flow. Proof-of-concept testing was accomplished for a staged Bosch process by splitting the chemistry into two separate reactors, the first being the reverse water-gas shift (RWGS) and the second being the carbon formation reactor (CFR) via hydrogenation and/or the Boudouard reaction. This paper presents the results from this feasibility study at various operating conditions. Additionally, results from two 70-hour durability tests of the RWGS reactor are discussed.

  15. Stability analysis of a two-stage tapered gyrotron traveling-wave tube amplifier with distributed losses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hung, C. L.; Lian, Y. H.; Cheng, N. H.

    2012-11-15

The two-stage tapered gyrotron traveling-wave tube (gyro-TWT) amplifier has achieved wide bandwidth in the millimeter wave range. However, possible oscillations in each stage limit this amplifier's operating beam current and thus its output power. To further enhance the amplifier's stability, distributed losses are applied to the interaction circuit of the two-stage tapered gyro-TWT. A self-consistent particle-tracing code is used for analyzing the beam-wave interactions. The stability analysis includes the effects of the wall losses and the length of each stage on the possible oscillations. Simulation results reveal that the distributed-loss method effectively stabilizes all the oscillations in the two stages. Under stable operating conditions, the device is predicted to produce a peak power of 60 kW with an efficiency of 29% and a saturated gain of 52 dB in the Ka-band. The 3-dB bandwidth is 5.7 GHz, which is approximately 16% of the center frequency.

  16. Robust wafer identification recognition based on asterisk-shape filter and high-low score comparison method.

    PubMed

    Hsu, Wei-Chih; Yu, Tsan-Ying; Chen, Kuan-Liang

    2009-12-10

Wafer identifications (wafer IDs) can be used to distinguish wafers from each other so that wafer processing can be traced easily. Wafer ID recognition is an optical character recognition problem, and the process is similar to that used in recognizing car license-plate characters. However, due to some unique characteristics, such as the irregular spacing between characters and the discontinuous strokes of wafer IDs, directly applying the approaches used in car license-plate recognition does not give good results. Wafer ID scratches are engraved by a laser scribe almost entirely along four fixed directions: horizontal, vertical, plus 45 degrees, and minus 45 degrees. The closer to the center line of a wafer ID scratch, the higher the gray level. These and other characteristics increase the difficulty of recognizing wafer IDs. In this paper, a wafer ID recognition scheme based on an asterisk-shape filter and a high-low score comparison method is proposed to cope with the serious influence of uneven luminance and to make recognition more efficient. Our proposed approach consists of several processing stages. In the final recognition stage, a template-matching method combined with stroke analysis is used as the recognizing scheme; because wafer IDs are composed of Semiconductor Equipment and Materials International (SEMI) standard Arabic numerals and English letters, the template ID images are easy to obtain. Furthermore, unlike approaches that require prior training, such as a support vector machine, which often needs a large number of training image samples, no prior training is required for our approach. The testing results show that our proposed scheme can efficiently and correctly segment out and recognize wafer IDs with high performance.

  17. Aerodynamic Design Study of Advanced Multistage Axial Compressor

    NASA Technical Reports Server (NTRS)

    Larosiliere, Louis M.; Wood, Jerry R.; Hathaway, Michael D.; Medd, Adam J.; Dang, Thong Q.

    2002-01-01

    As a direct response to the need for further performance gains from current multistage axial compressors, an investigation of advanced aerodynamic design concepts that will lead to compact, high-efficiency, and wide-operability configurations is being pursued. Part I of this report describes the projected level of technical advancement relative to the state of the art and quantifies it in terms of basic aerodynamic technology elements of current design systems. A rational enhancement of these elements is shown to lead to a substantial expansion of the design and operability space. Aerodynamic design considerations for a four-stage core compressor intended to serve as a vehicle to develop, integrate, and demonstrate aerotechnology advancements are discussed. This design is biased toward high efficiency at high loading. Three-dimensional blading and spanwise tailoring of vector diagrams guided by computational fluid dynamics (CFD) are used to manage the aerodynamics of the high-loaded endwall regions. Certain deleterious flow features, such as leakage-vortex-dominated endwall flow and strong shock-boundary-layer interactions, were identified and targeted for improvement. However, the preliminary results were encouraging and the front two stages were extracted for further aerodynamic trimming using a three-dimensional inverse design method described in part II of this report. The benefits of the inverse design method are illustrated by developing an appropriate pressure-loading strategy for transonic blading and applying it to reblade the rotors in the front two stages of the four-stage configuration. Multistage CFD simulations based on the average passage formulation indicated an overall efficiency potential far exceeding current practice for the front two stages. Results of the CFD simulation at the aerodynamic design point are interrogated to identify areas requiring additional development. 
In spite of the significantly higher aerodynamic loadings, advanced CFD-based tools were able to effectively guide the design of a very efficient axial compressor under state-of-the-art aeromechanical constraints.

  18. Optical Flow in a Smart Sensor Based on Hybrid Analog-Digital Architecture

    PubMed Central

    Guzmán, Pablo; Díaz, Javier; Agís, Rodrigo; Ros, Eduardo

    2010-01-01

The purpose of this study is to develop a motion sensor (delivering optical flow estimations) using a platform that includes the sensor itself, focal plane processing resources, and co-processing resources on a general purpose embedded processor, all implemented on a single device as a SoC (System-on-a-Chip). Optical flow is the 2-D projection onto the camera plane of the 3-D motion information present in the world scene. This motion representation is widely known and is applied in the scientific community to solve a wide variety of problems. Most applications based on motion estimation must run in real time, and this restriction must be taken into account. In this paper, we show an efficient approach to estimating the motion velocity vectors with an architecture based on a focal plane processor combined on-chip with a 32-bit NIOS II processor. Our approach relies on the simplification of the original optical flow model and its efficient implementation on a platform that combines an analog (focal-plane) and a digital (NIOS II) processor. The system is fully functional and is organized in different stages, where the early processing (focal plane) stage mainly pre-processes the input image stream to reduce the computational cost in the post-processing (NIOS II) stage. We present the employed co-design techniques and analyze this novel architecture. We evaluate the system's performance and accuracy with respect to the different approaches described in the literature. We also discuss the advantages of the proposed approach as well as the degree of efficiency that can be obtained from the focal plane processing capabilities of the system. The final outcome is a low-cost smart sensor for optical flow computation with real-time performance and reduced power consumption that can be used for very diverse application domains. PMID:22319283

  19. Two stage algorithm vs commonly used approaches for the suspect screening of complex environmental samples analyzed via liquid chromatography high resolution time of flight mass spectroscopy: A test study.

    PubMed

    Samanipour, Saer; Baz-Lomba, Jose A; Alygizakis, Nikiforos A; Reid, Malcolm J; Thomaidis, Nikolaos S; Thomas, Kevin V

    2017-06-09

LC-HR-QTOF-MS has recently become a commonly used approach for the analysis of complex samples. However, identifying small organic molecules in complex samples with the highest level of confidence is a challenging task. Here we report on the implementation of a two stage algorithm for LC-HR-QTOF-MS datasets. We compared the performance of the two stage algorithm, implemented via NIVA_MZ_Analyzer™, with two commonly used approaches (i.e. feature detection and XIC peak picking, implemented via UNIFI by Waters and TASQ by Bruker, respectively) for the suspect analysis of four influent wastewater samples. We first evaluated the cross-platform compatibility of LC-HR-QTOF-MS datasets generated via instruments from two different manufacturers (i.e. Waters and Bruker). Our data showed that, with an appropriate spectral weighting function, the spectra recorded by the two tested instruments are comparable for our analytes. As a consequence, we were able to perform full spectral comparison between the data generated via the two studied instruments. Four extracts of wastewater influent were analyzed for 89 analytes, giving 356 detection cases. The analytes were divided into 158 detection cases of artificial suspect analytes (i.e. verified by target analysis) and 198 true suspects. The two stage algorithm produced a zero rate of false positive detection, based on the artificial suspect analytes, with a rate of false negative detection of 0.12. For the conventional approaches, the rates of false positive detection varied between 0.06 for UNIFI and 0.15 for TASQ. The rates of false negative detection for these methods ranged between 0.07 for TASQ and 0.09 for UNIFI. The effect of background signal complexity on the two stage algorithm was evaluated through the generation of a synthetic signal, and we discuss the boundaries of applicability of the two stage algorithm. We also assessed the importance of background knowledge and experience in evaluating the reliability of results during suspect screening. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations.

    PubMed

    Wang, Jiaxi; Gronalt, Manfred; Sun, Yan

    2017-01-01

Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in moving passengers and goods. Within the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple unit (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach determines an optimal size for the pool of shunting drivers. The second stage is formulated as a bi-objective optimization model, in which we jointly consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, the regression analysis yields a driver-size predictor and the sensitivity analysis gives some interesting insights that are useful for decision makers.
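The Pareto-filter step in the second stage can be illustrated with a minimal non-domination check over two minimisation objectives (a generic filter, not the paper's modified algorithm; maximizing workload balance is recast here as minimizing imbalance, and the candidate points are made up):

```python
def pareto_filter(points):
    """Keep only non-dominated points for two minimisation objectives.
    A point p is dominated if some other point q is no worse in both
    objectives (and differs from p)."""
    frontier = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            frontier.append(p)
    return frontier

# candidate driver assignments: (total walking distance, workload imbalance)
cands = [(10, 5), (8, 7), (12, 4), (9, 6), (11, 6)]
front = pareto_filter(cands)  # (11, 6) is dominated by (9, 6)
```

Each surviving point represents a different distance/balance trade-off, from which a decision maker picks a final assignment.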

  1. A two-stage approach to the depot shunting driver assignment problem with workload balance considerations

    PubMed Central

    Wang, Jiaxi; Gronalt, Manfred; Sun, Yan

    2017-01-01

Due to its environmentally sustainable and energy-saving characteristics, railway transportation nowadays plays a fundamental role in moving passengers and goods. Within the area of transportation planning, the crew (workforce) sizing problem and the crew scheduling problem have attracted great attention from the railway industry and the scientific community. In this paper, we aim to solve the two problems by proposing a novel two-stage optimization approach in the context of the electric multiple unit (EMU) depot shunting driver assignment problem. Given a predefined depot shunting schedule, the first stage of the approach determines an optimal size for the pool of shunting drivers. The second stage is formulated as a bi-objective optimization model, in which we jointly consider the objectives of minimizing the total walking distance and maximizing the workload balance. We then combine the normalized normal constraint method with a modified Pareto filter algorithm to obtain Pareto solutions for the bi-objective optimization problem. Furthermore, we conduct a series of numerical experiments to demonstrate the proposed approach. Based on the computational results, the regression analysis yields a driver-size predictor and the sensitivity analysis gives some interesting insights that are useful for decision makers. PMID:28704489

  2. A Two-Stage Approach to Missing Data: Theory and Application to Auxiliary Variables

    ERIC Educational Resources Information Center

    Savalei, Victoria; Bentler, Peter M.

    2009-01-01

    A well-known ad-hoc approach to conducting structural equation modeling with missing data is to obtain a saturated maximum likelihood (ML) estimate of the population covariance matrix and then to use this estimate in the complete data ML fitting function to obtain parameter estimates. This 2-stage (TS) approach is appealing because it minimizes a…

  3. Stages as models of scene geometry.

    PubMed

    Nedović, Vladimir; Smeulders, Arnold W M; Redert, André; Geusebroek, Jan-Mark

    2010-09-01

    Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations.

  4. Core compressor exit stage study. 1: Aerodynamic and mechanical design

    NASA Technical Reports Server (NTRS)

    Burdsall, E. A.; Canal, E., Jr.; Lyons, K. A.

    1979-01-01

The effect of aspect ratio on the performance of core compressor exit stages was demonstrated using two three-stage, highly loaded core compressors. Aspect ratio was identified as having a strong influence on compressor endwall loss. Both compressors simulated the last three stages of an advanced eight-stage core compressor and were designed with the same 0.915 hub/tip ratio, 4.30 kg/sec (9.47 lbm/sec) inlet corrected flow, and 167 m/sec (547 ft/sec) corrected mean wheel speed. The first compressor had an aspect ratio of 0.81 and an overall pressure ratio of 1.357 at a design adiabatic efficiency of 88.3% with an average diffusion factor of 0.529. The aspect ratio of the second compressor was 1.22, with an overall pressure ratio of 1.324 at a design adiabatic efficiency of 88.7% and an average diffusion factor of 0.491.

  5. Establishment of an efficient virus-induced gene silencing (VIGS) assay in Arabidopsis by Agrobacterium-mediated rubbing infection.

    PubMed

    Manhães, Ana Marcia E de A; de Oliveira, Marcos V V; Shan, Libo

    2015-01-01

Several VIGS protocols have been established for high-throughput functional genomic screens, as VIGS bypasses the time-consuming and laborious process of generating transgenic plants. The silencing efficiency of this approach is largely hindered by a technically demanding step in which the first pair of newly emerged true leaves at the 2-week-old stage is infiltrated with a needleless syringe. To further optimize VIGS efficiency and achieve rapid inoculation for large-scale functional genomic studies, here we describe a protocol for an efficient VIGS assay in Arabidopsis using Agrobacterium-mediated rubbing infection. The Agrobacterium inoculation is performed by simply rubbing the leaves with Filter Agent Celite® 545. The highly efficient and uniform silencing effect was indicated by the development of a visible albino phenotype, due to silencing of the Cloroplastos alterados 1 (CLA1) gene, in the newly emerged leaves. In addition, the albino phenotype could be observed in stems and flowers, indicating the protocol's potential application for gene functional studies in the late vegetative development and flowering stages.

  6. Comparative Study of Survival following Videothoracoscopic Lobectomy Procedures for Lung Cancer: Single- versus Multiple-port Approaches.

    PubMed

    Borro, José M; Regueiro, Francisco; Pértega, Sonia; Constenla, Manuel; Pita, Salvador

    2017-04-01

    Video-assisted thoracoscopic surgery (VATS) has become the technique of choice for early-stage lung cancer in many centers, although there is no evidence that all of the surgical approaches achieve the same long-term survival. We carried out a retrospective review of 276 VATS lobectomies performed in our department, analyzing age, sex, comorbidities, smoking status, FEV1 and FVC, surgical approach, TNM and pathological stage, histologic type, neoadjuvant or adjuvant chemotherapy, and time to relapse and metastasis, with the main aim of evaluating survival and disease-free time, especially with regard to the two/three-port versus single-port approach. The one- and four-year overall survival rates were 88.1% and 67.6%, respectively. Bivariate analysis found that the variables associated with survival were comorbidity, histological type, stage, surgical approach and need for chemotherapy. When we independently analyzed the surgical approach, we found a lower survival rate in the single-port group than in the two/three-port (VATS) group. Stratifying by tumor stage (stage I) and by tumor size (T2), survival was significantly lower for patients in the single-port group than with the multi-port VATS approach. In the multivariate analysis, the single-port approach was associated with a higher risk of death (HR=1.78). In analyzing disease-free survival, differences were found in both cases in favor of two/three-port VATS: p=.093 for local relapses and p=.091 for the development of metastasis. These results challenge the use of the single-port technique in malignant lung pathologies, suggesting the need for clinical trials to identify the role this technique may have in lung cancer surgery. Copyright © 2016 SEPAR. Published by Elsevier España, S.L.U. All rights reserved.

  7. Efficient generation of discontinuity-preserving adaptive triangulations from range images.

    PubMed

    Garcia, Miguel Angel; Sappa, Angel Domingo

    2004-10-01

    This paper presents an efficient technique for generating adaptive triangular meshes from range images. The algorithm consists of two stages. First, a user-defined number of points is adaptively sampled from the given range image. Those points are chosen by taking into account the surface shapes represented in the range image, in such a way that points tend to group in areas of high curvature and to disperse in low-variation regions. This selection process is done through a noniterative, inherently parallel algorithm in order to gain efficiency. Once the image has been subsampled, the second stage applies a 2.5-dimensional Delaunay triangulation to obtain an initial triangular mesh. To favor the preservation of surface and orientation discontinuities (jump and crease edges) present in the original range image, the aforementioned triangular mesh is iteratively modified by applying an efficient edge-flipping technique. Results with real range images show accurate triangular approximations of the given range images with low processing times.
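
    The two-stage idea can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: curvature is approximated by the absolute Laplacian of the range image, sample points are drawn non-iteratively with probability proportional to it, and SciPy's Delaunay routine builds the 2.5D mesh (the discontinuity-preserving edge-flipping stage is omitted).

```python
import numpy as np
from scipy.spatial import Delaunay

def adaptive_sample(depth, n_points, seed=None):
    """Sample pixel locations with probability proportional to local curvature
    (approximated by the absolute Laplacian of the range image), so that
    samples cluster in high-curvature areas and thin out on flat regions."""
    rng = np.random.default_rng(seed)
    lap = np.abs(
        -4 * depth
        + np.roll(depth, 1, 0) + np.roll(depth, -1, 0)
        + np.roll(depth, 1, 1) + np.roll(depth, -1, 1)
    )
    w = lap.ravel() + 1e-6          # small floor so flat areas keep some samples
    idx = rng.choice(depth.size, size=n_points, replace=False, p=w / w.sum())
    rows, cols = np.unravel_index(idx, depth.shape)
    return np.column_stack([rows, cols])

# Stage 1: non-iterative curvature-weighted subsampling of a synthetic range image
y, x = np.mgrid[0:64, 0:64]
depth = np.where(x < 32, 1.0, 2.0) + 0.01 * y   # a jump edge at x == 32
pts = adaptive_sample(depth, 300, seed=0)

# Stage 2: 2.5D Delaunay triangulation of the sampled (row, col) points; each
# vertex keeps its depth value, giving an initial triangular mesh of the surface
tri = Delaunay(pts)
print(len(tri.simplices) > 0)
```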

  8. Automated recognition system for power quality disturbances

    NASA Astrophysics Data System (ADS)

    Abdelgalil, Tarek

    The application of deregulation policies in electric power systems has resulted in the necessity to quantify the quality of electric power. This fact highlights the need for a new monitoring strategy capable of tracking, detecting, and classifying power quality disturbances, and then identifying the source of the disturbance. The objective of this work is to design an efficient and reliable power quality monitoring strategy that uses advances in signal processing and pattern recognition to overcome the deficiencies that exist in power quality monitoring devices. The proposed monitoring strategy has two stages. The first stage is to detect, track, and classify any power quality violation by the use of on-line measurements. In the second stage, the source of the classified power quality disturbance must be identified. In the first stage, an adaptive linear combiner is used to detect power quality disturbances. Then, the Teager Energy Operator and Hilbert Transform are utilized for power quality event tracking. After the Fourier, Wavelet, and Walsh Transforms are employed for feature extraction, two approaches are exploited to classify the different power quality disturbances. The first approach depends on comparing the disturbance to be classified with a stored set of signatures for different power quality disturbances; the comparison is developed by using Hidden Markov Models and Dynamic Time Warping. The second approach depends on employing inductive inference to generate the classification rules directly from the data. In the second stage of the new monitoring strategy, only the problem of identifying the location of the switched capacitor that initiates the transients is investigated. The Total Least Squares-Estimation of Signal Parameters via Rotational Invariance Technique is adopted to estimate the amplitudes and frequencies of the various modes contained in the voltage signal measured at the facility entrance.
After extracting the amplitudes and frequencies, an Artificial Neural Network is employed to identify the switched capacitor by using the amplitudes and frequencies extracted from the transient signal. The new algorithms for detecting, tracking, and classifying power quality disturbances demonstrate the potential for further development of a fully automated recognition system for the assessment of power quality. This is possible because implementing the proposed algorithms in a power quality monitoring device becomes a straightforward process by modifying the device software.
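
    The Teager Energy Operator used for event tracking is simple to state and cheap to compute. As an illustrative sketch (synthetic numbers, not the thesis's implementation), the discrete operator applied to a 50 Hz waveform reacts instantly to an amplitude swell:

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    For a pure sinusoid A*sin(w*n + phase) it equals A^2 * sin(w)^2 exactly,
    so a sudden amplitude or frequency change shows up as a step in psi."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

fs = 3200                        # samples per second (64 per 50 Hz cycle)
t = np.arange(2 * fs) / fs       # two seconds of a 50 Hz "voltage" waveform
v = np.sin(2 * np.pi * 50 * t)
v[t >= 1.0] *= 1.5               # a voltage swell: amplitude steps up at t = 1 s

psi = teager_energy(v)
before = psi[: fs - 3].mean()    # operator output before the swell
after = psi[fs:].mean()          # operator output after the swell
print(round(after / before, 2))  # amplitude ratio squared: 1.5**2 = 2.25
```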

  9. Performance of Single-Stage Turbine of Mark 25 Torpedo Power Plant with Two Special Nozzles. II; Efficiency with 20 Degrees-Inlet-Angle Rotor Blades

    NASA Technical Reports Server (NTRS)

    Schum, Harold J.; Whitney, Warren J.

    1949-01-01

    A single-stage modification of the turbine from a Mark 25 torpedo power plant was investigated to determine its performance with two nozzle designs in combination with special rotor blades having a 20° inlet angle. The performance is presented in terms of blade, rotor, and brake efficiency as a function of blade-jet speed ratio for pressure ratios of 8, 15 (design), and 20. The blade efficiency with the nozzle having circular passages (K) was equal to or higher than that with the nozzle having rectangular passages (J) for all pressure ratios and speeds investigated. The maximum blade efficiency of 0.571 was obtained with nozzle K at a pressure ratio of 8 and a blade-jet speed ratio of 0.296. The difference in blade efficiency was negligible at a pressure ratio of 8 at the low speeds; the maximum difference was 0.040 at a pressure ratio of 20 and a blade-jet speed ratio of 0.260.

  10. Evaluating the effectiveness of restoring longitudinal connectivity for stream fish communities: towards a more holistic approach.

    PubMed

    Tummers, Jeroen S; Hudson, Steve; Lucas, Martyn C

    2016-11-01

    A more holistic approach towards testing longitudinal connectivity restoration is needed in order to establish that the intended ecological functions of such restoration are achieved. We illustrate the use of a multi-method scheme to evaluate the effectiveness of 'nature-like' connectivity restoration for stream fish communities in the River Deerness, NE England. Electric-fishing, capture-mark-recapture, PIT telemetry and radio-telemetry were used to measure fish community composition, dispersal, fishway efficiency and upstream migration, respectively. For measuring passage and dispersal, our rationale was to evaluate a wide size range of strong swimmers (exemplified by brown trout Salmo trutta) and weak swimmers (exemplified by bullhead Cottus perifretum) in situ in the stream ecosystem. Radio-tracking of adult trout during the spawning migration showed that passage efficiency at each of five connectivity-restored sites was 81.3-100%. Unaltered (experimental control) structures on the migration route had a bottleneck effect on upstream migration, especially during low flows. However, even during low flows, displaced PIT-tagged juvenile trout (total n=153) exhibited a passage efficiency of 70.1-93.1% at two nature-like passes. In mark-recapture experiments, juvenile brown trout and bullhead tagged (total n=5303) succeeded in dispersing upstream more often at most structures following obstacle modification, but not at the two control sites, based on a Laplace kernel modelling approach of observed dispersal distance and barrier traverses. Medium-term post-restoration data (2-3 years) showed that the fish assemblage remained similar at five of six connectivity-restored sites and two control sites, but at one connectivity-restored headwater site previously inhabited by trout only, three native non-salmonid species colonized.
We conclude that stream habitat reconnection should support free movement of a wide range of species and life stages, wherever retention of such obstacles is not needed to manage non-native invasive species. Evaluation of the effectiveness of fish community restoration in degraded streams benefits from a similarly holistic approach. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Assessing Sustainability Curriculum: From Transmissive to Transformative Approaches

    ERIC Educational Resources Information Center

    Gaard, Greta C.; Blades, Jarod; Wright, Mary

    2017-01-01

    Purpose: This paper aims to describe a two-stage sustainability curriculum assessment, providing tools and strategies for other faculty to use in implementing their own sustainability assessments. Design/methodology/approach: In the first stage of the five-year curriculum assessment, the authors used an anonymous survey of sustainability faculty…

  12. Dose finding with the sequential parallel comparison design.

    PubMed

    Wang, Jessie J; Ivanova, Anastasia

    2014-01-01

    The sequential parallel comparison design (SPCD) is a two-stage design recommended for trials with possibly high placebo response. A drug-placebo comparison in the first stage is followed in the second stage by placebo nonresponders being re-randomized between drug and placebo. We describe how SPCD can be used in trials where multiple doses of a drug or multiple treatments are compared with placebo and present two adaptive approaches. We detail how to analyze data in such trials and give recommendations about the allocation proportion to placebo in the two stages of SPCD.
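
    A minimal simulation sketch can make the two-stage SPCD structure concrete. The response rates, the assumption that placebo response halves in the enriched second-stage population, and the simple fixed-weight pooling below are all hypothetical illustrations, not the authors' estimator:

```python
import random

def spcd_trial(n, p_drug, p_placebo, alloc_placebo=0.5, rng=random):
    """Simulate one SPCD trial with a binary response (hypothetical rates).
    Stage 1: randomize n subjects between drug and placebo.
    Stage 2: placebo non-responders only are re-randomized between drug and
    placebo, a population enriched for low placebo response.
    Returns the drug-minus-placebo risk difference from each stage."""
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0

    stage1 = {"drug": [], "placebo": []}
    n_placebo_nonresp = 0
    for _ in range(n):
        arm = "placebo" if rng.random() < alloc_placebo else "drug"
        responded = rng.random() < (p_drug if arm == "drug" else p_placebo)
        stage1[arm].append(responded)
        if arm == "placebo" and not responded:
            n_placebo_nonresp += 1

    stage2 = {"drug": [], "placebo": []}
    for _ in range(n_placebo_nonresp):
        arm = "placebo" if rng.random() < alloc_placebo else "drug"
        # placebo response assumed halved in the enriched population
        p = p_drug if arm == "drug" else p_placebo / 2
        stage2[arm].append(rng.random() < p)

    return (mean(stage1["drug"]) - mean(stage1["placebo"]),
            mean(stage2["drug"]) - mean(stage2["placebo"]))

d1, d2 = spcd_trial(2000, p_drug=0.5, p_placebo=0.4, rng=random.Random(7))
w = 0.6                     # prespecified stage-1 weight
combined = w * d1 + (1 - w) * d2
print(combined > 0)         # pooled estimate detects the drug effect
```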

  13. Continuous hydrogen and methane production from Agave tequilana bagasse hydrolysate by sequential process to maximize energy recovery efficiency.

    PubMed

    Montiel Corona, Virginia; Razo-Flores, Elías

    2018-02-01

    Continuous H2 and CH4 production in a two-stage process to increase energy recovery from agave bagasse enzymatic hydrolysate was studied. In the first stage, the effect of organic loading rate (OLR) and stirring speed on volumetric hydrogen production rate (VHPR) was evaluated in a continuous stirred tank reactor (CSTR); by controlling homoacetogenesis with the agitation speed and maintaining an OLR of 44 g COD/L-d, it was possible to reach a VHPR of 6 L H2/L-d, equivalent to 1.34 kJ/g bagasse. In the second stage, the effluent from the CSTR was used as substrate to feed a UASB reactor for CH4 production. A volumetric methane production rate (VMPR) of 6.4 L CH4/L-d was achieved with a high OLR (20 g COD/L-d) and short hydraulic retention time (HRT, 14 h), producing 225 mL CH4/g bagasse, equivalent to 7.88 kJ/g bagasse. The two-stage continuous process significantly increased energy conversion efficiency (56%) compared to one-stage hydrogen production (8.2%). Copyright © 2017 Elsevier Ltd. All rights reserved.
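
    The quoted energy figures are internally consistent, as a quick back-of-envelope check shows (numbers taken directly from the abstract):

```python
# Back-of-envelope check of the energy figures quoted in the abstract
h2 = 1.34                  # kJ recovered as H2 per g of bagasse (stage 1)
ch4 = 7.88                 # kJ recovered as CH4 per g of bagasse (stage 2)
two_stage = h2 + ch4
print(round(two_stage, 2))           # 9.22 kJ/g recovered in total

# At the reported 56% two-stage efficiency, the implied energy content
# of the bagasse substrate is:
content_two_stage = two_stage / 0.56
print(round(content_two_stage, 1))   # ~16.5 kJ/g

# Cross-check with the one-stage figures: 1.34 kJ/g at 8.2% efficiency
content_one_stage = h2 / 0.082
print(round(content_one_stage, 1))   # ~16.3 kJ/g, consistent with the above
```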

  14. Design analysis of a Helium re-condenser

    NASA Astrophysics Data System (ADS)

    Muley, P. K.; Bapat, S. L.; Atrey, M. D.

    2017-02-01

    Modern helium cryostats deploy a cryocooler with a re-condenser at its second stage for in-situ re-condensation of boil-off vapor. The present work is a vital step in the ongoing design of a cryocooler-based 100 litre helium cryostat with in-situ re-condensation. The cryostat incorporates a two-stage Gifford-McMahon cryocooler with a specified refrigerating capacity of 40 W at 43 K for the first stage and 1 W at 4.2 K for the second stage. Although the cryostat design ensures that the thermal load on the cryocooler stays below its specified second-stage refrigerating capacity, successful in-situ re-condensation depends on proper design of the re-condenser, which forms the objective of this work. The present work proposes a helium re-condenser design with straight rectangular fins. The fins are analyzed to optimize thermal performance parameters such as condensation heat transfer coefficient, surface area for heat transfer, re-condensing capacity, efficiency and effectiveness. Keeping manufacturing feasibility in mind, the proposed re-condenser has 19 integral fins, each 10 mm high and 1.5 mm thick with a 1.5 mm gap between adjacent fins, giving an efficiency of 80.96% and an effectiveness of 10.34.
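
    The classical straight-rectangular-fin relations behind such an analysis can be sketched as follows. The heat transfer coefficient and conductivity below are illustrative placeholders, not the paper's cryogenic operating point, so the printed numbers do not reproduce the quoted 80.96% and 10.34:

```python
import math

def straight_fin(h, k, L, t):
    """Standard straight rectangular fin relations (per unit fin width).
    h: condensation heat transfer coefficient [W/m^2 K]
    k: fin thermal conductivity [W/m K]
    L: fin height [m], t: fin thickness [m]
    Returns (efficiency, effectiveness)."""
    Lc = L + t / 2.0                    # corrected length (adiabatic-tip model)
    m = math.sqrt(2.0 * h / (k * t))    # fin parameter
    eta = math.tanh(m * Lc) / (m * Lc)  # efficiency: actual / ideal heat rate
    # effectiveness: fin heat rate / heat rate through the bare base area
    eps = eta * (2.0 * Lc) / t
    return eta, eps

# Illustrative values only: a 10 mm x 1.5 mm copper fin with an assumed
# condensation coefficient of 400 W/m^2 K
eta, eps = straight_fin(h=400.0, k=600.0, L=0.010, t=0.0015)
print(round(eta, 3), round(eps, 2))
```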

  15. Advanced Thermal Conversion Systems

    DTIC Science & Technology

    2015-03-18

    increase electron emission from the cathode. A two-stage converter, a PETE topping stage followed by a thermoelectric bottoming stage, is projected to have a... illustrated by the energy-band diagrams in Fig. 1. In that aspect, PETE converters are similar to photovoltaic (PV) cells, but unlike PV cells, PETE... photovoltaic cells at 3000x concentration (~38%). As shown in Fig. 2(b), the highest conversion efficiencies are obtained by using photo-cathodes

  16. Reflexive Learning: Stages towards Wisdom with Dreyfus

    ERIC Educational Resources Information Center

    McPherson, Ian

    2005-01-01

    The Dreyfus (2001) account of seven stages of learning is considered in the context of the Dreyfus (1980s) account of five stages of skill development. The two new stages, Mastery and Practical Wisdom, make more explicit certain themes implicit in the five-stage account. In this way Dreyfus (2001) encourages a more reflexive approach. The themes…

  17. A Simple and Computationally Efficient Approach to Multifactor Dimensionality Reduction Analysis of Gene-Gene Interactions for Quantitative Traits

    PubMed Central

    Gui, Jiang; Moore, Jason H.; Williams, Scott M.; Andrews, Peter; Hillege, Hans L.; van der Harst, Pim; Navis, Gerjan; Van Gilst, Wiek H.; Asselbergs, Folkert W.; Gilbert-Diamond, Diane

    2013-01-01

    We present an extension of the two-class multifactor dimensionality reduction (MDR) algorithm that enables detection and characterization of epistatic SNP-SNP interactions in the context of a quantitative trait. The proposed Quantitative MDR (QMDR) method handles continuous data by modifying MDR’s constructive induction algorithm to use a T-test. QMDR replaces the balanced accuracy metric with a T-test statistic as the score to determine the best interaction model. We used a simulation to identify the empirical distribution of QMDR’s testing score. We then applied QMDR to genetic data from the ongoing prospective Prevention of Renal and Vascular End-Stage Disease (PREVEND) study. PMID:23805232
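
    The core QMDR scoring step can be sketched as follows, on synthetic genotypes. This is a simplified illustration; as the abstract notes, the null distribution of the score must be obtained empirically (e.g. by simulation or permutation), because the high/low pooling is data-driven and the score is positive by construction:

```python
import numpy as np

def qmdr_score(snp_a, snp_b, y):
    """QMDR-style score for one SNP pair and a quantitative trait y.
    Each two-locus genotype cell is pooled into a 'high' group if its mean
    trait value exceeds the grand mean (MDR's constructive induction); the
    Welch t statistic between the pooled high and low groups is the score."""
    grand = y.mean()
    high = np.zeros(len(y), dtype=bool)
    for ga in np.unique(snp_a):
        for gb in np.unique(snp_b):
            cell = (snp_a == ga) & (snp_b == gb)
            if cell.any() and y[cell].mean() > grand:
                high[cell] = True
    g1, g0 = y[high], y[~high]
    if len(g1) < 2 or len(g0) < 2:
        return 0.0
    se = np.sqrt(g1.var(ddof=1) / len(g1) + g0.var(ddof=1) / len(g0))
    return (g1.mean() - g0.mean()) / se

rng = np.random.default_rng(0)
n = 400
a = rng.integers(0, 3, size=n)        # genotypes coded 0/1/2
b = rng.integers(0, 3, size=n)
y = rng.normal(size=n) + 1.0 * ((a == 2) & (b == 2))  # epistatic effect on y

score = qmdr_score(a, b, y)
print(score > 0)   # positive by construction; significance needs permutation
```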

  18. An improved Huffman coding with encryption for Radio Data System (RDS) for smart transportation

    NASA Astrophysics Data System (ADS)

    Wu, C. H.; Tseng, Kuo-Kun; Ng, C. K.; Ho, G. T. S.; Zeng, Fu-Fu; Tse, Y. K.

    2018-02-01

    As Radio Data System (RDS) technology and its applications gain more attention and promotion, people are increasingly concerned about personal privacy and communication efficiency, so compression and encryption technologies are becoming more important for transferring RDS data. Unlike most current approaches, which contain two stages, compression and encryption, we propose a new algorithm called Swapped Huffman Table (SHT), based on the Huffman algorithm, to realise compression and encryption in a single process. In this paper, good performance for both compression and encryption is obtained, and a possible application of RDS with the proposed algorithm in smart transportation is illustrated.
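
    The idea of folding encryption into the Huffman table can be illustrated with a key-dependent permutation of codewords among symbols of equal code length, which leaves the compression ratio untouched while making the bitstream unreadable without the key. This is a hedged sketch of the concept, not the paper's exact SHT algorithm:

```python
import heapq
import random
from collections import Counter

def huffman_code(text):
    """Plain Huffman coding: returns a prefix-free {symbol: bitstring} table."""
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, i, merged))
        i += 1
    return heap[0][2]

def swapped_table(code, key):
    """Key-dependent permutation of codewords among symbols whose codes have
    equal length: code lengths (hence the compression ratio) are unchanged,
    but decoding without the key yields garbage."""
    rng = random.Random(key)
    by_len = {}
    for sym, bits in code.items():
        by_len.setdefault(len(bits), []).append(sym)
    table = {}
    for syms in by_len.values():
        words = [code[s] for s in syms]
        rng.shuffle(words)
        table.update(zip(syms, words))
    return table

text = "this is an example of a huffman tree"
table = swapped_table(huffman_code(text), key=1234)
encoded = "".join(table[c] for c in text)

# Only a receiver holding the key can rebuild the same table and decode
inv = {bits: sym for sym, bits in table.items()}
decoded, buf = [], ""
for bit in encoded:
    buf += bit
    if buf in inv:
        decoded.append(inv[buf])
        buf = ""
print("".join(decoded) == text)
```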

  19. One-Pot Synthesis of N-Substituted β-Amino Alcohols from Aldehydes and Isocyanides.

    PubMed

    Cioc, Răzvan C; van der Niet, Daan J H; Janssen, Elwin; Ruijter, Eelco; Orru, Romano V A

    2015-05-18

    A practical two-stage one-pot synthesis of N-substituted β-amino alcohols using aldehydes and isocyanides as starting materials has been developed. This method features mild reaction conditions, broad scope, and general tolerance of functional groups. Based on a less common central carbon-carbon bond disconnection, this protocol complements traditional approaches that involve amines and various carbon electrophiles (epoxides, α-halo ketones, β-halohydrins). Medicinally relevant products can be prepared in a concise and efficient way from simple building blocks, as demonstrated in the synthesis of the antiasthma drug salbutamol. Upgrading the synthesis to an enantioselective variant is also feasible. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Optimal auxiliary-covariate-based two-phase sampling design for semiparametric efficient estimation of a mean or mean difference, with application to clinical trials.

    PubMed

    Gilbert, Peter B; Yu, Xuesong; Rotnitzky, Andrea

    2014-03-15

    To address the objective in a clinical trial to estimate the mean or mean difference of an expensive endpoint Y, one approach employs a two-phase sampling design, wherein inexpensive auxiliary variables W predictive of Y are measured in everyone, Y is measured in a random sample, and the semiparametric efficient estimator is applied. This approach is made efficient by specifying the phase-two selection probabilities as optimal functions of the auxiliary variables and measurement costs. While this approach is familiar to survey samplers, it apparently has seldom been used in clinical trials, and several novel results practicable for clinical trials are developed. We perform simulations to identify settings where the optimal approach significantly improves efficiency compared to approaches in current practice. We provide proofs and R code. The optimality results are developed to design an HIV vaccine trial, with objective to compare the mean 'importance-weighted' breadth (Y) of the T-cell response between randomized vaccine groups. The trial collects an auxiliary response (W) highly predictive of Y and measures Y in the optimal subset. We show that the optimal design-estimation approach can confer anywhere between absent and large efficiency gain (up to 24% in the examples) compared to the approach with the same efficient estimator but simple random sampling, where greater variability in the cost-standardized conditional variance of Y given W yields greater efficiency gains. Accurate estimation of E[Y | W] is important for realizing the efficiency gain, which is aided by an ample phase-two sample and by using a robust fitting method. Copyright © 2013 John Wiley & Sons, Ltd.
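
    For the special case of a discrete auxiliary W, Neyman-type allocation gives phase-two selection probabilities proportional to the conditional standard deviation of Y over the square root of cost. The sketch below is a simplified stratified instance of this idea with hypothetical strata, not the paper's more general semiparametric design:

```python
import math

def optimal_phase2_probs(strata, budget):
    """Neyman-type phase-two sampling: within stratum k of the auxiliary W,
    sample Y with probability proportional to sd(Y | W=k) / sqrt(cost_k),
    scaled so the expected measurement cost meets the budget.
    `strata` maps stratum -> (n_k, sd_k, cost_k)."""
    raw = {k: sd / math.sqrt(c) for k, (_, sd, c) in strata.items()}
    # scale so that sum_k n_k * pi_k * cost_k == budget, capping each pi at 1
    scale = budget / sum(n * raw[k] * c for k, (n, sd, c) in strata.items())
    return {k: min(1.0, scale * raw[k]) for k in strata}

# Hypothetical trial: W stratifies subjects into low/medium/high predicted Y,
# with the residual sd of Y given W varying by stratum and a flat assay cost
strata = {
    "low":    (500, 0.5, 100.0),   # (stratum size, sd(Y|W), cost per assay)
    "medium": (300, 1.0, 100.0),
    "high":   (200, 2.0, 100.0),
}
pi = optimal_phase2_probs(strata, budget=40_000.0)
print({k: round(p, 3) for k, p in pi.items()})
```

The more variable (and no more costly) a stratum is, the more heavily it is sampled, which is exactly the mechanism behind the efficiency gains described in the abstract.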

  2. Distinguishing crystallization stages and their influence on quantum efficiency during perovskite solar cell formation in real-time.

    PubMed

    Wagner, Lukas; Mundt, Laura E; Mathiazhagan, Gayathri; Mundus, Markus; Schubert, Martin C; Mastroianni, Simone; Würfel, Uli; Hinsch, Andreas; Glunz, Stefan W

    2017-11-02

    Relating crystallization of the absorber layer in a perovskite solar cell (PSC) to device performance is a key challenge for process development and for in-depth understanding of these highly efficient solar cells. A novel approach that enables real-time photo-physical and electrical characterization using a graphite-based PSC is introduced in this work. In our graphite-based PSC, the device architecture of porous monolithic contact layers makes it possible to perform photovoltaic measurements while the perovskite crystallizes within this scaffold. The kinetics of crystallization in a solution-based two-step formation process have been analyzed by real-time measurement of the external photon-to-electron quantum efficiency as well as the photoluminescence emission spectra of the solar cell. With this method it was possible in particular to identify a previously overlooked crystallization stage during the formation of the perovskite absorber layer. This stage has a significant influence on the development of the photocurrent, which is attributed to the formation of electrical pathways between the electron and hole contacts, enabling efficient charge carrier extraction. We observe that, in contrast to previously suggested models, perovskite layer formation is not complete at the end of crystal growth.

  3. A Three-Step Approach to Veterinary Medical Education

    ERIC Educational Resources Information Center

    Kavanaugh, J. F.

    1976-01-01

    A formal education plan with two admission steps is outlined. Animal agriculture and the basic sciences are combined in a two-year middle stage. The medical education (third stage) that specifically addresses pathology and the clinical sciences encompasses three years. (Author/LBH)

  4. Energy efficient engine high-pressure turbine detailed design report

    NASA Technical Reports Server (NTRS)

    Thulin, R. D.; Howe, D. C.; Singer, I. D.

    1982-01-01

    The energy efficient engine high-pressure turbine is a single-stage system based on technology advancements in aerodynamics, structures and materials to achieve high performance, low operating costs and durability commensurate with commercial service requirements. Low-loss performance features combined with a low through-flow velocity approach result in a predicted efficiency of 88.8 percent for a flight propulsion system. Turbine airfoil durability goals are achieved through the use of advanced high-strength and high-temperature-capability single crystal materials and effective cooling management. Overall, this design reflects a considerable extension in turbine technology that is applicable to future, energy-efficient gas-turbine engines.

  5. [Decompensated valvular disease and coarctation. One-stage repair using a median approach with an ascending aorta-abdominal aorta shunt].

    PubMed

    Baille, Y; Sigwalt, M; Vaillant, A; Sicard Desnuelle, M P; Varnet, B

    1981-11-01

    The tactical decision in patients with decompensated valvular disease associated with severe stenosis of the aortic isthmus is always difficult. One-stage surgical repair using two separate approaches is a long and high-risk procedure. It would seem more logical and safer to treat the lesions in two stages a few weeks apart, the most severe lesion being managed first. In the two cases reported, the isthmic stenoses and valvular lesions were of the same severity, which made both classical techniques impracticable. The patients therefore underwent a single-stage procedure by a median approach, combining valve replacement under cardiopulmonary bypass (mitral and tricuspid in one case and aortic in the other) with an ascending aorta-abdominal aorta Dacron conduit. The postoperative survival periods to date are 30 and 9 months. The functional result was good (Class 1 and 0) and postoperative angiography has shown the conduit to be working satisfactorily. This technique is exceptional but may be useful in borderline cases with decompensated valvular disease and severe isthmic stenosis.

  6. The 15 kW sub e (nominal) solar thermal electric power conversion concept definition study: Steam Rankine turbine system

    NASA Technical Reports Server (NTRS)

    Bland, T. J.

    1979-01-01

    A study to define the performance and cost characteristics of a solar-powered steam Rankine turbine system located at the focal point of a solar concentrator is presented. A two-stage re-entry turbine with reheat between stages, which has an efficiency of 27% at a turbine inlet temperature of 732 °C, was used. System efficiency was defined as 60 Hz electrical output divided by absorbed thermal input in the working fluid. Mass production costs were found to be approximately $364/kW.

  7. An adaptive two-stage dose-response design method for establishing proof of concept.

    PubMed

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
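
    A standard way to pool stage-wise evidence in such adaptive designs is the inverse-normal combination with prespecified weights, shown here as an illustration (the paper itself works with a conditional error function, of which the inverse-normal combination is one common instance; the p-values and weight below are hypothetical):

```python
from statistics import NormalDist

def combine_stages(p1, p2, w1=0.5):
    """Inverse-normal combination of independent stage-wise p-values with
    prespecified weights satisfying w1^2 + w2^2 = 1, pooling evidence for
    'global' proof of concept across the two stages of an adaptive design."""
    nd = NormalDist()
    w2 = (1 - w1 ** 2) ** 0.5
    z = w1 * nd.inv_cdf(1 - p1) + w2 * nd.inv_cdf(1 - p2)
    return 1 - nd.cdf(z)     # combined "global" p-value

# Moderate evidence in each stage pools into stronger global evidence
p = combine_stages(0.04, 0.03, w1=0.6)
print(round(p, 4))
```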

  8. Increasing efficiency in production of cloned piglets.

    PubMed

    Callesen, Henrik; Liu, Ying; Pedersen, Hanne S; Li, Rong; Schmidt, Mette

    2014-12-01

    The low efficiency in obtaining piglets after production of cloned embryos was challenged in two steps: first by performing in vitro culture for 5-6 days after cloning to obtain later-stage embryos for more precise selection for transfer, and second by reducing the number of embryos transferred per recipient sow. The data set consisted of combined results from a 4-year period in which cloning was performed to produce piglets transgenic for important human diseases. For this, different transgenes and cell types were used, and the cloning work was performed by several persons using oocytes from different pig breeds, but following a standardized and optimized protocol. Results showed that in vitro culture is possible with a relatively stable rate of transferable embryos of around 41% and a pregnancy rate of around 90%. Furthermore, a reduction from around 80 to 40 embryos transferred per recipient was possible without changing the efficiency of around 14% (piglets born out of embryos transferred). It was concluded that this approach can increase the efficiency of obtaining piglets by means of in vitro culture and selection of high-quality embryos, with subsequent transfer into more recipients. Such changes can also reduce the personnel, time, and materials needed when working with this technology.

  9. Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure

    NASA Astrophysics Data System (ADS)

    Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang

    2018-04-01

    S-velocity and density are very important parameters for distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving the results of inversion in estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method because of its high efficiency, which comes from parallel computing, although its results often suffer from a lack of lateral clarity. This paper describes a two-stage inversion method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. Our proposed method has several advantages: (1) thanks to the exact Zoeppritz equations, our joint inversion method is applicable to wide-angle amplitude-versus-angle inversion; (2) the use of both P- and S-wave information further enhances the stability and accuracy of parameter estimation, especially for S-velocity and density; (3) the two-stage inversion procedure proposed in this paper achieves a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel, so it has high computational efficiency. On the other hand, to deal with the indistinctness of and undesired disturbances to the inversion results obtained from the first stage, we apply total variation (TV) regularization in the second stage. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimation with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared to the first stage because it is solved using fast split Bregman iterations.
Numerical examples using a well log and the Marmousi II model show that the proposed joint inversion is a reliable method capable of accurately estimating the density parameter as well as P-wave velocity and S-wave velocity, even when the seismic data is noisy with signal-to-noise ratio of 5.
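
    The role of the second-stage TV regularization can be illustrated on a 1-D trace. The sketch below minimizes a smoothed TV objective by plain gradient descent rather than split Bregman, purely for brevity, and uses a synthetic blocky model rather than the paper's Marmousi II example:

```python
import numpy as np

def tv_denoise_1d(y, lam, n_iter=1000, eps=1e-2, step=0.05):
    """Denoise a 1-D trace by gradient descent on the smoothed TV objective
    0.5*||x - y||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps).
    A simple stand-in for the split Bregman solver used for TV problems."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)    # derivative of the smoothed |d|
        grad = x - y
        grad[:-1] -= lam * g            # d/dx_i of the penalty: -g_i
        grad[1:] += lam * g             # d/dx_{i+1} of the penalty: +g_i
        x -= step * grad
    return x

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.3], 50)      # blocky "layered" parameter model
noisy = truth + 0.15 * rng.normal(size=truth.size)
den = tv_denoise_1d(noisy, lam=0.5)

# TV regularization recovers the blocky structure: mean absolute error drops
print(np.abs(den - truth).mean() < np.abs(noisy - truth).mean())
```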

  10. Reprint of "Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency".

    PubMed

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-08-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on a Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary drawn from the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, the initial result is improved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to highlight the salient objects uniformly against the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
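
    The Log-Euclidean geometry underlying the kernels can be sketched directly: map each covariance descriptor through the matrix logarithm and compare in the resulting flat space. This illustrates only the metric on synthetic feature statistics, not the two-stage sparse coding scheme itself:

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix via
    eigendecomposition, mapping it into the flat log-Euclidean space."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_dist(A, B):
    """Log-Euclidean distance between two region covariance descriptors:
    the Frobenius norm of the difference of their matrix logarithms."""
    return np.linalg.norm(logm_spd(A) - logm_spd(B), "fro")

def region_covariance(feats):
    """Covariance descriptor of a region; feats is (n_pixels, n_features).
    A small ridge keeps the matrix positive definite."""
    return np.cov(feats, rowvar=False) + 1e-6 * np.eye(feats.shape[1])

rng = np.random.default_rng(0)
bg1 = region_covariance(rng.normal(size=(200, 5)))
bg2 = region_covariance(rng.normal(size=(200, 5)))
salient = region_covariance(rng.normal(size=(200, 5)) * [1, 1, 1, 5, 5])

# A salient region sits farther (in the log-Euclidean sense) from a background
# descriptor than two background descriptors sit from each other
print(log_euclidean_dist(bg1, salient) > log_euclidean_dist(bg1, bg2))
```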

  11. Study of Low Reynolds Number Effects on the Losses in Low-Pressure Turbine Blade Rows

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Dorney, Daniel J.

    1998-01-01

    Experimental data from jet-engine tests have indicated that unsteady blade row interactions and separation can have a significant impact on the efficiency of low-pressure turbine stages. Measured turbine efficiencies at takeoff can be as much as two points higher than those at cruise conditions. Several recent studies have revealed that Reynolds number effects may contribute to the lower efficiencies at cruise conditions. In the current study, numerical experiments have been performed to evaluate the models available for low Reynolds number flows and to quantify the Reynolds number dependence of low-pressure turbine cascades and stages. The predicted aerodynamic results exhibit good agreement with design data.

  12. Line grouping using perceptual saliency and structure prediction for car detection in traffic scenes

    NASA Astrophysics Data System (ADS)

    Denasi, Sandra; Quaglia, Giorgio

    1993-08-01

    Autonomous and guide-assisted vehicles make heavy use of computer vision techniques to perceive the environment in which they move. In this context, the European PROMETHEUS program is carrying out activities to develop autonomous vehicle monitoring that helps drivers achieve safer driving. Car detection is one of the topics addressed by the program. Our contribution develops this task in two stages: the localization of areas of interest and the formulation of object hypotheses. In particular, the present paper proposes a new approach that builds structural descriptions of objects from edge segmentations by using geometrical organization. This approach has been applied to the detection of cars in traffic scenes. We have analyzed images taken from a moving vehicle in order to formulate obstacle hypotheses: preliminary results confirm the efficiency of the method.

  13. A time accurate prediction of the viscous flow in a turbine stage including a rotor in motion

    NASA Astrophysics Data System (ADS)

    Shavalikul, Akamol

    In this current study, the flow field in the Pennsylvania State University Axial Flow Turbine Research Facility (AFTRF) was simulated. This study examined four sets of simulations. The first two sets are for an individual NGV and for an individual rotor. The last two sets use a multiple-reference-frames approach for a complete turbine stage with two different interface models: a steady circumferential-average approach called a mixing plane model, and a time-accurate flow simulation approach called a sliding mesh model. The NGV passage flow field was simulated using a three-dimensional Reynolds-averaged Navier-Stokes (RANS) finite volume solver with a standard k-epsilon turbulence model. The mean flow distributions on the NGV surfaces and endwall surfaces were computed. The numerical solutions indicate that two passage vortices begin to be observed approximately at the mid axial chord of the NGV suction surface. The first vortex is a casing passage vortex which occurs at the corner formed by the NGV suction surface and the casing. This vortex is created by the interaction of the passage flow and the radially inward flow, while the second vortex, the hub passage vortex, is observed near the hub. These two vortices become stronger towards the NGV trailing edge. By comparing the results from the X/Cx = 1.025 plane and the X/Cx = 1.09 plane, it can be concluded that the NGV wake decays rapidly within a short axial distance downstream of the NGV. For the rotor, a set of simulations was carried out to examine the flow fields associated with different pressure-side tip extension configurations, which are designed to reduce the tip leakage flow. The simulation results show that significant reductions in tip leakage mass flow rate and aerodynamic loss are possible by using suitable tip platform extensions located near the pressure-side corner of the blade tip.
The computations used realistic turbine rotor inlet flow conditions in a linear cascade arrangement in the relative frame of reference; the boundary conditions for the computations were obtained from inlet flow measurements performed in the AFTRF. A complete turbine stage, including an NGV row and a rotor row, was simulated using the RANS solver with the SST k-omega turbulence model, with two different computational models for the interface between the rotating component and the stationary component. The first interface model, the circumferentially averaged mixing plane model, was solved for a fixed position of the rotor blades relative to the NGV in the stationary frame of reference. The information transferred between the NGV and rotor domains is obtained by averaging across the entire interface. The quasi-steady-state flow characteristics of the AFTRF can be obtained from this interface model. After the model was validated with the existing experimental data, it was used to investigate not only the flow characteristics in the turbine stage but also the effects of using pressure-side rotor tip extensions. The tip leakage flow fields simulated from this model and from the linear cascade model show similar trends. A more detailed understanding of the unsteady characteristics of a turbine flow field can be obtained using the second type of interface model, the time-accurate sliding mesh model. The potential flow interactions, wake characteristics, their effects on secondary flow formation, and the wake mixing process in a rotor passage were examined using this model. Furthermore, turbine stage efficiency and the effects of tip clearance height on the turbine stage efficiency were also investigated. A comparison between the results from the circumferential-average model and the time-accurate flow model is presented.
It was found that the circumferential average model cannot accurately simulate flow interaction characteristics on the interface plane between the NGV trailing edge and the rotor leading edge. However, the circumferential average model does give accurate flow characteristics in the NGV domain and the rotor domain with less computational time and computer memory requirements. In contrast, the time accurate flow simulation can predict all unsteady flow characteristics occurring in the turbine stage, but with high computational resource requirements. (Abstract shortened by UMI.)

  14. Volt-VAR Optimization on American Electric Power Feeders in Northeast Columbus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Kevin P.; Weaver, T. F.

    2012-05-10

    In 2007 American Electric Power launched the gridSMART® initiative with the goals of increasing the efficiency of the electricity delivery system and improving service to the end-use customers. As part of the initiative, a coordinated Volt-VAR system was deployed on eleven distribution feeders at five substations in the Northeast Columbus, Ohio area. The goal of the coordinated Volt-VAR system was to decrease the amount of energy necessary to provide end-use customers with the same quality of service. The evaluation of the Volt-VAR system performance was conducted in two stages. The first stage was composed of simulation, analysis, and estimation, while the second stage was composed of analyzing collected field data. This panel paper will examine the analysis conducted in both stages and present the estimated improvements in system efficiency.

  16. Development of an unstructured solution adaptive method for the quasi-three-dimensional Euler and Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Jiang, Yi-Tsann; Usab, William J., Jr.

    1993-01-01

    A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.

  17. A Compact Two-Stage 120 W GaN High Power Amplifier for SweepSAR Radar Systems

    NASA Technical Reports Server (NTRS)

    Thrivikraman, Tushar; Horst, Stephen; Price, Douglas; Hoffman, James; Veilleux, Louise

    2014-01-01

    This work presents the design and measured results of a fully integrated, switched-power, two-stage GaN HEMT high-power amplifier (HPA) achieving 60% power-added efficiency at over 120 W output power. This high-efficiency GaN HEMT HPA is an enabling technology for L-band SweepSAR interferometric instruments that enable frequent repeat intervals and high-resolution imagery. The L-band HPA was designed using space-qualified state-of-the-art GaN HEMT technology. The amplifier exhibits over 34 dB of power gain at 51 dBm of output power across an 80 MHz bandwidth. The HPA is divided into two stages, an 8 W driver stage and a 120 W output stage. The amplifier is designed for pulsed operation, with a high-speed DC drain switch operating at the pulse-repetition interval and settling within 200 ns. In addition to the electrical design, a thermally optimized package was designed that allows direct thermal radiation to maintain low junction temperatures for the GaN parts, maximizing long-term reliability. Lastly, real radar waveforms are characterized, and analysis of amplitude and phase stability demonstrates ultra-stable operation over temperature: integrated bias compensation circuitry holds amplitude variation below 0.2 dB and phase variation below 2 deg over a 70 C range.

  18. The Problem of Size in Robust Design

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri

    1997-01-01

    To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single-objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems, however, as in the HSCT example, this robust design approach breaks down with the problem of size (combinatorial explosion in experimentation and model building with the number of variables), and both efficiency and accuracy are sacrificed. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.

  19. Deep Learning and Insomnia: Assisting Clinicians With Their Diagnosis.

    PubMed

    Shahin, Mostafa; Ahmed, Beena; Hamida, Sana Tmar-Ben; Mulaffer, Fathima Lamana; Glos, Martin; Penzel, Thomas

    2017-11-01

    Effective sleep analysis is hampered by the lack of automated tools catering to disordered sleep patterns and by cumbersome monitoring hardware. In this paper, we apply deep learning to a set of 57 EEG features extracted from a maximum of two EEG channels to accurately differentiate between patients with insomnia and controls with no sleep complaints. We investigated two different approaches to achieve this. The first approach used EEG data from the whole sleep recording irrespective of the sleep stage (stage-independent classification), while the second used only EEG data from the specific sleep stages impacted by insomnia (stage-dependent classification). We trained and tested our system using both healthy and disordered sleep collected from 41 controls and 42 primary insomnia patients. When compared with manual assessments, an NREM + REM based classifier had an overall discrimination accuracy of 92% and 86% between the two groups using two and one EEG channels, respectively. These results demonstrate that deep learning can be used to assist in the diagnosis of sleep disorders such as insomnia.
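
    The difference between the two approaches is purely in which epochs reach the classifier, which can be sketched as a data-selection step; the stage labels and the NREM/REM target set here are illustrative assumptions, not the paper's exact choice.

```python
import numpy as np

def select_epochs(features, stages, mode="independent",
                  target=("N2", "N3", "REM")):
    """Assemble the classifier's input under the two strategies.

    features: (n_epochs, n_features) matrix (e.g., 57 EEG features/epoch)
    stages:   length-n_epochs array of scored sleep-stage labels
    'independent' keeps every epoch regardless of stage; 'dependent'
    keeps only epochs from the chosen (insomnia-impacted) stages.
    """
    if mode == "independent":
        return features
    mask = np.isin(stages, target)   # boolean filter on stage labels
    return features[mask]
```

    The same downstream network is then trained on either selection, so any accuracy gap isolates how much stage-specific EEG carries the diagnostic signal.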

  20. Structural changes of green roof growing substrate layer studied by X-ray CT

    NASA Astrophysics Data System (ADS)

    Jelinkova, Vladimira; Sacha, Jan; Dohnal, Michal; Snehota, Michal

    2017-04-01

    Increasing interest in green infrastructure, linked with newly implemented legislation worldwide, opens up research potential for the field of soil hydrology. A better understanding of the function of engineered soils involved in green infrastructure solutions such as green roofs or rain gardens is needed. The soil layer is considered a highly significant component of these systems. In comparison with a natural soil, the engineered soil is assumed to be the more challenging case due to rapid structural changes in the early stages after its build-up. Green infrastructure efficiency depends on the physical and chemical properties of the soil, which are, in the case of engineered soils, a function of its initial composition and subsequent soil formation processes. The project presented in this paper is focused on fundamental processes in a relatively thick layer of engineered soil. The initial structure development, during which the pore geometry is altered by the growth of plant roots, water influx, solid-particle translocation, and other soil formation processes, is investigated with the help of a noninvasive imaging technique, X-ray computed tomography (CT). The soil development has been studied on undisturbed soil samples taken periodically from a green roof test system during the early stages of its life cycle. Two approaches and sample sizes were employed. In the first approach, undisturbed samples (volume of about 63 cm3) were taken each time from the test site and scanned by X-ray CT. In the second approach, samples (volume of about 630 cm3) were permanently installed at the test site and have been repeatedly removed to perform X-ray CT imaging. CT-derived macroporosity profiles reveal significant temporal changes in soil structure. Clogging of pores by fine particles and the development of fissures are the two most significant changes that would affect green roof system efficiency.
This work has been supported by the Ministry of Education, Youth and Sports within National Sustainability Programme I, project number LO1605 and with financial support from the Czech Science Foundation under project number GAČR 17-21011S.

  1. How can we tackle energy efficiency in IoT based smart buildings?

    PubMed

    Moreno, M Victoria; Úbeda, Benito; Skarmeta, Antonio F; Zamora, Miguel A

    2014-05-30

    Nowadays, buildings are increasingly expected to meet higher and more complex performance requirements. Among these requirements, energy efficiency is recognized as an international goal to promote the energy sustainability of the planet. Different approaches have been adopted to address this goal, the most recent relating consumption patterns with human occupancy. In this work, we analyze which parameters should be included in any building energy management system. The goal of this analysis is to help designers select the most relevant parameters for controlling the energy consumption of buildings according to their context, using them as input data for the management system. Following this approach, we select three reference smart buildings with different contexts where our automation platform for energy monitoring is deployed. We carry out experiments in these buildings to demonstrate the influence of the parameters identified as relevant on the energy consumption of the buildings. Then, different control strategies are applied in two of these buildings to save electrical energy. We describe the experiments performed and analyze the results. The first stages of this evaluation have already resulted in energy savings of about 23% in a real scenario.

  3. Theoretical and experimental investigations on the cooling capacity distributions at the stages in the thermally-coupled two-stage Stirling-type pulse tube cryocooler without external precooling

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Dang, Haizheng

    2017-03-01

    The two-stage Stirling-type pulse tube cryocooler (SPTC) can simultaneously provide cooling power at two different temperatures, and the ability to distribute this cooling capacity between the stages is significant to its practical applications. In this paper, a theoretical model of the thermally-coupled two-stage SPTC without external precooling is established based on the electric-circuit analogy, taking real-gas effects into account, and simulations of both the cooling performance and the PV power distribution between stages are conducted. The results indicate that the PV power is inversely proportional to the acoustic impedance of each stage, and that the cooling capacity distribution is determined jointly by the cold finger cooling efficiency and the PV power into each stage. The design methods for the cold fingers to achieve both the desired PV power and the desired cooling capacity distribution between the stages are summarized. The two-stage SPTC is developed and tested based on the above theoretical investigations, and the experimental results show that it can simultaneously achieve 0.69 W at 30 K and 3.1 W at 85 K with an electric input power of 330 W and a reject temperature of 300 K. The simulated and experimental results are consistent, experimentally verifying the theoretical investigations.
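
    The two stated results (PV power inversely proportional to each stage's acoustic impedance, and cooling capacity set jointly by PV power and cold finger efficiency) combine into a short calculation. The impedance and efficiency numbers below are illustrative assumptions, not values from the paper.

```python
def stage_cooling(total_pv_power, impedances, efficiencies):
    """Distribute PV power between stages and convert it to cooling power.

    PV power splits inversely to each stage's acoustic impedance
    (weight 1/Z, normalized); each stage's cooling power is then its
    PV power times its cold finger cooling efficiency.
    """
    weights = [1.0 / z for z in impedances]
    total_weight = sum(weights)
    pv = [total_pv_power * w / total_weight for w in weights]
    return [p * eta for p, eta in zip(pv, efficiencies)]
```

    For example, a stage with four times the impedance receives a quarter of the PV power, so matching a desired cooling split means co-designing impedance and efficiency, which is the design trade-off the abstract summarizes.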

  4. Cryotank Skin/Stringer Bondline Analysis

    NASA Technical Reports Server (NTRS)

    Nguyen, Bao

    1999-01-01

    The need for lightweight structures in advanced launch systems has presented great challenges and led to the use of composite materials in a variety of structural assemblies where joining of two or more components is imperative. Although joints can be mechanically bolted, adhesive bonding has always been a very desirable method for joining composite components, particularly for cryotank systems, to achieve maximum structural efficiency. This paper presents the analytical approach resulting from the conceptual development of the DC-Y composite cryotank, conducted under the NASA/Boeing NRA 8-12 Partnership, to support the continued progress of SSTO (Single-Stage-To-Orbit) concepts. One of the critical areas of design was identified as the bonded interface between the skin (tank wall) and stringer. The approach to analyzing this critical area is illustrated through the steps used to evaluate the structural integrity of the bondline. Detailed finite element models were developed, and numerous coupon test data were also gathered as part of the approach. Future plans are to incorporate this approach as a building block in analyzing bondlines for the cryotank systems of RLVs (Reusable Launch Vehicles).

  5. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    PubMed

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and face expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if inputs are of low quality and have large variations in viewpoint and face expression.

  6. A sequential bioequivalence design with a potential ethical advantage.

    PubMed

    Fuglsang, Anders

    2014-07-01

    This paper introduces a two-stage approach for the evaluation of bioequivalence where, in contrast to the designs of Diane Potvin and co-workers, two stages are mandatory regardless of the data obtained at stage 1. The approach is derived from Potvin's method C. It is shown that under circumstances with relatively high variability and relatively low initial sample size, this method has an advantage over Potvin's approaches in terms of sample sizes while controlling type I error rates at or below 5%, with an occasional minute trade-off in power. Ethically and economically, the method may thus be an attractive alternative to the Potvin designs. It is also shown that when using the method introduced here, average total sample sizes are rather independent of the initial sample size. Finally, it is shown that when a futility rule in terms of sample size for stage 2 is incorporated into this method, i.e., when the second stage can be abandoned due to sample size considerations, there is often an advantage in terms of power or sample size as compared to the previously published methods.
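
    The final pooled test in such a mandatory two-stage design can be sketched as a confidence-interval-inclusion (TOST-equivalent) check on the combined data. The adjusted alpha of 0.0294 is Pocock's value as used in Potvin's designs; the z-approximation in place of the t-distribution, and the use of simple paired log-differences, are simplifying assumptions for illustration.

```python
import math
from statistics import NormalDist

def two_stage_be(stage1_diffs, stage2_diffs, alpha=0.0294,
                 theta=math.log(1.25)):
    """Pooled bioequivalence decision after two mandatory stages.

    diffs are within-subject log(test) - log(reference) values. Data
    from both stages are pooled and tested once: equivalence is declared
    if the (1 - 2*alpha) confidence interval for the mean log-ratio lies
    inside +/- log(1.25), the conventional 80-125% limits.
    """
    data = list(stage1_diffs) + list(stage2_diffs)
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    se = math.sqrt(var / n)
    z = NormalDist().inv_cdf(1 - alpha)          # adjusted critical value
    lo, hi = mean - z * se, mean + z * se
    return -theta < lo and hi < theta
```

    Because the second stage is always run, there is no interim look at the equivalence decision itself, which is what distinguishes this design from the Potvin methods it is compared against.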

  7. Two-stage approach to keyword spotting in handwritten documents

    NASA Astrophysics Data System (ADS)

    Haji, Mehdi; Ameri, Mohammad R.; Bui, Tien D.; Suen, Ching Y.; Ponson, Dominique

    2013-12-01

    Separation of keywords from non-keywords is the main problem in keyword spotting systems, which has traditionally been approached by simplistic methods such as thresholding of recognition scores. In this paper, we analyze this problem from a machine learning perspective, and we study several standard machine learning algorithms specifically in the context of non-keyword rejection. We propose a two-stage approach to keyword spotting and provide a theoretical analysis of the performance of the system, which gives insight into how to design the classifier in order to maximize the overall performance in terms of F-measure.

  8. Contactless efficient two-stage solar concentrator for tubular absorber.

    PubMed

    Benítez, P; García, R; Miñano, J C

    1997-10-01

    The design of a new type of two-mirror solar concentrator for a tubular receiver, the XX concentrator, is presented. The main feature of the XX is that it has a sizable gap between the secondary mirror and the absorber while still achieving concentrations close to the thermodynamic limit with high collection efficiency. This characteristic makes the XX unique and, contrary to current two-stage designs, allows the secondary to be located outside the evacuated tube. One of the XX concentrators presented achieves an average flux concentration within +/-0.73 deg of 91.1% of the thermodynamic limit with a collection efficiency of 96.8% (i.e., 3.2% of the rays incident on the primary mirror within +/-0.73 deg are rejected). Another XX design is 92.5% efficient and receives 95.1% of the maximum concentration. These values are, to our knowledge, the highest reported for practical concentrators. The gap between the absorber and the secondary mirror is 6.8 and 10.5 times the absorber radius for the two concentrators, respectively. Moreover, the rim angle of the primary mirror is 98.8 and 104.4 deg in each case, which is of interest for the collector's mechanical stability.

  9. Cervical cancer prevention in HIV-infected women using the "see and treat" approach in Botswana.

    PubMed

    Ramogola-Masire, Doreen; de Klerk, Ronny; Monare, Barati; Ratshaa, Bakgaki; Friedman, Harvey M; Zetola, Nicola M

    2012-03-01

    Cervical cancer is a major public health problem in resource-limited settings, particularly among HIV-infected women. Given the challenges of cytology-based approaches, the efficiency of new screening programs needs to be assessed. Community and hospital-based clinics in Gaborone, Botswana. To determine the feasibility and efficiency of the "see and treat" approach using visual inspection with acetic acid (VIA) and enhanced digital imaging (EDI) for cervical cancer prevention in HIV-infected women. A two-tier community-based cervical cancer prevention program was implemented. HIV-infected women were screened by nurses in the community using the VIA/EDI approach. Low-grade lesions were treated with cryotherapy on the same visit. Women with complex lesions were referred to our second-tier specialized clinic for evaluation. Weekly quality control assessments were performed by a specialist, in collaboration with the nurses, on all pictures taken. From March 2009 through January 2011, 2175 patients were screened for cervical cancer at our community-based clinic. Two hundred fifty-three patients (11.6%) were found to have low-grade lesions and received same-day cryotherapy. One thousand three hundred forty-seven (61.9%) women were considered to have a normal examination, and 575 (27.3%) were referred for further evaluation and treatment. Of the 1347 women initially considered to have normal exams, 267 (19.8%) were recalled based on weekly quality control assessments. Two hundred ten (78.6%) of the 267 recalled women, and 499 (86.8%) of the 575 referred women, were seen at the referral clinic. Of these 709 women, 506 (71.4%) required additional treatment. Overall, 264 cases of cervical intraepithelial neoplasia stage 2 or 3 were identified and treated, and 6 microinvasive cancers identified were referred for further management.
Our "see and treat" cervical cancer prevention program using the VIA/EDI approach is a feasible, high-output, and high-efficiency program, worth considering as an additional cervical cancer screening method in Botswana, especially for women with limited access to the current cytology-based screening services.

  10. Two-stage anaerobic digestion enables heavy metal removal.

    PubMed

    Selling, Robert; Håkansson, Torbjörn; Björnsson, Lovisa

    2008-01-01

    To fully exploit the environmental benefits of the biogas process, the digestate should be recycled as biofertiliser to agriculture. This practice can, however, be jeopardized by the presence of unwanted compounds such as heavy metals in the digestate. By using two-stage digestion, where the first stage includes hydrolysis/acidification and liquefaction of the substrate, heavy metals can be transferred to the leachate. From the leachate, metals can then be removed by adsorption. In this study, up to 70% of the Ni, 40% of the Zn and 25% of the Cd present in maize was removed when the leachate from hydrolysis was circulated over a macroporous polyacrylamide column for 6 days. For Cu and Pb, the mobilization in the hydrolytic stage was lower, which resulted in low removal. A more efficient two-stage process with improved substrate hydrolysis would give lower pH and/or longer periods with low pH in the hydrolytic stage. This is likely to increase metal mobilisation, and would open up an excellent opportunity for heavy metal removal.

  11. A novel modular ANN architecture for efficient monitoring of gases/odours in real-time

    NASA Astrophysics Data System (ADS)

    Mishra, A.; Rajput, N. S.

    2018-04-01

    Data pre-processing is extensively used for enhanced classification of gases. However, it suppresses the concentration variances of different gas samples. The classical solution of using a single artificial neural network (ANN) architecture is also inefficient and yields degraded quantification. In this paper, a novel modular ANN design has been proposed to provide an efficient and scalable solution in real-time. Here, two separate ANN blocks, viz. a classifier block and a quantifier block, have been used to provide efficient and scalable gas monitoring in real-time. The classifier ANN consists of two stages. In the first stage, the Net 1-NDSRT has been trained to transform raw sensor responses into corresponding virtual multi-sensor responses using the normalized difference sensor response transformation (NDSRT). These responses have been fed to the second stage (i.e., Net 2-classifier). The Net 2-classifier has been trained to classify gas samples into their respective classes. Further, the quantifier block has parallel ANN modules, multiplexed to quantify each gas. Therefore, the classifier ANN decides the class and the quantifier ANN decides the exact quantity of the gas/odour present in the respective sample of that class.
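
    The NDSRT stage can be sketched as follows. The exact transform is not given in the abstract, so the pairwise normalized-difference form below, v_ij = (r_i - r_j) / (r_i + r_j), is an assumption. Its appeal for the classifier is that it is invariant to a common scaling of all sensor responses, which is one way pre-processing "suppresses the concentration variances":

```python
import numpy as np

def ndsrt(responses):
    """Normalized difference sensor response transformation (sketch).

    Maps the raw responses of n sensors to n*(n-1)/2 virtual multi-sensor
    responses v_ij = (r_i - r_j) / (r_i + r_j), one per sensor pair.
    The transform actually used in the paper may differ.
    """
    r = np.asarray(responses, dtype=float)
    n = r.size
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return np.array([(r[i] - r[j]) / (r[i] + r[j]) for i, j in pairs])

# Concentration invariance: scaling all responses by a common factor
# (as a concentration change might, to first order) leaves NDSRT unchanged,
# which is why a separate quantifier block is then needed for concentration.
v1 = ndsrt([2.0, 4.0, 8.0])
v2 = ndsrt([1.0, 2.0, 4.0])   # same pattern at half the response level
```

    In the modular design, the classifier's predicted class would then select one of the parallel per-gas quantifier modules, which operates on the raw (unsuppressed) responses.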

  12. Rules and mechanisms for efficient two-stage learning in neural circuits

    PubMed Central

    Teşileanu, Tiberiu; Ölveczky, Bence; Balasubramanian, Vijay

    2017-01-01

    Trial-and-error learning requires evaluating variable actions and reinforcing successful variants. In songbirds, vocal exploration is induced by LMAN, the output of a basal ganglia-related circuit that also contributes a corrective bias to the vocal output. This bias is gradually consolidated in RA, a motor cortex analogue downstream of LMAN. We develop a new model of such two-stage learning. Using stochastic gradient descent, we derive how the activity in ‘tutor’ circuits (e.g., LMAN) should match plasticity mechanisms in ‘student’ circuits (e.g., RA) to achieve efficient learning. We further describe a reinforcement learning framework through which the tutor can build its teaching signal. We show that mismatches between the tutor signal and the plasticity mechanism can impair learning. Applied to birdsong, our results predict the temporal structure of the corrective bias from LMAN given a plasticity rule in RA. Our framework can be applied predictively to other paired brain areas showing two-stage learning. DOI: http://dx.doi.org/10.7554/eLife.20944.001 PMID:28374674
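
    The central claim, that learning is efficient when the tutor's signal matches the student's plasticity rule, can be illustrated with a toy linear student. This is a minimal sketch of the general idea, not the paper's model of LMAN/RA:

```python
import numpy as np

n_in, n_steps, eta = 10, 5000, 0.02
w_target = np.random.default_rng(0).normal(size=n_in)   # mapping to be learned

def train(tutor_fn, seed=1):
    """Student plasticity rule: dw = eta * tutor_signal * input.
    Learning is efficient when the tutor signal matches this rule by
    carrying the instantaneous output error."""
    rng = np.random.default_rng(seed)
    w = np.zeros(n_in)
    for _ in range(n_steps):
        x = rng.normal(size=n_in)
        err = (w_target - w) @ x          # scalar error in the student's output
        w += eta * tutor_fn(err) * x      # consolidation in the 'student'
    return float(np.linalg.norm(w - w_target))

matched = train(lambda e: e)              # tutor matched to the plasticity rule
mismatched = train(np.sign)               # tutor conveys only the error's sign
```

    With the matched tutor this is plain stochastic gradient descent on the squared output error and the residual error goes to zero; the mismatched (sign-only) tutor still drives learning but stalls at a noise floor, a crude analogue of the impairment the authors describe.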

  13. A complex noise reduction method for improving visualization of SD-OCT skin biomedical images

    NASA Astrophysics Data System (ADS)

    Myakinin, Oleg O.; Zakharov, Valery P.; Bratchenko, Ivan A.; Kornilin, Dmitry V.; Khramov, Alexander G.

    2014-05-01

    In this paper we present an original method for noise reduction to improve the visualization quality of SD-OCT images of skin and tumors. The principal advantages of OCT are high resolution and the possibility of in vivo analysis. We propose a two-stage algorithm: 1) processing of the raw one-dimensional A-scans of SD-OCT, and 2) removal of noise from the resulting B(C)-scans. The general mathematical methods of SD-OCT are unstable: if the noise of the CCD is 1.6% of the dynamic range, the resulting distortions already reach 25-40% of the dynamic range. At the first stage we use resampling of A-scans and simple linear filters to reduce the amount of data and remove the noise of the CCD camera. The efficiency, improved throughput, and preservation of axial resolution of this approach are demonstrated. At the second stage we use effective algorithms based on the Hilbert-Huang Transform for more accurate removal of noise peaks. The effectiveness of the proposed approach for visualization of malignant and benign skin tumors (melanoma, BCC, etc.) and a significant improvement of the SNR level for different methods of noise reduction are demonstrated. We also consider a modification of this method depending on the specific hardware and software features of the OCT setup used. The basic version does not require any hardware modifications of existing equipment. The effectiveness of the proposed method for 3D visualization of tissues can simplify medical diagnosis in oncology.
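
    The first stage (resampling plus linear filtering of A-scans) can be sketched on synthetic data. The depth profile, noise level, and filter widths below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic A-scan: a smooth depth profile plus CCD noise at ~1.6% of range.
depth = np.linspace(0.0, 1.0, 2048)
signal = (np.exp(-((depth - 0.3) / 0.05) ** 2)
          + 0.5 * np.exp(-((depth - 0.6) / 0.1) ** 2))
noisy = signal + rng.normal(0.0, 0.016, depth.size)

def resample(a, factor=2):
    """Block-average resampling: halves the data volume and reduces noise."""
    return a[: a.size // factor * factor].reshape(-1, factor).mean(axis=1)

def smooth(a, width=5):
    """Simple moving-average (linear) filter."""
    kernel = np.ones(width) / width
    return np.convolve(a, kernel, mode="same")

truth = resample(signal)
rms_before = float(np.sqrt(np.mean((resample(noisy) - truth) ** 2)))
denoised = smooth(resample(noisy))
rms_after = float(np.sqrt(np.mean((denoised - truth) ** 2)))
```

    Block averaging alone reduces the noise standard deviation by roughly the square root of the decimation factor; the moving average removes more, at the cost of some axial smoothing that the second (Hilbert-Huang) stage is meant to handle more carefully.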

  14. Bias due to two-stage residual-outcome regression analysis in genetic association studies.

    PubMed

    Demissie, Serkalem; Cupples, L Adrienne

    2011-11-01

    Association studies of risk factors and complex diseases require careful assessment of potential confounding factors. Two-stage regression analysis, sometimes referred to as residual- or adjusted-outcome analysis, has been increasingly used in association studies of single nucleotide polymorphisms (SNPs) and quantitative traits. In this analysis, first, a residual-outcome is calculated from a regression of the outcome variable on covariates, and then the relationship between the adjusted-outcome and the SNP is evaluated by a simple linear regression of the adjusted-outcome on the SNP. In this article, we examine the performance of this two-stage analysis as compared with multiple linear regression (MLR) analysis. Our findings show that when a SNP and a covariate are correlated, the two-stage approach results in a biased genotypic effect and loss of power. Bias is always toward the null and increases with the squared correlation (r²) between the SNP and the covariate. For example, for r² = 0, 0.1, and 0.5, two-stage analysis results in, respectively, 0, 10, and 50% attenuation in the SNP effect. As expected, MLR was always unbiased. Since individual SNPs often show little or no correlation with covariates, a two-stage analysis is expected to perform as well as MLR in many genetic studies; however, it produces considerably different results from MLR and may lead to incorrect conclusions when independent variables are highly correlated. While a useful alternative to MLR when the SNP and covariates are uncorrelated, the two-stage approach has serious limitations. Its use as a simple substitute for MLR should be avoided. © 2011 Wiley Periodicals, Inc.
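
    The attenuation by a factor of (1 - r²) is easy to reproduce by simulation; the effect sizes and sample size below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r2 = 200_000, 0.5                 # sample size; squared SNP-covariate correlation
r = np.sqrt(r2)
b_snp, b_cov = 1.0, 1.0              # true effect sizes

# Standardized SNP surrogate g and covariate c with correlation r.
g = rng.normal(size=n)
c = r * g + np.sqrt(1.0 - r2) * rng.normal(size=n)
y = b_snp * g + b_cov * c + rng.normal(size=n)

def slope(x, t):
    """OLS slope from regressing t on x (both mean-centred)."""
    x, t = x - x.mean(), t - t.mean()
    return float(x @ t / (x @ x))

# Two-stage: residualize the outcome on the covariate, then regress on g.
resid = y - slope(c, y) * c
b_two_stage = slope(g, resid)        # attenuated toward 0 by the factor (1 - r2)

# Multiple linear regression recovers the true SNP effect.
X = np.column_stack([np.ones(n), g, c])
b_mlr = float(np.linalg.lstsq(X, y, rcond=None)[0][1])
```

    With r² = 0.5 the two-stage estimate comes out near 0.5 while MLR recovers the true value of 1.0, matching the 50% attenuation quoted in the abstract.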

  15. Analysis of casing treatment’s impact on the axial compressor model stage characteristics

    NASA Astrophysics Data System (ADS)

    Tribunskaia, K.; Kozhukhov, Y. V.

    2017-08-01

    There are special requirements for the compressors of aircraft engines: they must ensure maximum efficiency over a maximally large stable operating zone. Due to the high pressure ratio, these stages are more susceptible to losses from the radial clearance. One approach to reducing such losses is the application of above-rotor devices (casing treatments). This study considers the impact of such treatments on compressor stage performance. Although a substantial body of research exists on this issue, the results are contradictory: the use of these devices can affect compressor stage performance both positively and negatively. This study was conducted using methods of computational fluid dynamics and was based on the NASA Rotor-37 model stage geometry. Results were obtained through comparison of the characteristics of stages with and without above-rotor devices.

  16. Basalt fiber reinforced porous aggregates-geopolymer based cellular material

    NASA Astrophysics Data System (ADS)

    Luo, Xin; Xu, Jin-Yu; Li, Weimin

    2015-09-01

    Basalt fiber reinforced porous aggregates-geopolymer based cellular material (BFRPGCM) was prepared. The stress-strain curve was obtained, the ideal energy-absorbing efficiency was analyzed, and the application prospects were explored. The results show the following: the fiber reinforced cellular material has successively sized pore structures; the stress-strain curve has two stages: an elastic stage and a yielding plateau stage; the greatest value of the ideal energy-absorbing efficiency of BFRPGCM is 89.11%, which suggests that BFRPGCM has excellent energy-absorbing properties. Thus, it can be seen that BFRPGCM is easy and simple to make, and has high plasticity, low density and excellent energy-absorbing features. BFRPGCM is therefore a promising energy-absorbing material, especially for use in civil defense engineering.
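
    The ideal energy-absorbing efficiency can be computed from a stress-strain curve by numerical integration. The definition and the curve parameters below are common conventions assumed for illustration, not taken from the paper:

```python
import numpy as np

def ideal_energy_absorb_eff(strain, stress):
    """Ideal energy-absorbing efficiency under one common definition
    (assumed here, not necessarily the authors'): energy absorbed up to a
    strain, divided by the energy an ideal absorber would take at the same
    end stress, i.e. sigma_end * eps_end. Trapezoidal integration."""
    absorbed = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain)))
    return absorbed / (float(stress[-1]) * float(strain[-1]))

# Idealized two-stage curve matching the abstract's description: a linear
# elastic rise to a yield strain, then a flat plateau (values assumed).
eps_y, sigma_p = 0.05, 10.0          # yield strain; plateau stress (MPa)
strain = np.linspace(0.0, 0.6, 601)
stress = np.where(strain < eps_y, sigma_p * strain / eps_y, sigma_p)

eff = ideal_energy_absorb_eff(strain, stress)
```

    For such a curve the efficiency approaches 1 as the plateau lengthens, which is consistent with the high value (89.11%) reported for a material with a long yielding plateau.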

  17. End-pumped 300 W continuous-wave ytterbium-doped all-fiber laser with master oscillator multi-stage power amplifiers configuration.

    PubMed

    Yin, Shupeng; Yan, Ping; Gong, Mali

    2008-10-27

    An end-pumped ytterbium-doped all-fiber laser with 300 W of continuous-wave output, based on a master oscillator multi-stage power amplifier configuration, is reported. The monolithic fiber laser system consisted of an oscillator stage and two amplifier stages. The total optical-to-optical efficiency of the monolithic fiber laser was approximately 65%, corresponding to 462 W of pump power coupled into the laser system. We propose a new method of connecting the power amplifier stages, which is crucial for the application of end-pumped combiners in high-power MOPA all-fiber lasers.

  18. THTM: A template matching algorithm based on HOG descriptor and two-stage matching

    NASA Astrophysics Data System (ADS)

    Jiang, Yuanjie; Ruan, Li; Xiao, Limin; Liu, Xi; Yuan, Feng; Wang, Haitao

    2018-04-01

    We propose a novel method for template matching named THTM: a template matching algorithm based on the HOG (histogram of oriented gradients) descriptor and two-stage matching. We rely on the fast construction of HOG and on two-stage matching, which jointly lead to a highly accurate matching approach. THTM gives full attention to HOG and creatively proposes a two-stage matching, while traditional methods match only once. Our contribution is to apply HOG to template matching successfully and to present two-stage matching, which markedly improves the matching accuracy based on the HOG descriptor. We analyze the key features of THTM and compare it to other commonly used alternatives on challenging real-world datasets. Experiments show that our method outperforms the comparison methods.
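
    The coarse-then-fine idea can be sketched as follows. This is a deliberate simplification that uses a gradient-magnitude map in place of a full HOG descriptor, and it is not the paper's exact algorithm:

```python
import numpy as np

def grad_mag(img):
    """Gradient-magnitude map: a crude stand-in for a full HOG descriptor."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def match_two_stage(image, template, stride=4):
    """Two-stage matching sketch: stage 1 scans coarsely with a large
    stride; stage 2 refines around the best coarse hit at single-pixel
    resolution. Lower SSD score on gradient features is better."""
    fi, ft = grad_mag(image), grad_mag(template)
    th, tw = template.shape
    H, W = image.shape

    def score(y, x):
        # Compare interiors only, to avoid boundary artifacts of the
        # template's own gradient computation.
        patch = fi[y:y + th, x:x + tw]
        return float(((patch[1:-1, 1:-1] - ft[1:-1, 1:-1]) ** 2).sum())

    coarse = [(y, x) for y in range(0, H - th + 1, stride)
                     for x in range(0, W - tw + 1, stride)]
    y0, x0 = min(coarse, key=lambda p: score(*p))

    fine = [(y, x)
            for y in range(max(0, y0 - stride), min(H - th, y0 + stride) + 1)
            for x in range(max(0, x0 - stride), min(W - tw, x0 + stride) + 1)]
    return min(fine, key=lambda p: score(*p))

rng = np.random.default_rng(3)
img = rng.normal(size=(64, 64))
tpl = img[16:28, 24:36].copy()       # ground-truth location (16, 24)
loc = match_two_stage(img, tpl)
```

    The coarse pass evaluates only about 1/stride² of the positions an exhaustive scan would, which is where the speed advantage of two-stage matching comes from.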

  19. Improving biomass pyrolysis economics by integrating vapor and liquid phase upgrading

    DOE PAGES

    Iisa, Kristiina; Robichaud, David J.; Watson, Michael J.; ...

    2017-11-24

    Partial deoxygenation of bio-oil by catalytic fast pyrolysis with subsequent coupling and hydrotreating can lead to improved economics and will aid commercial deployment of pyrolytic conversion of biomass technologies. Biomass pyrolysis efficiently depolymerizes and deconstructs solid plant matter into carbonaceous molecules that, upon catalytic upgrading, can be used for fuels and chemicals. Upgrading strategies include catalytic deoxygenation of the vapors before they are condensed (in situ and ex situ catalytic fast pyrolysis), or hydrotreating following condensation of the bio-oil. In general, deoxygenation carbon efficiencies, one of the most important cost drivers, are typically higher for hydrotreating when compared to catalytic fast pyrolysis alone. However, using catalytic fast pyrolysis as the primary conversion step can benefit the entire process chain by: (1) reducing the reactivity of the bio-oil, thereby mitigating issues with aging and transport and eliminating need for multi-stage hydroprocessing configurations; (2) producing a bio-oil that can be fractionated through distillation, which could lead to more efficient use of hydrogen during hydrotreating and facilitate integration in existing petroleum refineries; and (3) allowing for the separation of the aqueous phase. In this perspective, we investigate in detail a combination of these approaches, where some oxygen is removed during catalytic fast pyrolysis and the remainder removed by downstream hydrotreating, accompanied by carbon–carbon coupling reactions in either the vapor or liquid phase to maximize carbon efficiency toward value-driven products (e.g. fuels or chemicals). The economic impact of partial deoxygenation by catalytic fast pyrolysis will be explored in the context of an integrated two-stage process. In conclusion, improving the overall pyrolysis-based biorefinery economics by inclusion of production of high-value co-products will be examined.

  20. Improving biomass pyrolysis economics by integrating vapor and liquid phase upgrading

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iisa, Kristiina; Robichaud, David J.; Watson, Michael J.

    Partial deoxygenation of bio-oil by catalytic fast pyrolysis with subsequent coupling and hydrotreating can lead to improved economics and will aid commercial deployment of pyrolytic conversion of biomass technologies. Biomass pyrolysis efficiently depolymerizes and deconstructs solid plant matter into carbonaceous molecules that, upon catalytic upgrading, can be used for fuels and chemicals. Upgrading strategies include catalytic deoxygenation of the vapors before they are condensed (in situ and ex situ catalytic fast pyrolysis), or hydrotreating following condensation of the bio-oil. In general, deoxygenation carbon efficiencies, one of the most important cost drivers, are typically higher for hydrotreating when compared to catalytic fast pyrolysis alone. However, using catalytic fast pyrolysis as the primary conversion step can benefit the entire process chain by: (1) reducing the reactivity of the bio-oil, thereby mitigating issues with aging and transport and eliminating need for multi-stage hydroprocessing configurations; (2) producing a bio-oil that can be fractionated through distillation, which could lead to more efficient use of hydrogen during hydrotreating and facilitate integration in existing petroleum refineries; and (3) allowing for the separation of the aqueous phase. In this perspective, we investigate in detail a combination of these approaches, where some oxygen is removed during catalytic fast pyrolysis and the remainder removed by downstream hydrotreating, accompanied by carbon–carbon coupling reactions in either the vapor or liquid phase to maximize carbon efficiency toward value-driven products (e.g. fuels or chemicals). The economic impact of partial deoxygenation by catalytic fast pyrolysis will be explored in the context of an integrated two-stage process. In conclusion, improving the overall pyrolysis-based biorefinery economics by inclusion of production of high-value co-products will be examined.

  1. A priori evaluation of two-stage cluster sampling for accuracy assessment of large-area land-cover maps

    USGS Publications Warehouse

    Wickham, J.D.; Stehman, S.V.; Smith, J.H.; Wade, T.G.; Yang, L.

    2004-01-01

    Two-stage cluster sampling reduces the cost of collecting accuracy assessment reference data by constraining sample elements to fall within a limited number of geographic domains (clusters). However, because classification error is typically positively spatially correlated, within-cluster correlation may reduce the precision of the accuracy estimates. The detailed population information needed to quantify a priori the effect of within-cluster correlation on precision is typically unavailable. Consequently, a convenient, practical approach to evaluate the likely performance of a two-stage cluster sample is needed. We describe such an a priori evaluation protocol focusing on the spatial distribution of the sample by land-cover class across different cluster sizes and the costs of different sampling options, including options not imposing clustering. This protocol also assesses the two-stage design's adequacy for estimating the precision of accuracy estimates for rare land-cover classes. We illustrate the approach using two large-area, regional accuracy assessments from the National Land-Cover Data (NLCD), and describe how the a priori evaluation was used as a decision-making tool when implementing the NLCD design.
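
    The cost of within-cluster correlation can be quantified a priori with the standard design-effect formula; this is a textbook tool for such evaluations, not the authors' specific protocol:

```python
def design_effect(m, rho):
    """Variance inflation of cluster sampling relative to a simple random
    sample of the same size: deff = 1 + (m - 1) * rho, with m elements per
    cluster and rho the within-cluster (intracluster) correlation of
    classification error (positive when error is spatially clustered)."""
    return 1.0 + (m - 1) * rho

def effective_sample_size(n_total, m, rho):
    """Sample size a simple random sample would need for equal precision."""
    return n_total / design_effect(m, rho)

# E.g. 1000 reference pixels in clusters of 25 with rho = 0.2 (both values
# assumed for illustration): positive spatial correlation shrinks the
# information content substantially.
deff = design_effect(25, 0.2)
n_eff = effective_sample_size(1000, 25, 0.2)
```

    With these numbers the design effect is 5.8, so the 1000 clustered pixels carry about as much information as roughly 172 simple-random pixels, which is exactly the precision trade-off the a priori protocol is meant to expose before data collection.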

  2. Metabolic profiling of two maize (Zea mays L.) inbred lines inoculated with the nitrogen fixing plant-interacting bacteria Herbaspirillum seropedicae and Azospirillum brasilense

    PubMed Central

    Brusamarello-Santos, Liziane Cristina; Gilard, Françoise; Brulé, Lenaïg; Quilleré, Isabelle; Gourion, Benjamin; Ratet, Pascal; Maltempi de Souza, Emanuel; Lea, Peter J.; Hirel, Bertrand

    2017-01-01

    Maize roots can be colonized by free-living atmospheric nitrogen (N2)-fixing bacteria (diazotrophs). However, the agronomic potential of non-symbiotic N2-fixation in such an economically important species as maize has still not been fully exploited. A preliminary approach to improve our understanding of the mechanisms controlling the establishment of such N2-fixing associations has been developed, using two maize inbred lines exhibiting different physiological characteristics. The bacterial-plant interaction has been characterized by means of a metabolomic approach. Two established model strains of Nif+ diazotrophic bacteria, Herbaspirillum seropedicae and Azospirillum brasilense, and their Nif- counterparts deficient in nitrogenase activity, were used to evaluate the impact of the bacterial inoculation and of N2 fixation on the root and leaf metabolic profiles. The two N2-fixing bacteria were used to inoculate two genetically distant maize lines (FV252 and FV2), already characterized for their contrasting physiological properties. Using a well-controlled gnotobiotic experimental system that allows inoculation of maize plants with the two diazotrophs in an N-free medium, we demonstrated that both maize lines were efficiently colonized by the two bacterial species. We also showed that in the early stages of plant development, both bacterial strains were able to reduce acetylene, suggesting that they contain functional nitrogenase activity and are able to efficiently fix atmospheric N2 (Fix+). The metabolomic approach allowed the identification of metabolites in the two maize lines that were representative of the N2-fixing plant-bacterial interaction; these included mannitol and, to a lesser extent, trehalose and isocitrate. Other metabolites, such as asparagine, although exhibiting only a small increase in maize roots following bacterial infection, were specific to the two Fix+ bacterial strains in comparison to their Fix- counterparts. 
Moreover, a number of metabolites exhibited a maize-genotype-specific pattern of accumulation, suggesting that the highly diverse maize genetic resources could be further exploited in terms of beneficial plant-bacterial interactions for optimizing maize growth with reduced N fertilization inputs. PMID:28362815

  3. Metabolic profiling of two maize (Zea mays L.) inbred lines inoculated with the nitrogen fixing plant-interacting bacteria Herbaspirillum seropedicae and Azospirillum brasilense.

    PubMed

    Brusamarello-Santos, Liziane Cristina; Gilard, Françoise; Brulé, Lenaïg; Quilleré, Isabelle; Gourion, Benjamin; Ratet, Pascal; Maltempi de Souza, Emanuel; Lea, Peter J; Hirel, Bertrand

    2017-01-01

    Maize roots can be colonized by free-living atmospheric nitrogen (N2)-fixing bacteria (diazotrophs). However, the agronomic potential of non-symbiotic N2-fixation in such an economically important species as maize has still not been fully exploited. A preliminary approach to improve our understanding of the mechanisms controlling the establishment of such N2-fixing associations has been developed, using two maize inbred lines exhibiting different physiological characteristics. The bacterial-plant interaction has been characterized by means of a metabolomic approach. Two established model strains of Nif+ diazotrophic bacteria, Herbaspirillum seropedicae and Azospirillum brasilense, and their Nif- counterparts deficient in nitrogenase activity, were used to evaluate the impact of the bacterial inoculation and of N2 fixation on the root and leaf metabolic profiles. The two N2-fixing bacteria were used to inoculate two genetically distant maize lines (FV252 and FV2), already characterized for their contrasting physiological properties. Using a well-controlled gnotobiotic experimental system that allows inoculation of maize plants with the two diazotrophs in an N-free medium, we demonstrated that both maize lines were efficiently colonized by the two bacterial species. We also showed that in the early stages of plant development, both bacterial strains were able to reduce acetylene, suggesting that they contain functional nitrogenase activity and are able to efficiently fix atmospheric N2 (Fix+). The metabolomic approach allowed the identification of metabolites in the two maize lines that were representative of the N2-fixing plant-bacterial interaction; these included mannitol and, to a lesser extent, trehalose and isocitrate. Other metabolites, such as asparagine, although exhibiting only a small increase in maize roots following bacterial infection, were specific to the two Fix+ bacterial strains in comparison to their Fix- counterparts. 
Moreover, a number of metabolites exhibited a maize-genotype-specific pattern of accumulation, suggesting that the highly diverse maize genetic resources could be further exploited in terms of beneficial plant-bacterial interactions for optimizing maize growth with reduced N fertilization inputs.

  4. Two-stage Bayesian model to evaluate the effect of air pollution on chronic respiratory diseases using drug prescriptions.

    PubMed

    Blangiardo, Marta; Finazzi, Francesco; Cameletti, Michela

    2016-08-01

    Exposure to high levels of air pollutant concentration is known to be associated with respiratory problems which can translate into higher morbidity and mortality rates. The link between air pollution and population health has mainly been assessed by considering air quality and hospitalisation or mortality data. However, this approach limits the analysis to individuals with severe conditions. In this paper we evaluate the link between air pollution and respiratory diseases using general practice drug prescriptions for chronic respiratory diseases, which allows conclusions to be drawn about the general population. We propose a two-stage statistical approach: in the first stage we specify a space-time model to estimate the monthly NO2 concentration, integrating several data sources characterised by different spatio-temporal resolutions; in the second stage we link the concentration to the β2-agonists prescribed monthly by general practices in England and model the prescription rates through a small-area approach. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. [Optimal irrigation index for cotton drip irrigation under film mulching based on the evaporation from pan with constant water level].

    PubMed

    Shen, Xiao-Jun; Zhang, Ji-Yang; Sun, Jing-Sheng; Gao, Yang; Li, Ming-Si; Liu, Hao; Yang, Gui-Sen

    2013-11-01

    A field experiment with two irrigation cycles and two irrigating water quotas at the squaring stage and the blossoming-boll forming stage was conducted in Urumqi, Xinjiang Autonomous Region, Northwest China in 2008-2009, aiming to explore a high-efficiency irrigation index for cotton drip irrigation under film mulching. The effects of the different water treatments on the seed yield, water consumption, and water use efficiency (WUE) of cotton were analyzed. In all treatments, there was a high correlation between the cotton water use and the evaporation from a pan installed above the plant canopy. In the high-yield cotton fields (treatment T4, with irrigation cycles of 10 days and 7 days and irrigating water quotas of 30.0 mm and 37.5 mm at the squaring and blossoming-boll forming stages, respectively, in 2008; and treatment T1, with a 7-day irrigation cycle and irrigating water quotas of 22.5 mm and 37.5 mm at the squaring and blossoming-boll forming stages, respectively, in 2009), the pan-crop coefficient (Kp) at the seedling, squaring, blossoming-boll forming, and boll opening stages was 0.29-0.30, 0.52-0.53, 0.74-0.88, and 0.19-0.20, respectively. Compared with the other treatments, T4 had the highest seed cotton yield (5060 kg x hm(-2)) and the highest WUE (1.00 kg x m(-3)) in 2008, whereas T1 had the highest seed cotton yield (4467 kg x hm(-2)) and the highest WUE (0.99 kg x m(-3)) in 2009. The averaged cumulative pan evaporation over 7 days and 10 days at the squaring stage was 40-50 mm and 60-70 mm, respectively, and that over 7 days at the blossoming-boll forming stage was 40-50 mm. 
It was suggested that in the Xinjiang cotton area, 45 mm of water should be applied for seedling emergence, with no irrigation at the seedling and boll opening stages, and irrigation should be started when the pan evaporation reaches 45-65 mm at the squaring stage and 45 mm at the blossoming-boll forming stage. The irrigating water quota can be determined by multiplying the cumulative pan evaporation by Kp (taken as 0.5, 0.75, 0.85, and 0.75 at the squaring, early blossoming, full-blossoming, and late blossoming stages, respectively). This could serve as a high-efficiency irrigation index to obtain high yield and WUE in drip-irrigated cotton fields and to save irrigation water resources.
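
    The recommended rule reduces to a one-line calculation; the Kp values are those quoted in the abstract, and the stage names are informal labels:

```python
# Stage-specific pan coefficients (Kp) quoted in the abstract's recommendation.
KP = {
    "squaring": 0.50,
    "early_blossoming": 0.75,
    "full_blossoming": 0.85,
    "late_blossoming": 0.75,
}

def irrigation_quota_mm(stage, cumulative_pan_evaporation_mm):
    """Irrigating water quota (mm) = cumulative pan evaporation x Kp."""
    return KP[stage] * cumulative_pan_evaporation_mm

# E.g. irrigation triggered at 45 mm of cumulative pan evaporation during
# full blossoming: quota = 0.85 * 45 = 38.25 mm.
quota = irrigation_quota_mm("full_blossoming", 45.0)
```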

  6. Differentially co-expressed interacting protein pairs discriminate samples under distinct stages of HIV type 1 infection.

    PubMed

    Yoon, Dukyong; Kim, Hyosil; Suh-Kim, Haeyoung; Park, Rae Woong; Lee, KiYoung

    2011-01-01

    Microarray analyses based on differentially expressed genes (DEGs) have been widely used to distinguish samples across different cellular conditions. However, studies based on DEGs have not been able to clearly determine significant differences between samples of pathophysiologically similar HIV-1 stages, e.g., between the acute and chronic progressive (or AIDS) stages or between the uninfected and clinically latent stages. We here suggest a novel approach to allow such discrimination based on stage-specific genetic features of HIV-1 infection. Our approach is based on co-expression changes of genes known to interact. The method can identify a genetic signature for a single sample, in contrast to existing protein-protein-interaction-based analyses with correlational designs. Our approach distinguishes each sample using differentially co-expressed interacting protein pairs (DEPs), based on co-expression scores of individual interacting pairs within a sample. The co-expression score has a positive value if two genes in a sample are simultaneously up-regulated or down-regulated, and a higher absolute value if the expression-change ratios of the two genes are similar. We compared the characteristics of DEPs with those of DEGs by evaluating their usefulness in separating HIV-1 stages, and we identified DEP-based network modules and their gene-ontology enrichment to find the HIV-1 stage-specific gene signature. Based on the DEP approach, we observed clear separation among samples from distinct HIV-1 stages using clustering and principal component analyses. Moreover, the discrimination power of DEPs on the samples (70-100% accuracy) was much higher than that of DEGs (35-45%) using several well-known classifiers. DEP-based network analysis also revealed the HIV-1 stage-specific network modules; the main biological processes were related to "translation," "RNA splicing," "mRNA, RNA, and nucleic acid transport," and "DNA metabolism." 
Through the HIV-1 stage-related modules, changing stage-specific patterns of protein interactions could be observed. The DEP-based method discriminated the HIV-1 infection stages clearly and revealed an HIV-1 stage-specific gene signature. The proposed DEP-based method might complement existing DEG-based approaches in various microarray expression analyses.
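
    A score with the stated properties can be written as follows; the formula is one plausible reading of the description, not necessarily the authors' exact definition:

```python
def coexpression_score(fc_a, fc_b):
    """Co-expression score for one interacting gene pair in one sample
    (an assumed formula consistent with the abstract's description).
    fc_a, fc_b are log expression changes of the two genes. The score is
    positive when both genes move in the same direction, and its absolute
    value approaches 1 as the two change magnitudes become similar."""
    if fc_a == 0 or fc_b == 0:
        return 0.0
    direction = 1.0 if (fc_a > 0) == (fc_b > 0) else -1.0
    return direction * min(abs(fc_a), abs(fc_b)) / max(abs(fc_a), abs(fc_b))

# Co-regulated pair with similar fold changes scores near +1; an
# oppositely regulated pair scores negatively.
similar = coexpression_score(2.0, 1.9)
opposed = coexpression_score(2.0, -2.0)
```

    Because the score is defined per pair per sample, it yields a per-sample signature, which is the property that lets DEPs classify single samples rather than only sample groups.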

  7. A Pre-Mixed Shock-Induced-Combustion Approach to Inlet and Combustor Design for Hypersonic Applications

    NASA Technical Reports Server (NTRS)

    Weidner, John P.

    1996-01-01

    The need for efficient access to space has created interest in airbreathing propulsion as a means of achieving that goal. The NASP program explored a single-stage-to-orbit approach which could require scramjet airbreathing propulsion out to Mach 16 to 20. Recent interest in global access could require hypersonic cruise engines operating efficiently in the Mach 10 to 12 speed range. A common requirement of both these types of propulsion systems is that they would have to be fully integrated with the aero configuration so that the forebody becomes a part of the external compression inlet and the nozzle expansion is completed on the vehicle aftbody.

  8. The Supersonic Axial-Flow Compressor

    NASA Technical Reports Server (NTRS)

    Kantrowitz, Arthur

    1950-01-01

    An investigation has been made to explore the possibilities of axial-flow compressors operating with supersonic velocities into the blade rows. Preliminary calculations showed that very high pressure ratios across a stage, together with somewhat increased mass flows, were apparently possible with compressors which decelerated air through the speed of sound in their blading. The first phase of the investigation was the development of efficient supersonic diffusers to decelerate air through the speed of sound. The present report is largely a general discussion of some of the essential aerodynamics of single-stage supersonic axial-flow compressors. As an approach to the study of supersonic compressors, three possible velocity diagrams are discussed briefly. Because of the encouraging results of this study, an experimental single-stage supersonic compressor has been constructed and tested in Freon-12. In this compressor, air decelerates through the speed of sound in the rotor blading and enters the stators at subsonic speeds. A pressure ratio of about 1.8 at an efficiency of about 80 percent has been obtained.

  9. Long-term outcome of cochlear implant in patients with chronic otitis media: one-stage surgery is equivalent to two-stage surgery.

    PubMed

    Jang, Jeong Hun; Park, Min-Hyun; Song, Jae-Jin; Lee, Jun Ho; Oh, Seung Ha; Kim, Chong-Sun; Chang, Sun O

    2015-01-01

    This study compared long-term speech performance after cochlear implantation (CI) between surgical strategies in patients with chronic otitis media (COM). Thirty patients with available open-set sentence scores measured more than 2 yr postoperatively were included: 17 who received one-stage surgeries (One-stage group), and 13 who underwent two-stage surgeries (Two-stage group). Preoperative inflammatory status, intraoperative procedures, and postoperative outcomes were compared. Among the 17 patients in the One-stage group, 12 underwent CI accompanied by the eradication of inflammation; CI without eradicating inflammation was performed in 3 patients; and 2 underwent CI via the transcanal approach. The 13 patients in the Two-stage group received complete eradication of inflammation as the first-stage surgery, and CI was performed as the second-stage surgery after a mean interval of 8.2 months. Additional control of inflammation was performed in 2 patients at the second-stage surgery, for a cavity problem and cholesteatoma, respectively. There were 2 cases of electrode exposure as a postoperative complication in the Two-stage group; new electrode arrays were inserted and covered by local flaps. The open-set sentence scores of the Two-stage group were not significantly higher than those of the One-stage group at 1, 2, 3, and 5 yr postoperatively. Postoperative long-term speech performance is equivalent when either of the two surgical strategies is used to treat appropriately selected candidates.

  10. Differential utilization of ash phloem by emerald ash borer larvae: Ash species and larval stage effects

    Treesearch

    Yigen Chen; Michael D. Ulyshen; Therese M. Poland

    2012-01-01

    Two experiments were performed to determine the extent to which ash species (black, green and white) and larval developmental stage (second, third and fourth instar) affect the efficiency of phloem amino acid utilization by emerald ash borer (EAB) Agrilus planipennis Fairmaire (Coleoptera: Buprestidae) larvae. EAB larvae generally utilized green ash...

  11. Energy efficient solvent regeneration process for carbon dioxide capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Shaojun; Meyer, Howard S.; Li, Shiguang

    A process for removing carbon dioxide from a carbon dioxide-loaded solvent uses two stages of flash apparatus. Carbon dioxide is flashed from the solvent at a higher temperature and pressure in the first stage, and a lower temperature and pressure in the second stage, and is fed to a multi-stage compression train for high pressure liquefaction. Because some of the carbon dioxide fed to the compression train is already under pressure, less energy is required to further compress the carbon dioxide to a liquid state, compared to conventional processes.
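    The energy saving described above follows from the logarithmic dependence of compression work on pressure ratio. A minimal sketch using ideal-gas isothermal compression work per mole, W = RT ln(p_out/p_in); the temperature and pressures below are illustrative assumptions, not values from the record:

```python
from math import log

R = 8.314  # J/(mol*K), universal gas constant

def isothermal_compression_work(p_in, p_out, temperature):
    """Ideal-gas isothermal (steady-flow) compression work per mole, J/mol."""
    return R * temperature * log(p_out / p_in)

T = 313.0        # K, assumed compressor inlet temperature
p_target = 80e5  # Pa, assumed liquefaction pressure

# CO2 flashed near atmospheric pressure (conventional regeneration)
w_low = isothermal_compression_work(1e5, p_target, T)
# CO2 flashed at elevated pressure in the first stage (assumed 5 bar)
w_high = isothermal_compression_work(5e5, p_target, T)

savings = 1 - w_high / w_low
print(f"work from 1 bar: {w_low / 1000:.1f} kJ/mol")
print(f"work from 5 bar: {w_high / 1000:.1f} kJ/mol")
print(f"compression energy saved on that fraction: {savings:.0%}")
```

    Because the work scales with ln(p_out/p_in), the fraction of CO2 delivered to the compression train at a few bar needs roughly a third less compression energy than CO2 recovered near atmospheric pressure.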

  12. Theorizing E-Learning Participation: A Study of the HRD Online Communities in the USA

    ERIC Educational Resources Information Center

    Wang, Greg G.

    2010-01-01

    Purpose: This study sets out to investigate the e-learning participation and completion phenomenon in the US corporate HRD online communities and to explore determinants of e-learning completion. Design/methodology/approach: Based on the HRD Learning Participation Theory (LPT), this study takes a two-stage approach. Stage one adopts an interview…

  13. Modern prodrug design for targeted oral drug delivery.

    PubMed

    Dahan, Arik; Zimmermann, Ellen M; Ben-Shabat, Shimon

    2014-10-14

    The molecular information that became available over the past two decades has significantly influenced the field of drug design and delivery at large, and the prodrug approach in particular. While the traditional prodrug approach aimed to alter various physicochemical parameters, e.g., lipophilicity and charge state, the modern approach to prodrug design considers molecular/cellular factors, e.g., membrane influx/efflux transporters and cellular protein expression and distribution. This novel targeted-prodrug approach aims to exploit carrier-mediated transport for enhanced intestinal permeability, as well as specific enzymes to promote activation of the prodrug and liberation of the free parent drug. The purpose of this article is to provide a concise overview of this modern prodrug approach, with successful examples of its use. The prodrug approach was once viewed as a last-resort strategy, considered only after all other possible solutions were exhausted; today this is no longer the case, and the prodrug approach should be considered from the very earliest development stages. Indeed, the prodrug approach is becoming increasingly popular and successful. A mechanistic prodrug design that aims to achieve intestinal permeability via specific transporters, as well as activation by specific enzymes, may greatly improve prodrug efficiency and allow for novel oral treatment options.

  14. Evaluation of a novel personal nanoparticle sampler.

    PubMed

    Zhou, Yue; Irshad, Hammad; Tsai, Chuen-Jinn; Hung, Shao-Ming; Cheng, Yung-Sung

    2014-02-01

    This work investigated the collection efficiency and aspiration efficiency of a personal sampler capable of collecting ultrafine particles (nanoparticles) in the occupational environment. This sampler consists of a cyclone for respirable particle classification, micro-orifice impactor stages with an acceleration nozzle to achieve nanoparticle classification, and a backup filter to collect nanoparticles. Collection efficiencies of the cyclone and impactor stages were determined using monodisperse polystyrene latex and silver particles, respectively. Calibration of the cyclone and impactor stages showed 50% cut-off diameters of 3.95 μm and 94.7 nm, respectively, meeting the design requirements. Aspiration efficiencies of the sampler were tested in a wind tunnel at wind speeds of 0.5, 1.0, and 1.5 m s⁻¹. The test samplers were mounted on a full-size mannequin at three orientations to the wind direction (0°, 90°, and 180°). Monodisperse oleic acid aerosols tagged with sodium fluorescein in the size range of 2 to 10 μm were used in the test. For particles smaller than 2 μm, fluorescent polystyrene latex particles were generated using nebulizers. For comparison of the aspiration efficiency, a NIOSH two-stage personal bioaerosol sampler was also tested. Results showed that the orientation-averaged aspiration efficiency of both samplers was close to the inhalable fraction curve. However, the wind direction strongly affected the aspiration efficiency. The results also showed that the aspiration efficiency was not affected by the ratio of free-stream velocity to the velocity through the sampler orifice. Our evaluation showed that the current design of the personal sampler met the design criteria for collecting nanoparticles ≤100 nm in occupational environments.

  15. Method for driving two-phase turbines with enhanced efficiency

    NASA Technical Reports Server (NTRS)

    Elliott, D. G. (Inventor)

    1985-01-01

    A method for driving a two-phase turbine characterized by an output shaft having at least one stage including a bladed rotor connected in driving relation with the shaft is described. A two-phase fluid is introduced into one stage at a known flow velocity and caused to pass through the rotor for imparting angular velocity thereto. The angular velocity of the rotor is maintained at a value such that the velocity of the tips of the rotor blades is equal to at least 50% of the flow velocity of the two-phase fluid.
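    The 50% criterion above fixes a minimum blade-tip speed, and hence a minimum rotor angular velocity once the rotor radius is known. A minimal sketch with hypothetical numbers (the flow velocity and rotor radius are illustrative, not from the patent):

```python
def min_tip_speed(flow_velocity, fraction=0.5):
    """Minimum blade-tip speed per the described method:
    at least 50% of the two-phase flow velocity (m/s)."""
    return fraction * flow_velocity

def min_angular_velocity(tip_speed, rotor_radius):
    """Angular velocity (rad/s) that yields the given tip speed."""
    return tip_speed / rotor_radius

v_flow = 200.0   # m/s, hypothetical nozzle exit flow velocity
radius = 0.15    # m, hypothetical rotor radius

tip = min_tip_speed(v_flow)
omega = min_angular_velocity(tip, radius)
print(f"tip speed >= {tip:.0f} m/s, angular velocity >= {omega:.0f} rad/s")
```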

  16. Engineering and Two-Stage Evolution of a Lignocellulosic Hydrolysate-Tolerant Saccharomyces cerevisiae Strain for Anaerobic Fermentation of Xylose from AFEX Pretreated Corn Stover

    PubMed Central

    Parreiras, Lucas S.; Breuer, Rebecca J.; Avanasi Narasimhan, Ragothaman; Higbee, Alan J.; La Reau, Alex; Tremaine, Mary; Qin, Li; Willis, Laura B.; Bice, Benjamin D.; Bonfert, Brandi L.; Pinhancos, Rebeca C.; Balloon, Allison J.; Uppugundla, Nirmal; Liu, Tongjun; Li, Chenlin; Tanjore, Deepti; Ong, Irene M.; Li, Haibo; Pohlmann, Edward L.; Serate, Jose; Withers, Sydnor T.; Simmons, Blake A.; Hodge, David B.; Westphall, Michael S.; Coon, Joshua J.; Dale, Bruce E.; Balan, Venkatesh; Keating, David H.; Zhang, Yaoping; Landick, Robert; Gasch, Audrey P.; Sato, Trey K.

    2014-01-01

    The inability of the yeast Saccharomyces cerevisiae to ferment xylose effectively under anaerobic conditions is a major barrier to economical production of lignocellulosic biofuels. Although genetic approaches have enabled engineering of S. cerevisiae to convert xylose efficiently into ethanol in defined lab medium, few strains are able to ferment xylose from lignocellulosic hydrolysates in the absence of oxygen. This limited xylose conversion is believed to result from small molecules generated during biomass pretreatment and hydrolysis, which induce cellular stress and impair metabolism. Here, we describe the development of a xylose-fermenting S. cerevisiae strain with tolerance to a range of pretreated and hydrolyzed lignocellulose, including Ammonia Fiber Expansion (AFEX)-pretreated corn stover hydrolysate (ACSH). We genetically engineered a hydrolysate-resistant yeast strain with bacterial xylose isomerase and then applied two separate stages of aerobic and anaerobic directed evolution. The emergent S. cerevisiae strain rapidly converted xylose from lab medium and ACSH to ethanol under strict anaerobic conditions. Metabolomic, genetic and biochemical analyses suggested that a missense mutation in GRE3, which was acquired during the anaerobic evolution, contributed toward improved xylose conversion by reducing intracellular production of xylitol, an inhibitor of xylose isomerase. These results validate our combinatorial approach, which utilized phenotypic strain selection, rational engineering and directed evolution for the generation of a robust S. cerevisiae strain with the ability to ferment xylose anaerobically from ACSH. PMID:25222864

  17. Ultrasonic Linear Motor with Two Independent Vibrations

    NASA Astrophysics Data System (ADS)

    Muneishi, Takeshi; Tomikawa, Yoshiro

    2004-09-01

    We propose a new structure of an ultrasonic linear motor to solve the problems of high-power ultrasonic linear motors that drive the XY-stage of electron beam equipment and to expand the application fields of the motor. We pay special attention to the following three points: (1) the vibrations in the two directions of the ultrasonic linear motor should not mutually influence each other, (2) the vibration in the two directions should be divided into the stage traveling direction and the pressing direction of the ultrasonic linear motor, and (3) the rigidity of the stage traveling direction of the ultrasonic linear motor should be increased. As a result, the supporting method of ultrasonic linear motors is simplified, the efficiency of the motor is improved, temperature rise is reduced, and stage position drift is also improved.

  18. Combined Simulated Annealing and Genetic Algorithm Approach to Bus Network Design

    NASA Astrophysics Data System (ADS)

    Liu, Li; Olszewski, Piotr; Goh, Pong-Chai

    A new method combining simulated annealing (SA) and a genetic algorithm (GA) is proposed to solve the problem of bus route design and frequency setting for a given road network with fixed bus stop locations and fixed travel demand. The method involves two steps: a set of candidate routes is generated first, and then the best subset of these routes is selected by the combined SA and GA procedure. SA is the main process, searching for a better solution that minimizes the total system cost, comprising user and operator costs. GA is used as a sub-process to generate new solutions. Bus demand assignment on two alternative paths is performed at the solution evaluation stage. The method was implemented on four theoretical grid networks of different sizes and a benchmark network. Several GA operators (crossover and mutation) were utilized and tested for their effectiveness. The results show that the proposed method can efficiently converge to the optimal solution on a small network, but computation time increases significantly with network size. The method can also be used for other transport operation management problems.
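    The two-step structure (generate candidate routes, then select the best subset with SA as the main loop and a GA operator generating neighbors) can be sketched on a toy subset-selection problem. The cost function below is an illustrative stand-in for the paper's user-plus-operator cost model, not its actual formulation:

```python
import math
import random

random.seed(42)

# Toy stand-in for the paper's setting: pick a subset of candidate bus routes
# (a bit vector) minimizing a combined cost. The terms are illustrative
# proxies for user cost (coverage penalty) and operator cost (route expense).
N_ROUTES = 12

def total_cost(subset):
    operator = sum(i + 1 for i, used in enumerate(subset) if used)
    uncovered_penalty = (N_ROUTES - sum(subset)) * 10
    return operator + uncovered_penalty

def ga_neighbor(a, b):
    """GA sub-process: one-point crossover of two solutions, then a bit-flip
    mutation, producing the new candidate solution for SA to evaluate."""
    point = random.randrange(1, N_ROUTES)
    child = a[:point] + b[point:]
    child[random.randrange(N_ROUTES)] ^= 1
    return child

def sa_ga(iterations=2000, temp=10.0, cooling=0.995):
    """SA main process: always accept improvements, accept worse moves with
    probability exp(-delta/temp), while the temperature cools geometrically."""
    current = [random.randint(0, 1) for _ in range(N_ROUTES)]
    best = current[:]
    for _ in range(iterations):
        candidate = ga_neighbor(current, best)
        delta = total_cost(candidate) - total_cost(current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if total_cost(current) < total_cost(best):
                best = current[:]
        temp *= cooling
    return best, total_cost(best)

solution, cost = sa_ga()
print("best subset:", solution, "cost:", cost)
```

    Crossing the current solution with the best-so-far mirrors the paper's division of labor: GA supplies diversity in new solutions, while the SA acceptance rule and cooling schedule steer convergence.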

  19. Validating Pseudo-dynamic Source Models against Observed Ground Motion Data at the SCEC Broadband Platform, Ver 16.5

    NASA Astrophysics Data System (ADS)

    Song, S. G.

    2016-12-01

    Simulation-based ground motion prediction approaches have several benefits over empirical ground motion prediction equations (GMPEs). For instance, full three-component waveforms can be produced, and site-specific hazard analysis is also possible. However, it is important to validate them against observed ground motion data to confirm their efficiency and validity before practical use. Community efforts toward these goals are supported by the Broadband Platform (BBP) project at the Southern California Earthquake Center (SCEC). In simulation-based ground motion prediction approaches, preparing a possible range of scenario rupture models is a critical element. I developed a pseudo-dynamic source model for Mw 6.5-7.0 by analyzing a number of dynamic rupture models, based on 1-point and 2-point statistics of earthquake source parameters (Song et al. 2014; Song 2016). In this study, the developed pseudo-dynamic source models were tested against observed ground motion data at the SCEC BBP, Ver 16.5. The validation was performed in two stages. In the first stage, simulated ground motions were validated against observed ground motion data for past events such as the 1992 Landers and 1994 Northridge, California, earthquakes. In the second stage, they were validated against the latest version of the empirical GMPEs, i.e., NGA-West2. The validation results show that the simulated ground motions produce ground motion intensities compatible with observed ground motion data at both stages. The compatibility of the pseudo-dynamic source models with the omega-square spectral decay and the standard deviation of the simulated ground motion intensities are also discussed in the study.

  20. Numerical Simulation of Energy Conversion Mechanism in Electric Explosion

    NASA Astrophysics Data System (ADS)

    Wanjun, Wang; Junjun, Lv; Mingshui, Zhu; Qiubo, Fu; EFIs Integration R&D Group Team

    2017-06-01

    Electric explosion occurs when a micron-scale metal film, such as a copper film, is stimulated by a short-duration current pulse, generating a high-temperature, high-pressure plasma. The expansion process of the plasma plays an important role in the study of shock wave generation and of the equation of state (EOS) of matter under high pressure. In this paper, the electric explosion process is divided into two stages, the energy deposition stage and the quasi-isentropic expansion stage, and a dynamic EOS of the plasma that accounts for energy replenishment is established. On this basis, a flyer driven by the plasma is studied numerically; the pressure and internal energy of the plasma in the energy deposition stage and the quasi-isentropic expansion stage are obtained by comparing the velocity history of the flyer with experimental results. An energy conversion model is established, the energy conversion efficiency of each process is obtained, and the influence of the impedance matching relationship between the flyer and the metal plasma on the energy conversion efficiency is discussed.

  1. Multi-staged repair of contaminated primary and recurrent giant incisional herniae in the same hospital admission: a proposal for a new approach.

    PubMed

    Siddique, K; Shrestha, A; Basu, S

    2014-02-01

    Repair of primary and recurrent giant incisional herniae is extremely challenging, and more so in the face of surgical field contamination. The literature supports both single- and multi-staged approaches, including the use of biological meshes, for these difficult patients, with their associated benefits and limitations. This is a retrospective analysis of a prospective study of five patients who were successfully treated through a multi-staged approach, not previously described, performed within the same hospital admission, for the repair of contaminated primary and recurrent giant incisional herniae in a district general hospital between 2009 and 2012. Patient demographics, including BMI and ASA grade, previous and current operative history including complications, and follow-up were collected in a secure database. The first stage involved the eradication of contamination, and the second stage was the definitive hernia repair with new-generation coated synthetic meshes. Of the five patients, three were men and two women, with a mean age of 58 (45-74) years. Two patients had grade 4 hernia and the remaining three had grade 3 hernia as per the hernia grading system, with a mean BMI of 35 (30-46). All patients required extensive adhesiolysis, bowel resection and anastomoses, and wash out. The hernial defect measured 204* (105-440) cm², the size of the mesh implant was 568* (375-930) cm², and the total duration of operation (first + second stage) was 354* (270-540) min. Duration of hospital stay was 11* (7-19) days, with a follow-up of 17* (6-36) months. We believe that our multi-staged approach in the same hospital admission for the repair of contaminated primary and recurrent giant incisional herniae excludes the disadvantages of a true multi-staged approach while minimising the risks and complications associated with a single-staged repair, and can be adopted for these challenging patients for a successful outcome (* indicates mean).

  2. Adaptive linear rank tests for eQTL studies

    PubMed Central

    Szymczak, Silke; Scheinhardt, Markus O.; Zeller, Tanja; Wild, Philipp S.; Blankenberg, Stefan; Ziegler, Andreas

    2013-01-01

    Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal–Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. PMID:22933317
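    The two-stage selection idea (estimate skewness and tail length, then choose a linear rank test) can be sketched as follows. The selector thresholds and score families are illustrative, in the spirit of Hogg-type adaptive schemes, and are not the exact rules compared in the paper; the Kruskal-Wallis test stands in here for whichever linear rank test is selected:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def select_rank_test(values):
    """Two-stage selector (illustrative, Hogg-style; not the exact rules of
    the paper): estimate skewness and tail weight, then pick a score family."""
    skew = stats.skew(values)
    kurt = stats.kurtosis(values)  # excess kurtosis as a tail-length proxy
    if abs(skew) > 1.0:
        return "median-scores"     # robust against strong skew
    if kurt > 2.0:
        return "wilcoxon-scores"   # heavy tails, moderate skew
    return "normal-scores"         # near-normal shape

# Simulated eQTL-like data: expression values for three genotype groups
groups = [rng.lognormal(mean=m, sigma=0.8, size=50) for m in (0.0, 0.2, 0.4)]
pooled = np.concatenate(groups)

chosen = select_rank_test(pooled)
# Stage two would apply the selected linear rank test; the Kruskal-Wallis
# test serves here as the rank-based omnibus stand-in.
stat, p = stats.kruskal(*groups)
print(f"selected score family: {chosen}; KW statistic={stat:.2f}, p={p:.3g}")
```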

  3. Adaptive linear rank tests for eQTL studies.

    PubMed

    Szymczak, Silke; Scheinhardt, Markus O; Zeller, Tanja; Wild, Philipp S; Blankenberg, Stefan; Ziegler, Andreas

    2013-02-10

    Expression quantitative trait loci (eQTL) studies are performed to identify single-nucleotide polymorphisms that modify average expression values of genes, proteins, or metabolites, depending on the genotype. As expression values are often not normally distributed, statistical methods for eQTL studies should be valid and powerful in these situations. Adaptive tests are promising alternatives to standard approaches, such as the analysis of variance or the Kruskal-Wallis test. In a two-stage procedure, skewness and tail length of the distributions are estimated and used to select one of several linear rank tests. In this study, we compare two adaptive tests that were proposed in the literature using extensive Monte Carlo simulations of a wide range of different symmetric and skewed distributions. We derive a new adaptive test that combines the advantages of both literature-based approaches. The new test does not require the user to specify a distribution. It is slightly less powerful than the locally most powerful rank test for the correct distribution and at least as powerful as the maximin efficiency robust rank test. We illustrate the application of all tests using two examples from different eQTL studies. Copyright © 2012 John Wiley & Sons, Ltd.

  4. A Study of the Potential to Detect Caries Lesions at the White-Spot Stage Using V(Z) Technique

    NASA Astrophysics Data System (ADS)

    Bakulin, E. Y.; Denisova, L. A.; Maev, R. Gr.

    Current widespread non-destructive methods of caries diagnostics, such as X-ray techniques, cannot efficiently detect enamel caries lesions at the beginning ("white-spot") stage, when the tooth tissue is only slightly altered and no loss of tissue has occurred. It is therefore of paramount importance to develop new, more sensitive methods of caries diagnostics. In this paper, certain aspects of the ultrasonic approach to the problem are discussed, in particular detection of surface enamel caries at the white-spot stage with a focused ultrasonic sensor positioned in front of the caries lesion (without cross-sectioning the tooth). A theoretical model using the V(z) approach for layered media was applied to perform computer simulations, resulting in V(z) curves for different parameters of carious tissue and degrees of degradation. The curves were analyzed, and it was shown that, compared to a short-pulse/echo technique, the V(z) approach provides much better distinction between sound and carious enamel and even makes it possible to evaluate the degree of demineralization.

  5. Residential Two-Stage Gas Furnaces - Do They Save Energy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lekov, Alex; Franco, Victor; Lutz, James

    2006-05-12

    Residential two-stage gas furnaces account for almost a quarter of the total number of models listed in the March 2005 GAMA directory of equipment certified for sale in the United States. Two-stage furnaces are expanding their presence in the market mostly because they meet consumer expectations for improved comfort. Currently, the U.S. Department of Energy (DOE) test procedure serves as the method for reporting furnace total fuel and electricity consumption under laboratory conditions. In 2006, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) proposed an update to its test procedure which corrects some of the discrepancies found in the DOE test procedure and provides an improved methodology for calculating the energy consumption of two-stage furnaces. The objectives of this paper are to explore the differences in the methods for calculating two-stage residential gas furnace energy consumption in the DOE test procedure and in the 2006 ASHRAE test procedure, and to compare test results to research results from field tests. Overall, the DOE test procedure shows a reduction in total site energy consumption of about 3 percent for two-stage compared to single-stage furnaces at the same efficiency level. In contrast, the 2006 ASHRAE test procedure shows almost no difference in total site energy consumption. The 2006 ASHRAE test procedure appears to provide a better methodology for calculating the energy consumption of two-stage furnaces. The results indicate that, although two-stage technology by itself does not save site energy, the combination of two-stage furnaces with BPM motors provides electricity savings, which are confirmed by field studies.

  6. Lunar Entry Downmode Options for Orion

    NASA Technical Reports Server (NTRS)

    Smith, Kelly; Rea, Jeremy

    2016-01-01

    Traditional ballistic entry does not scale well to higher-energy entry trajectories. The Clutch algorithm is a two-stage approach with a capture stage and a load-relief stage. Clutch may offer expansion of the operational entry corridor and is a candidate solution for Exploration Mission-2's degraded entry mode.

  7. Treatment of natural rubber processing wastewater using a combination system of a two-stage up-flow anaerobic sludge blanket and down-flow hanging sponge system.

    PubMed

    Tanikawa, D; Syutsubo, K; Hatamoto, M; Fukuda, M; Takahashi, M; Choeisai, P K; Yamaguchi, T

    2016-01-01

    A pilot-scale experiment of natural rubber processing wastewater treatment was conducted using a combination system consisting of a two-stage up-flow anaerobic sludge blanket (UASB) and a down-flow hanging sponge (DHS) reactor for more than 10 months. The system achieved a chemical oxygen demand (COD) removal efficiency of 95.7% ± 1.3% at an organic loading rate of 0.8 kg COD/(m³·d). Bacterial activity measurement of retained sludge from the UASB showed that sulfate-reducing bacteria (SRB), especially hydrogen-utilizing SRB, possessed high activity compared with methane-producing bacteria (MPB). Conversely, the acetate-utilizing activity of MPB was superior to SRB in the second stage of the reactor. The two-stage UASB-DHS system can reduce power consumption by 95% and excess sludge by 98%. In addition, it is possible to prevent emissions of greenhouse gases (GHG), such as methane, using this system. Furthermore, recovered methane from the two-stage UASB can completely cover the electricity needs for the operation of the two-stage UASB-DHS system, accounting for approximately 15% of the electricity used in the natural rubber manufacturing process.

  8. Chromium(VI) removal from aqueous solutions through powdered activated carbon countercurrent two-stage adsorption.

    PubMed

    Wang, Wenqiang

    2018-01-01

    To exploit the adsorption capacity of commercial powdered activated carbon (PAC) and to improve the efficiency of Cr(VI) removal from aqueous solutions, the adsorption of Cr(VI) by commercial PAC and the countercurrent two-stage adsorption (CTA) process was investigated. Different adsorption kinetics models and isotherms were compared; the pseudo-second-order model and the Langmuir and Freundlich models fit the experimental data well. The Cr(VI) removal efficiency was >80% and was improved by 37% through the CTA process compared with the conventional single-stage adsorption process when the initial Cr(VI) concentration was 50 mg/L with a PAC dose of 1.250 g/L and a pH of 3. A method for calculating the effluent Cr(VI) concentration and the PAC dose was developed for the CTA process, and the validity of the method was confirmed by a deviation of <5%. Copyright © 2017. Published by Elsevier Ltd.
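    The stage-wise calculation behind such a method can be sketched by combining the per-stage mass balance C0 − C = dose · q(C) with the Langmuir isotherm and solving for the effluent concentration. The parameters below are assumed for illustration (not the paper's fitted values), and the sequential split-dose scheme shown is a simplification of true countercurrent contacting:

```python
# Sketch of a per-stage adsorption mass balance with a Langmuir isotherm.
# Parameters are illustrative assumptions, not the paper's fitted values,
# and the split-dose sequence simplifies true countercurrent contacting.

def langmuir_q(c, q_max, b):
    """Equilibrium uptake q(C) = q_max*b*C / (1 + b*C), mg per g PAC."""
    return q_max * b * c / (1.0 + b * c)

def stage_effluent(c0, dose, q_max, b, tol=1e-9):
    """Solve the stage mass balance C0 - C = dose * q(C) for C by bisection:
    the left side falls with C while the right side rises."""
    lo, hi = 0.0, c0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if c0 - mid > dose * langmuir_q(mid, q_max, b):
            lo = mid   # supply still exceeds uptake: effluent lies above mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Q_MAX, B = 60.0, 0.08   # assumed Langmuir parameters (mg/g and L/mg)
C0, DOSE = 50.0, 1.25   # initial Cr(VI), mg/L; total PAC dose, g/L

single = stage_effluent(C0, DOSE, Q_MAX, B)
first = stage_effluent(C0, DOSE / 2, Q_MAX, B)     # same total dose, split
two_stage = stage_effluent(first, DOSE / 2, Q_MAX, B)
print(f"single-stage effluent: {single:.2f} mg/L")
print(f"two-stage effluent:    {two_stage:.2f} mg/L")
```

    With these assumed parameters the same total PAC dose removes noticeably more Cr(VI) when split over two stages, the effect the CTA process exploits.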

  9. Hospital management contracts: institutional and community perspectives.

    PubMed Central

    Wheeler, J R; Zuckerman, H S

    1984-01-01

    Previous studies have shown that external management by contract can improve the performance of managed hospitals. This article presents a conceptual framework which develops specific hypotheses concerning improved hospital operating efficiency, increased ability to meet hospital objectives, and increased ability to meet community objectives. Next, changes in the process and structure of management under contractual arrangements, based on observations from two not-for-profit hospital systems, are described. Finally, the effects of these management changes over time on hospital and community objectives are presented. These effects suggest progressive stages in the development of management contracts. The first stage focuses on stabilizing hospital financial performance. Stage two involves recruitment and retention efforts to secure necessary personnel. In the third stage, attention shifts to strategic planning and marketing. PMID:6490378

  10. Stage-structured matrix models for organisms with non-geometric development times

    Treesearch

    Andrew Birt; Richard M. Feldman; David M. Cairns; Robert N. Coulson; Maria Tchakerian; Weimin Xi; James M. Guldin

    2009-01-01

    Matrix models have been used to model population growth of organisms for many decades. They are popular because of both their conceptual simplicity and their computational efficiency. For some types of organisms they are relatively accurate in predicting population growth; however, for others the matrix approach does not adequately model...

  11. A C-band 55% PAE high gain two-stage power amplifier based on AlGaN/GaN HEMT

    NASA Astrophysics Data System (ADS)

    Zheng, Jia-Xin; Ma, Xiao-Hua; Lu, Yang; Zhao, Bo-Chao; Zhang, Hong-He; Zhang, Meng; Cao, Meng-Yi; Hao, Yue

    2015-10-01

    A C-band high-efficiency, high-gain two-stage power amplifier based on an AlGaN/GaN high electron mobility transistor (HEMT) is designed and measured in this paper. The input and output impedances for the optimum power-added efficiency (PAE) are determined at the fundamental and 2nd harmonic frequencies (f0 and 2f0). Harmonic manipulation networks are designed in both the driver stage and the power stage, which suppress the second harmonic to a very low level within the operating frequency band. The inter-stage matching network and the output power combining network are then designed to achieve a low insertion loss, so the PAE and the power gain are greatly improved. Over an operating frequency range of 5.4 GHz-5.8 GHz in CW mode, the amplifier delivers a maximum output power of 18.62 W, with a PAE of 55.15% and an associated power gain of 28.7 dB, an outstanding performance. Project supported by the National Key Basic Research Program of China (Grant No. 2011CBA00606), the Program for New Century Excellent Talents in University, China (Grant No. NCET-12-0915), and the National Natural Science Foundation of China (Grant No. 61334002).
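    The reported figures can be cross-checked against the standard definition PAE = (P_out − P_in)/P_DC, using the reported gain to recover the drive power. The implied DC supply power below is a derived estimate, not a value reported in the record:

```python
def db_to_ratio(db):
    """Convert a power gain in dB to a linear power ratio."""
    return 10 ** (db / 10)

p_out = 18.62     # W, reported maximum output power
gain_db = 28.7    # dB, reported associated power gain
pae = 0.5515      # reported power-added efficiency

p_in = p_out / db_to_ratio(gain_db)   # drive power recovered from the gain
p_dc = (p_out - p_in) / pae           # DC supply power implied by the PAE
drain_eff = p_out / p_dc              # plain drain efficiency for comparison

print(f"P_in  = {p_in * 1000:.1f} mW")
print(f"P_DC  = {p_dc:.1f} W (implied)")
print(f"drain efficiency = {drain_eff:.1%}")
```

    With about 28.7 dB of gain the drive power is only on the order of 25 mW, so the drain efficiency sits barely above the PAE, as expected for a high-gain multistage amplifier.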

  12. Staged Inference using Conditional Deep Learning for energy efficient real-time smart diagnosis.

    PubMed

    Parsa, Maryam; Panda, Priyadarshini; Sen, Shreyas; Roy, Kaushik

    2017-07-01

    Recent progress in biosensor technology and wearable devices has created a formidable opportunity for remote healthcare monitoring systems as well as real-time diagnosis and disease prevention. The use of data mining techniques is indispensable for analyzing the large pool of data generated by wearable devices. Deep learning is among the promising methods for analyzing such data for healthcare applications and disease diagnosis. However, conventional deep neural networks are computationally intensive, and it is impractical to use them for real-time diagnosis on low-powered on-body devices. We propose Staged Inference using Conditional Deep Learning (SICDL) as an energy-efficient approach for creating healthcare monitoring systems. For smart diagnostics, we observe that not all diagnoses are equally challenging. The proposed approach thus decomposes diagnosis into a preliminary analysis (such as healthy vs. unhealthy) and a detailed analysis (such as identifying the specific type of cardiac disease). The preliminary diagnosis is conducted in real time with a low-complexity neural network realized on the resource-constrained on-body device. The detailed diagnosis requires a larger network that is implemented remotely in the cloud and is conditionally activated only for detailed diagnosis (unhealthy individuals). We evaluated the proposed approach using available physiological sensor data from the Physionet databases, and achieved a 38% energy reduction in comparison to the conventional deep learning approach.
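    The conditional-activation idea can be sketched as a cheap on-device classifier whose confident "healthy" calls terminate inference, with everything else escalated to a larger remote model. Both models, the feature names, and the thresholds below are illustrative stand-ins, not the networks or datasets of the paper:

```python
import random

random.seed(1)

def small_on_device_model(features):
    """Tiny 'preliminary' model: returns (label, confidence in [0, 1])."""
    score = 0.8 * features["hr_deviation"] + 0.2 * features["noise"]
    label = "unhealthy" if score > 0.5 else "healthy"
    confidence = min(abs(score - 0.5) * 2, 1.0)
    return label, confidence

def large_cloud_model(features):
    """Expensive 'detailed' model, conditionally activated."""
    return "arrhythmia" if features["hr_deviation"] > 0.7 else "benign anomaly"

def staged_diagnosis(features, conf_threshold=0.6):
    label, conf = small_on_device_model(features)
    if label == "healthy" and conf >= conf_threshold:
        return label, False                   # stays on the device
    return large_cloud_model(features), True  # escalated to the cloud

samples = [{"hr_deviation": random.random(), "noise": random.random()}
           for _ in range(1000)]
results = [staged_diagnosis(s) for s in samples]
escalated = sum(1 for _, went_to_cloud in results if went_to_cloud)
print(f"escalated to detailed diagnosis: {escalated / 10:.1f}%")
```

    The energy saving comes from how often inference stops at the small model; the confidence threshold trades that saving against the risk of missing a case that needed the detailed analysis.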

  13. Two-Stage, Integrated, Geothermal-CO2 Storage Reservoirs: An Approach for Sustainable Energy Production, CO2-Sequestration Security, and Reduced Environmental Risk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buscheck, T A; Chen, M; Sun, Y

    2012-02-02

    We introduce a hybrid two-stage energy-recovery approach to sequester CO2 and produce geothermal energy at low environmental risk and low cost by integrating geothermal production with CO2 capture and sequestration (CCS) in saline, sedimentary formations. Our approach combines the benefits of the approach proposed by Buscheck et al. (2011b), which uses brine as the working fluid, with those of the approach first suggested by Brown (2000) and analyzed by Pruess (2006), using CO2 as the working fluid, and then extended to saline-formation CCS by Randolph and Saar (2011a). During stage one of our hybrid approach, formation brine, which is extracted to provide pressure relief for CO2 injection, is the working fluid for energy recovery. Produced brine is applied to a consumptive beneficial use: feedstock for fresh water production through desalination, saline cooling water, or make-up water to be injected into a neighboring reservoir operation, such as in Enhanced Geothermal Systems (EGS), where there is often a shortage of a working fluid. For stage one, it is important to find economically feasible disposition options to reduce the volume of brine requiring reinjection in the integrated geothermal-CCS reservoir (Buscheck et al. 2012a). During stage two, which begins as CO2 reaches the production wells, coproduced brine and CO2 are the working fluids. We present preliminary reservoir engineering analyses of this approach, using a simple conceptual model of a homogeneous, permeable CO2 storage formation/geothermal reservoir bounded by relatively impermeable sealing units. We assess both the CO2 sequestration capacity and the geothermal energy production potential as a function of the well spacing between CO2 injectors and brine/CO2 producers, for various well patterns and for a range of subsurface conditions.

  14. Demonstration of Isothermal Compressed Air Energy Storage to Support Renewable Energy Production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bollinger, Benjamin

    This project develops and demonstrates a megawatt (MW)-scale Energy Storage System that employs compressed air as the storage medium. An isothermal compressed air energy storage (ICAES™) system rated for 1 MW or more will be demonstrated in a full-scale prototype unit. Breakthrough cost-effectiveness will be achieved through the use of proprietary methods for isothermal gas cycling and staged gas expansion, implemented using industrially mature, readily available components. The ICAES approach uses an electrically driven mechanical system to raise air to high pressure for storage in low-cost pressure vessels, pipeline, or a lined-rock cavern (LRC). This air is later expanded through the same mechanical system to drive the electric motor as a generator. The approach incorporates two key efficiency-enhancing innovations: (1) isothermal (constant-temperature) gas cycling, which is achieved by mixing liquid with air (via spray or foam) to exchange heat with air undergoing compression or expansion; and (2) a novel, staged gas-expansion scheme that allows the drivetrain to operate at constant power while still allowing the stored gas to work over its entire pressure range. The ICAES system will be scalable, non-toxic, and cost-effective, making it suitable for firming renewables and for other grid applications.
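    The benefit of the isothermal cycling described above can be illustrated with the textbook ideal-gas compression-work formulas. The following sketch is not from the project; it simply compares isothermal against adiabatic compression work for illustrative numbers:

```python
import math

def isothermal_work(p1, v1, pr):
    """Ideal-gas isothermal compression work: W = p1 * V1 * ln(pr)."""
    return p1 * v1 * math.log(pr)

def adiabatic_work(p1, v1, pr, gamma=1.4):
    """Ideal-gas adiabatic (isentropic) compression work for air."""
    return p1 * v1 * (gamma / (gamma - 1)) * (pr ** ((gamma - 1) / gamma) - 1)

# Example: compress 1 m^3 of air at 100 kPa through a 10:1 pressure ratio.
w_iso = isothermal_work(100e3, 1.0, 10)   # ~230 kJ
w_adi = adiabatic_work(100e3, 1.0, 10)    # ~326 kJ
print(f"isothermal: {w_iso/1e3:.0f} kJ, adiabatic: {w_adi/1e3:.0f} kJ")
```

    Holding the air near constant temperature during compression thus saves roughly 30% of the input work at this pressure ratio, which is the efficiency lever the spray/foam heat exchange targets.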

  15. A new learning strategy for the two-time-scale neural controller with its application to the tracking control of rigid arms

    NASA Technical Reports Server (NTRS)

    Cheng, W.; Wen, J. T.

    1992-01-01

    A novel fast learning rule with fast weight identification is proposed for the two-time-scale neural controller, and a two-stage learning strategy is developed for the proposed neural controller. The results of the stability analysis show that both the tracking error and the fast weight error will be uniformly bounded and converge to a bounded region which depends only on the accuracy of the slow learning if the system is sufficiently excited. The efficiency of the two-stage learning is also demonstrated by a simulation of a two-link arm.
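    The fast/slow split described in the abstract can be caricatured as two gradient-style updates driven by the same tracking error but with widely separated gains. The toy below is purely illustrative (normalized LMS-style updates on a linear model, not the paper's actual learning rule); note that convergence relies on the input being sufficiently exciting, matching the abstract's condition:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-time-scale adaptation: fast weights w_f learn with a large gain,
# slow weights w_s with a small one, driven by the same tracking error.
# Normalized (NLMS-style) updates keep the iteration stable.
w_f, w_s = np.zeros(3), np.zeros(3)
eta_fast, eta_slow = 0.5, 0.05
target = np.array([1.0, -2.0, 0.5])       # unknown "true" weights

for _ in range(300):
    x = rng.standard_normal(3)            # sufficiently exciting input
    e = target @ x - (w_f + w_s) @ x      # tracking error
    w_f += eta_fast * e * x / (x @ x)     # fast time scale
    w_s += eta_slow * e * x / (x @ x)     # slow time scale

print(np.round(w_f + w_s, 3))             # close to [1. -2. 0.5]
```

    With persistent excitation the combined weight error contracts at every step, so the residual error is bounded by how well the slow weights have learned, mirroring the bounded-error result stated above.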

  16. Mach 6.5 air induction system design for the Beta 2 two-stage-to-orbit booster vehicle

    NASA Technical Reports Server (NTRS)

    Midea, Anthony C.

    1991-01-01

    A preliminary, two-dimensional, mixed compression air induction system is designed for the Beta II Two Stage to Orbit booster vehicle to minimize installation losses and efficiently deliver the required airflow. Design concepts, such as an external isentropic compression ramp and a bypass system were developed and evaluated for performance benefits. The design was optimized by maximizing installed propulsion/vehicle system performance. The resulting system design operating characteristics and performance are presented. The air induction system design has significantly lower transonic drag than similar designs and only requires about 1/3 of the bleed extraction. In addition, the design efficiently provides the integrated system required airflow, while maintaining adequate levels of total pressure recovery. The excellent performance of this highly integrated air induction system is essential for the successful completion of the Beta II booster vehicle mission.

  17. Two-stage fan. 3: Data and performance with rotor tip casing treatment, uniform and distorted inlet flows

    NASA Technical Reports Server (NTRS)

    Burger, G. D.; Hodges, T. R.; Keenan, M. J.

    1975-01-01

    A two-stage fan with a first-stage rotor design tip speed of 1450 ft/sec, a design pressure ratio of 2.8, and a corrected flow of 184.2 lbm/sec was tested with axial skewed slots in the casings over the tips of both rotors. The variable-stagger stators were set in the nominal positions. Casing treatment improved stall margin by nine percentage points at 70 percent speed but decreased stall margin, efficiency, and flow by small amounts at design speed. Treatment improved first-stage performance at low speed only and decreased second-stage performance at all operating conditions. Casing treatment did not affect the stall line with tip-radially distorted flow but improved stall margin with circumferentially distorted flow. Casing treatment increased the attenuation of both types of inlet flow distortion.

  18. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    PubMed

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
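    The sampling scheme itself is simple to state: draw a uniform random start within one sampling interval, then place sites at equal steps across the extent of the structure. A minimal sketch (function names and dimensions are hypothetical, not the authors' software):

```python
import random

def systematic_random_sites(extent, step, seed=None):
    """1-D systematic random sampling: a random start in [0, step),
    then equidistant sites across the extent of the structure."""
    rng = random.Random(seed)
    start = rng.uniform(0, step)
    n = int((extent - start) // step) + 1
    return [start + i * step for i in range(n)]

# A 2-D sampling grid over a structure is the Cartesian product
# of two independent 1-D patterns (units here: microns).
xs = systematic_random_sites(1000.0, 150.0, seed=1)
ys = systematic_random_sites(800.0, 150.0, seed=2)
grid = [(x, y) for x in xs for y in ys]
print(len(xs), len(ys), len(grid))
```

    Because only the start point is random, every object in the structure has the same probability of being sampled while the number of sites stays nearly constant, which is what makes the design statistically efficient.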

  19. Evaluation of antibiofilm effect of benzalkonium chloride, iodophore and sodium hypochlorite against biofilm of Pseudomonas aeruginosa of dairy origin.

    PubMed

    Pagedar, Ankita; Singh, Jitender

    2015-08-01

    The present study was undertaken with two objectives: a) to investigate and compare Pseudomonas aeruginosa isolates from two dairies for biofilm formation potential, and b) to compare three common biocides for biofilm eradication efficiency. Amongst the isolates from the commercial dairy, 70 % were strong and/or moderate biofilm formers, in comparison to 40 % of the isolates from the small-scale dairy. All isolates, irrespective of source, exhibited higher susceptibility to biocides in the planktonic stage than in biofilm. The antibiofilm efficiencies of three biocides, i.e. benzalkonium chloride, sodium hypochlorite and iodophore, were determined in terms of their minimum biofilm eradication concentration (MBEC). Our findings show that the three biocides were ineffective against preformed biofilms at recommended in-use concentrations. Biofilms were most resistant to benzalkonium chloride and least resistant to iodophore. A trend of decreasing MBECs was observed with extended contact time. The findings of the present study warrant a systematic approach to selecting the types and concentrations of biocides for application as antibiofilm agents in the food industry.

  20. Usability in product design--the importance and need for systematic assessment models in product development--Usa-Design Model (U-D) ©.

    PubMed

    Merino, Giselle Schmidt Alves Díaz; Teixeira, Clarissa Stefani; Schoenardie, Rodrigo Petry; Merino, Eugenio Andrés Diáz; Gontijo, Leila Amaral

    2012-01-01

    In product design, human factors are considered an element of differentiation, given that today's consumer demands are increasing. Safety, wellbeing, satisfaction, health, effectiveness, efficiency, and other aspects must be effectively incorporated into the product development process. This work proposes a usability assessment model that can be incorporated as an assessment tool. The methodological approach is organized in two stages. First, a literature review focuses specifically on usability and the development of user-centred products. After this, a usability model named Usa-Design (U-D©) is presented. It consists of four phases: understanding the use context; preliminary usability assessment (efficiency/effectiveness/satisfaction); assessment of usability principles; and results. U-D© is modular and flexible, allowing the principles used in Phase 3 to be changed according to the needs and scenario of each situation. With qualitative/quantitative measurement scales that are easy to understand and apply, the model's results are viable and applicable throughout the product development process.

  1. A Phase III Trial Comparing Two Dose-dense, Dose-intensified Approaches (ETC and PM(Cb)) for Neoadjuvant Treatment of Patients With High-risk Early Breast Cancer (GeparOcto)

    ClinicalTrials.gov

    2017-07-10

    Tubular Breast Cancer Stage II; Tubular Breast Cancer Stage III; Mucinous Breast Cancer Stage II; Breast Cancer Female NOS; Invasive Ductal Breast Cancer; HER2 Positive Breast Cancer; Inflammatory Breast Cancer

  2. Monitoring of waste disposal in deep geological formations

    NASA Astrophysics Data System (ADS)

    German, V.; Mansurov, V.

    2003-04-01

    This paper advances a kinetic approach to describing the rock failure process and to microseismic monitoring of waste disposal sites. On the basis of a two-stage model of the failure process, the capability of forecasting rock fracture is demonstrated. Requirements for the monitoring system, such as real-time data registration and processing and its precision range, are formulated. A method for delineating failure nuclei in a rock mass is presented; it is implemented in a software program for forecasting strong seismic events and is based on direct use of the fracture concentration criterion. The method is applied to the microseismic event database of the North Ural Bauxite Mine. The results of this application, including the method's efficiency, stability, and rockburst forecasting capability, are discussed.
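    One common form of the fracture concentration criterion compares the mean spacing between defect centers with the mean defect size; a failure nucleus is expected where their ratio K falls below a critical value, often quoted near 3. The sketch below is a hypothetical illustration of that form, not the authors' implementation, and the threshold is an assumption:

```python
def concentration_parameter(n_cracks, volume, mean_crack_length):
    """Fracture-concentration parameter K = mean spacing / mean size,
    with mean spacing estimated as (volume / n_cracks) ** (1/3).
    A hypothetical sketch of one common form of the criterion."""
    mean_spacing = (volume / n_cracks) ** (1.0 / 3.0)
    return mean_spacing / mean_crack_length

# Example: 1000 microcracks of 0.2 m mean length in 1000 m^3 of rock.
K = concentration_parameter(1000, 1000.0, 0.2)
K_CRITICAL = 3.0            # commonly cited threshold; assumption here
print(K, K < K_CRITICAL)    # K = 5.0, so no failure nucleus yet
```

    Tracking K within moving spatial windows of the microseismic catalogue is one way such a criterion can flag volumes approaching failure.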

  3. Quantum-engineered interband cascade photovoltaic devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Razeghi, Manijeh; Tournié, Eric; Brown, Gail J.

    2013-12-18

    Quantum-engineered multiple-stage photovoltaic (PV) devices are explored based on InAs/GaSb/AlSb interband cascade (IC) structures. These ICPV devices employ multiple discrete absorbers that are connected in series by wide-bandgap unipolar barriers using type-II heterostructure interfaces to facilitate carrier transport between cascade stages, similar to IC lasers. The discrete architecture is beneficial for improving the collection efficiency and for spectral splitting by utilizing absorbers with different bandgaps. As such, the photo-voltages from each individual cascade stage in an ICPV device add together, creating a high overall open-circuit voltage, similar to conventional multi-junction tandem solar cells. Furthermore, photo-generated carriers can be collected with nearly 100% efficiency in each stage. This is because the carriers travel over only a single cascade stage, designed to be shorter than a typical diffusion length. The approach is of significant importance for operation at high temperatures, where the diffusion length is reduced. Here, we will present our recent progress in the study of ICPV devices, which includes the demonstration of ICPV devices at room temperature and above with narrow bandgaps (e.g. 0.23 eV) and high open-circuit voltages. © (2013) Society of Photo-Optical Instrumentation Engineers (SPIE).

  4. Chlorpyrifos degradation in a biomixture of biobed at different maturity stages.

    PubMed

    Tortella, G R; Rubilar, O; Castillo, M d P; Cea, M; Mella-Herrera, R; Diez, M C

    2012-06-01

    The biomixture is a principal element controlling the degradation efficacy of the biobed. The maturity of the biomixture affects the overall performance of the biobed, but this has not yet been well studied. The aim of this research was to evaluate the effect of using a typical Swedish biomixture composition at different maturity stages on the degradation of chlorpyrifos. Tests were made using biomixture at three maturity stages: 0 d (BC0), 15 d (BC15) and 30 d (BC30); chlorpyrifos was added to the biobeds at final concentrations of 200, 320 and 480 mg kg(-1). Chlorpyrifos degradation in the biomixture was monitored over time. Formation of TCP (3,5,6-trichloro-2-pyridinol) was also quantified, and hydrolytic and phenoloxidase activities were measured. The biomixture efficiently degraded chlorpyrifos (degradation efficiency >50%) at all the evaluated maturity stages. However, chlorpyrifos degradation decreased with increasing concentrations of the pesticide. TCP formation occurred in all biomixtures, but the greatest accumulation was observed in BC30. Significant differences were found in both phenoloxidase and hydrolytic activities across the three maturity stages of biomixture evaluated. These two biological activities were also affected by the increase in pesticide concentration. In conclusion, our results demonstrated that chlorpyrifos can be degraded efficiently at all the evaluated maturity stages. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Design of Center-TRACON Automation System

    NASA Technical Reports Server (NTRS)

    Erzberger, Heinz; Davis, Thomas J.; Green, Steven

    1993-01-01

    A system for the automated management and control of terminal area traffic, referred to as the Center-TRACON Automation System (CTAS), is being developed at NASA Ames Research Center. In a cooperative program, NASA and the FAA have efforts underway to install and evaluate the system at the Denver area and Dallas/Ft. Worth area air traffic control facilities. This paper will review the CTAS architecture and automation functions, as well as the integration of CTAS into the existing operational system. CTAS consists of three types of integrated tools that provide computer-generated advisories for both en-route and terminal area controllers to guide them in managing and controlling arrival traffic efficiently. One tool, the Traffic Management Advisor (TMA), generates runway assignments, landing sequences and landing times for all arriving aircraft, including those originating from nearby feeder airports. TMA also assists in runway configuration control and flow management. Another tool, the Descent Advisor (DA), generates clearances for the en-route controllers handling arrival flows to metering gates. The DA's clearances ensure fuel-efficient and conflict-free descents to the metering gates at specified crossing times. In the terminal area, the Final Approach Spacing Tool (FAST) provides heading and speed advisories that help controllers produce an accurately spaced flow of aircraft on the final approach course. Databases consisting of several hundred aircraft performance models, airline preferred operational procedures, and a three-dimensional wind model support the operation of CTAS. The first component of CTAS, the Traffic Management Advisor, is being evaluated at the Denver TRACON and the Denver Air Route Traffic Control Center. The second component, the Final Approach Spacing Tool, will be evaluated in several stages at the Dallas/Fort Worth Airport beginning in October 1993.
An initial stage of the Descent Advisor tool is being prepared for testing at the Denver Center in late 1994. Operational evaluations of all three integrated CTAS tools are expected to begin at the two field sites in 1995.

  6. Investigation of flow in axial turbine stage without shroud-seal

    NASA Astrophysics Data System (ADS)

    Straka, Petr; Němec, Martin; Jelínek, Thomáš

    2015-05-01

    This article investigates the influence of radial gaps on the efficiency of an axial turbine stage. The investigation was carried out for the axial stage of a low-power turbine with a drum-type rotor without a shroud. In this configuration, the flow through the radial gap under the hub end of the stator blades and above the tip end of the rotor blades generates strong secondary flows, which decrease the efficiency of the stage. This problem was studied by experiment as well as by numerical modelling. The experiment was performed on a test rig equipped with a water-brake dynamometer, a torque meter and a rotatable stator together with a linear probe manipulator. Numerical modelling was carried out for both steady flow, using a "mixing plane" interface, and unsteady flow, using a "sliding mesh" interface between the stator and rotor wheels. The influence of the radial gap was studied in two configurations: a) positive and b) negative overlapping of the tip ends of the rotor blades. The efficiency of the axial stage as a function of the expansion ratio, velocity ratio and configuration, as well as details of the flow fields, are presented in this paper.

  7. A Decade of Experience With the Primary Pull-Through for Hirschsprung Disease in the Newborn Period

    PubMed Central

    Teitelbaum, Daniel H.; Cilley, Robert E.; Sherman, Neil J.; Bliss, David; Uitvlugt, Neal D.; Renaud, Elizabeth J.; Kirstioglu, Irfan; Bengston, Tamara; Coran, Arnold G.

    2000-01-01

    Objective To determine whether use of a primary pull-through would result in equivalent perioperative and long-term complications compared with the two-stage approach. Summary Background Data During the past decade, the authors have advanced the use of a primary pull-through for Hirschsprung disease in the newborn, and preliminary results have suggested excellent outcomes. Methods From May 1989 through September 1999, 78 infants underwent a primary endorectal pull-through (ERPT) procedure at four pediatric surgical sites. Data were collected from medical records and a parental telephone interview (if the child was older than 3 years) to assess stooling patterns. A similar group of patients treated in a two-stage fashion served as a historical control. Results Mean age at the time of ERPT was 17.8 days of life. Comparing primary ERPT with a two-stage approach showed a trend toward a higher incidence of enterocolitis in the primary ERPT group compared with those with a two-stage approach (42.0% vs. 22.0%). Other complications were either lower in the primary ERPT group or similar, including rate of soiling and development of a bowel obstruction. Median number of stools per day was two at a mean follow-up of 4.1 ± 2.5 years, with 83% having three or fewer stools per day. Conclusions Performance of a primary ERPT for Hirschsprung disease in the newborn is an excellent option. Results were comparable to those of the two-stage procedure. The greater incidence of enterocolitis appears to be due to a lower threshold in diagnosing enterocolitis in more recent years. PMID:10973387

  8. Metal fractionation in olive oil and urban sewage sludges using the three-stage BCR sequential extraction method and microwave single extractions.

    PubMed

    Pérez Cid, B; Fernández Alborés, A; Fernández Gómez, E; Faliqé López, E

    2001-08-01

    The conventional three-stage BCR sequential extraction method was employed for the fractionation of heavy metals in sewage sludge samples from an urban wastewater treatment plant and from an olive oil factory. The results obtained for Cu, Cr, Ni, Pb and Zn in these samples were compared with those attained by a simplified extraction procedure based on microwave single extractions, using the same reagents as employed in each individual BCR fraction. The microwave operating conditions in the single extractions (heating time and power) were optimized for all the metals studied in order to achieve an extraction efficiency similar to that of the conventional BCR procedure. The measurement of metals in the extracts was carried out by flame atomic absorption spectrometry. The results obtained in the first and third fractions by the proposed procedure were, for all metals, in good agreement with those obtained using the BCR sequential method. Although in the reducible fraction the extraction efficiency of the accelerated procedure was inferior to that of the conventional method, the overall metals leached by the microwave single and sequential extractions were basically the same (recoveries between 90.09 and 103.7%), except for Zn in urban sewage sludges, where an extraction efficiency of 87% was achieved. Chemometric analysis showed a good correlation between the results given by the two extraction methodologies. The application of the proposed approach to a certified reference material (CRM-601) also provided satisfactory results in the first and third fractions, as was observed for the sludge samples analysed.
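    The recovery figures quoted compare the sum of the sequentially extracted fractions against the total leached by the comparison method. A hypothetical example of the arithmetic (the Cu values below are illustrative, not data from the paper):

```python
def recovery_percent(sequential_fractions_mg_kg, comparison_total_mg_kg):
    """Recovery (%) of the summed sequential fractions relative to the
    total metal leached by the comparison extraction. Illustrative only."""
    return 100.0 * sum(sequential_fractions_mg_kg) / comparison_total_mg_kg

# Hypothetical Cu concentrations (mg/kg) for the three BCR fractions
# (acid-soluble, reducible, oxidisable) vs. a single-extraction total:
r = recovery_percent([120.0, 85.0, 42.0], 250.0)
print(round(r, 1))  # 98.8
```

    A recovery close to 100% indicates that the accelerated single-extraction scheme leaches essentially the same metal pool as the full sequential procedure.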

  9. Numerical analysis of flow interaction of turbine system in two-stage turbocharger of internal combustion engine

    NASA Astrophysics Data System (ADS)

    Liu, Y. B.; Zhuge, W. L.; Zhang, Y. J.; Zhang, S. Y.

    2016-05-01

    To meet the goals of energy conservation and emission reduction, high intake pressure is needed to satisfy the demands of high power density and high EGR rates in internal combustion engines. The power density of current diesel engines has reached 90 kW/L, and the required intake pressure ratio exceeds 5. A two-stage turbocharging system is an effective way to realize such a high compression ratio. Because the compression work of a turbocharging system derives from exhaust gas energy, the efficiency of exhaust gas energy use, which is governed by the design and matching of the turbine system, is important to the performance of a highly supercharged engine. A conventional turbine system is assembled from single-stage turbocharger turbines, and turbine matching is based on turbine maps measured on a test rig. The flow between the turbines is assumed uniform, and the outlet physical quantities of each turbine are taken to equal ambient values. However, as several studies have demonstrated, three-dimensional flow-field distortion and changes in the outlet quantities influence the performance of the turbine system. For an engine equipped with a two-stage turbocharging system, optimizing the turbine system design will increase the efficiency of exhaust gas energy use and thereby the engine power density. Flow interaction within the turbine system, however, changes the flow in each turbine and influences its performance. To characterize the interaction between the high-pressure and low-pressure turbines, the flow in the turbine system is modeled and simulated numerically. The calculation results suggest that the static pressure field at the inlet to the low-pressure turbine raises the back pressure of the high-pressure turbine, although the efficiency of the high-pressure turbine changes little; the distorted velocity field at the outlet of the high-pressure turbine produces swirl at the inlet to the low-pressure turbine. Clockwise swirl results in a large negative angle of attack at the rotor inlet, which causes flow losses in the turbine impeller passages and decreases turbine efficiency. The negative angle of attack decreases when the inlet swirl is anti-clockwise, and the efficiency of the low-pressure turbine can then be increased by 3% compared with the clockwise-swirl inlet condition. Flow simulation and analysis thus help clarify the interaction mechanism of the turbine system and optimize its design.

  10. Factors affecting the surgical approach and timing of bilateral adrenalectomy.

    PubMed

    Lan, Billy Y; Taskin, Halit E; Aksoy, Erol; Birsen, Onur; Dural, Cem; Mitchell, Jamie; Siperstein, Allan; Berber, Eren

    2015-07-01

    Laparoscopic adrenalectomy has gained widespread acceptance. However, the optimal surgical approach to laparoscopic bilateral adrenalectomy has not been clearly defined. The aim of this study is to analyze the patient and intraoperative factors affecting the feasibility and outcome of different surgical approaches, in order to define an algorithm for bilateral adrenalectomy. Between 2000 and 2013, all patients who underwent bilateral adrenalectomy at a single institution were selected for retrospective analysis. Patient factors, surgical approach, operative outcomes, and complications were analyzed. From 2000 to 2013, 28 patients underwent bilateral adrenalectomy. Patient diagnoses included Cushing's disease (n = 19), pheochromocytoma (n = 7), and adrenal metastasis (n = 2). Of these 28 patients, successful laparoscopic adrenalectomy was performed in all but 2. Twenty-three of the 26 adrenalectomies were completed in a single stage, while three were performed as a staged approach, owing to deterioration in intraoperative respiratory status in two patients and patient body habitus in one. Of the adrenalectomies completed using the minimally invasive approach, a posterior retroperitoneal (PR) approach was performed in 17 patients and a lateral transabdominal (LT) approach in 9 patients. Patients who underwent the LT approach had higher BMI, larger tumor size, and other concomitant intraabdominal pathology. Hospital stay for laparoscopic adrenalectomy was 3.5 days, compared to 5 and 12 days for the two open cases. There was no 30-day hospital mortality, and 5 patients had minor complications in the entire cohort. A minimally invasive operation is feasible in 93% of patients undergoing bilateral adrenalectomy, with 65% of adrenalectomies performed using the PR approach. Indications for the LT approach include morbid obesity, tumor size >6 cm, and other concomitant intraabdominal pathology. Single-stage adrenalectomies are feasible in most patients, with respiratory instability during prolonged operative times being the main indication for a staged approach.

  11. Efficiency of a new bioaerosol sampler in sampling Betula pollen for antigen analyses.

    PubMed

    Rantio-Lehtimäki, A; Kauppinen, E; Koivikko, A

    1987-01-01

    A new bioaerosol sampler consisting of a Liu-type atmospheric aerosol sampling inlet, a coarse-particle inertial impactor, a two-stage high-efficiency virtual impactor (aerodynamic particle diameters: greater than or equal to 8 microns, 8-2.5 microns, and less than 2.5 microns; sampling on filters) and a liquid-cooled condenser was designed, fabricated and field-tested in sampling birch (Betula) pollen grains and smaller particles containing Betula antigens. Both microscopical (pollen counts) and immunochemical (enzyme-linked immunosorbent assay) analyses of each stage were carried out. The new sampler was significantly more efficient than the Burkard trap, e.g. in sampling particles of Betula pollen size (ca. 25 microns in diameter). This was most prominent during pollen peak periods (e.g. on May 19th, 1985: 9482 Betula pollen grains m(-3) of air in the virtual impactor versus 2540 in the Burkard trap). Betula antigens were also detected in filter stages where no intact pollen grains were found; in the condenser unit, by contrast, the antigen concentrations were very low.

  12. Core compressor exit stage study, 2

    NASA Technical Reports Server (NTRS)

    Behlke, R. F.; Burdsall, E. A.; Canal, E., Jr.; Korn, N. D.

    1979-01-01

    Two three-stage compressors were designed and tested to determine the effects of aspect ratio on compressor performance. The first compressor was designed with an aspect ratio of 0.81; the other, with an aspect ratio of 1.22. Both compressors had a hub-tip ratio of 0.915, representative of the rear stages of a core compressor, and both were designed to achieve a 15.0% surge margin at design pressure ratios of 1.357 and 1.324, respectively, at a mean wheel speed of 167 m/sec. At design speed, the 0.81 aspect ratio compressor achieved a pressure ratio of 1.346 at a corrected flow of 4.28 kg/sec and an adiabatic efficiency of 86.1%. The 1.22 aspect ratio design achieved a pressure ratio of 1.314 at 4.35 kg/sec flow and 87.0% adiabatic efficiency. Surge margin at peak efficiency was 24.0% with the lower aspect ratio blading, compared with 12.4% with the higher aspect ratio blading.
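    The surge-margin percentages quoted above are consistent with one common definition: the percentage rise in pressure ratio from the operating point to the surge line at constant corrected flow. Definitions vary between authors, and the numbers below are illustrative only, not data from the report:

```python
def surge_margin(pr_surge, pr_operating):
    """One common surge-margin definition: percentage increase in
    pressure ratio from the operating point to the surge line at
    constant corrected flow. Conventions differ between authors."""
    return (pr_surge / pr_operating - 1.0) * 100.0

# Illustrative: a surge-line pressure ratio of 1.56 against the
# 0.81-aspect-ratio design point of 1.357 would give ~15% margin.
sm = surge_margin(1.56, 1.357)
print(round(sm, 1))  # 15.0
```

    Expressed this way, the margin tells the designer how much the back pressure can rise above the design point before the compressor stalls.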

  13. A decision science approach for integrating social science in climate and energy solutions

    NASA Astrophysics Data System (ADS)

    Wong-Parodi, Gabrielle; Krishnamurti, Tamar; Davis, Alex; Schwartz, Daniel; Fischhoff, Baruch

    2016-06-01

    The social and behavioural sciences are critical for informing climate- and energy-related policies. We describe a decision science approach to applying those sciences. It has three stages: formal analysis of decisions, characterizing how well-informed actors should view them; descriptive research, examining how people actually behave in such circumstances; and interventions, informed by formal analysis and descriptive research, designed to create attractive options and help decision-makers choose among them. Each stage requires collaboration with technical experts (for example, climate scientists, geologists, power systems engineers and regulatory analysts), as well as continuing engagement with decision-makers. We illustrate the approach with examples from our own research in three domains related to mitigating climate change or adapting to its effects: preparing for sea-level rise, adopting smart grid technologies in homes, and investing in energy efficiency for office buildings. The decision science approach can facilitate creating climate- and energy-related policies that are behaviourally informed, realistic and respectful of the people whom they seek to aid.

  14. [Characteristics of phosphorus uptake and use efficiency of rice with high yield and high phosphorus use efficiency].

    PubMed

    Li, Li; Zhang, Xi-Zhou; Li, Tinx-Xuan; Yu, Hai-Ying; Ji, Lin; Chen, Guang-Deng

    2014-07-01

    A total of twenty-seven medium-maturing rice varieties as parent materials were divided into four types based on P use efficiency for grain yield in a 2011 field experiment with normal phosphorus (P) application. The rice variety with high yield and high P efficiency was identified in a 2012 pot experiment with normal and low P applications, and the contribution rates of the various P efficiencies to yield were investigated. There were significant genotype differences in yield and P efficiency among the test materials. GRLu17/AiTTP//Lu17_2 (QR20) was identified as a variety with high yield and high P efficiency; its yields at the low and normal rates of P application were 1.96 and 1.92 times that of Yuxiang B, respectively. The contribution rate of P accumulation to yield was greater than those of P grain production efficiency and P harvest index across the field and pot experiments. The contribution rates of P accumulation and P grain production efficiency to yield were not significantly different under the normal P condition, whereas obvious differences were observed under the low P condition (66.5% and 26.6%). The minimal contribution to yield was from the P harvest index (11.8%). Under the normal P condition, the contribution rates of P accumulation to yield and to P harvest index were highest at the jointing-heading stage (93.4% and 85.7%, respectively); in addition, the contribution rate of P accumulation to grain production efficiency was 41.8%. Under the low P condition, the maximal contribution rates of P accumulation to yield and to grain production efficiency were observed at the tillering-jointing stage (56.9% and 20.1%, respectively), and the contribution rate of P accumulation to P harvest index was 16.0%. The yield, P accumulation, and P harvest index of QR20 were significantly higher under the normal P condition than under the low P condition, by 20.6%, 18.1% and 18.2%, respectively. The contribution rates of the P efficiencies to yield ranked as P uptake efficiency > P utilization efficiency > P transportation efficiency. The greatest contribution rate of P accumulation to yield occurred at the jointing-heading stage with normal P application, and at the tillering-jointing stage with low P application. These two stages may therefore be the critical periods for coordinating high yield and high P efficiency in rice.

  15. Comparison of various microbial inocula for the efficient anaerobic digestion of Laminaria hyperborea.

    PubMed

    Sutherland, Alastair D; Varela, Joao C

    2014-01-23

    The hydrolysis of seaweed polysaccharides is the rate-limiting step in anaerobic digestion (AD) of seaweeds. Seven different microbial inocula and a mixture of these (inoculum 8) were therefore compared in triplicate, each grown over four weeks in static culture, for the ability to degrade Laminaria hyperborea seaweed and produce methane through AD. All the inocula could degrade L. hyperborea and produce methane to some extent. However, an inoculum of slurry from a human-sewage anaerobic digester, one of rumen contents from seaweed-eating North Ronaldsay sheep, and inoculum 8 used the most seaweed volatile solids (VS) (means ranged between 59 and 68% used), suggesting that each of these contained efficient seaweed-polysaccharide-digesting bacteria. The human sewage inoculum, an inoculum of anaerobic marine mud mixed with rotting seaweed, and inoculum 8 all developed to give higher volumes of methane (means between 41 and 62.5 ml g-1 of seaweed VS by week four) compared to the other inocula (means between 3.5 and 27.5 ml g-1 VS). Inoculum 8 also gave the highest acetate production (6.5 mmol g-1 VS) in a single-stage fermenter AD system and produced the most methane (8.4 ml mmol acetate-1) in phase II of a two-stage AD system. Overall, inoculum 8 was found to be the most efficient inoculum for AD of seaweed. The study therefore showed that selection and inclusion of efficient polysaccharide-hydrolysing bacteria and methanogenic archaea in an inoculum can increase methane productivity in AD of L. hyperborea. This inoculum will now be tested in larger-scale (10 L) continuously stirred reactors, optimised for feed rate and retention time, to determine maximum methane production under single-stage and two-stage AD systems.

  16. Comparison of various microbial inocula for the efficient anaerobic digestion of Laminaria hyperborea

    PubMed Central

    2014-01-01

    Background The hydrolysis of seaweed polysaccharides is the rate-limiting step in anaerobic digestion (AD) of seaweeds. Seven different microbial inocula and a mixture of these (inoculum 8) were therefore compared in triplicate, each grown over four weeks in static culture, for the ability to degrade Laminaria hyperborea seaweed and produce methane through AD. Results All the inocula could degrade L. hyperborea and produce methane to some extent. However, an inoculum of slurry from a human-sewage anaerobic digester, one of rumen contents from seaweed-eating North Ronaldsay sheep, and inoculum 8 used the most seaweed volatile solids (VS) (means ranged between 59 and 68% used), suggesting that each of these contained efficient seaweed-polysaccharide-digesting bacteria. The human sewage inoculum, an inoculum of anaerobic marine mud mixed with rotting seaweed, and inoculum 8 all developed to give higher volumes of methane (means between 41 and 62.5 ml g-1 of seaweed VS by week four) compared to the other inocula (means between 3.5 and 27.5 ml g-1 VS). Inoculum 8 also gave the highest acetate production (6.5 mmol g-1 VS) in a single-stage fermenter AD system and produced the most methane (8.4 ml mmol acetate-1) in phase II of a two-stage AD system. Conclusions Overall, inoculum 8 was found to be the most efficient inoculum for AD of seaweed. The study therefore showed that selection and inclusion of efficient polysaccharide-hydrolysing bacteria and methanogenic archaea in an inoculum can increase methane productivity in AD of L. hyperborea. This inoculum will now be tested in larger-scale (10 L) continuously stirred reactors, optimised for feed rate and retention time, to determine maximum methane production under single-stage and two-stage AD systems. PMID:24456825

  17. Comparison of geothermal power conversion cycles

    NASA Technical Reports Server (NTRS)

    Elliott, D. G.

    1976-01-01

    Geothermal power conversion cycles are compared with respect to recovery of the available wellhead power. The cycles compared are flash steam, in which steam turbines are driven by steam separated from one or more flash stages; binary, in which heat is transferred from the brine to an organic turbine cycle; flash binary, in which heat is transferred from flashed steam to an organic turbine cycle; and dual steam, in which two-phase expanders are driven by the flashing steam-brine mixture and steam turbines by the separated steam. The expander efficiencies assumed are 0.7 for steam turbines, 0.8 for organic turbines, and 0.6 for two-phase expanders. The fraction of available wellhead power delivered by each cycle is found to be about the same at all brine temperatures: 0.65 with one stage and 0.7 with four stages for dual steam; 0.4 with one stage and 0.6 with four stages for flash steam; 0.5 for binary; and 0.3 with one stage and 0.5 with four stages for flash binary.
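    The reported power-recovery fractions can be captured in a small lookup sketch. The fractions are taken directly from the abstract; the function name and the example wellhead power are illustrative only:

```python
# Fraction of available wellhead power delivered by each conversion cycle,
# as reported in the abstract (single-stage and four-stage where applicable).
DELIVERED_FRACTION = {
    ("dual steam", 1): 0.65, ("dual steam", 4): 0.70,
    ("flash steam", 1): 0.40, ("flash steam", 4): 0.60,
    ("binary", 1): 0.50,
    ("flash binary", 1): 0.30, ("flash binary", 4): 0.50,
}

def delivered_power_mw(cycle: str, stages: int, wellhead_available_mw: float) -> float:
    """Delivered power = fraction of available wellhead power recovered by the cycle."""
    return DELIVERED_FRACTION[(cycle, stages)] * wellhead_available_mw

# Example: a hypothetical 10 MW (available) wellhead with a four-stage dual steam cycle.
print(delivered_power_mw("dual steam", 4, 10.0))  # → 7.0
```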

  18. Rule-Mining for the Early Prediction of Chronic Kidney Disease Based on Metabolomics and Multi-Source Data

    PubMed Central

    Luck, Margaux; Bertho, Gildas; Bateson, Mathilde; Karras, Alexandre; Yartseva, Anastasia; Thervet, Eric

    2016-01-01

    1H Nuclear Magnetic Resonance (NMR)-based metabolic profiling is very promising for diagnosing the stages of chronic kidney disease (CKD). Because of the high dimension of NMR spectra datasets and the complex mixture of metabolites in biological samples, the identification of discriminant biomarkers of a disease is challenging. None of the chemometric methods widely used in NMR metabolomics performs a local exhaustive exploration of the data. We developed a descriptive and easily understandable approach that searches for discriminant local phenomena using an original exhaustive rule-mining algorithm in order to predict two groups of patients: 1) patients having low to mild CKD stages with no renal failure and 2) patients having moderate to established CKD stages with renal failure. Our predictive algorithm explores the m-dimensional variable space to capture the local overdensities of the two groups of patients in the form of easily interpretable rules. Afterwards, an L2-penalized logistic regression on the discriminant rules was used to build predictive models of the CKD stages. We explored a complex multi-source dataset that included the clinical, demographic, clinical chemistry, renal pathology, and urine metabolomic data of a cohort of 110 patients. Given this multi-source dataset and the complex nature of metabolomic data, we analyzed 1- and 2-dimensional rules in order to integrate the information carried by interactions between the variables. The results indicated that our local algorithm is a valuable analytical method for the precise characterization of multivariate CKD stage profiles, and is as efficient as the classical global model using chi-squared variable selection, with approximately 70% correct classification.
The resulting predictive models predominantly identify urinary metabolites (such as 3-hydroxyisovalerate, carnitine, citrate, dimethylsulfone, creatinine and N-methylnicotinamide) as relevant variables, indicating that CKD significantly affects the urinary metabolome. Moreover, knowledge of urinary metabolite concentrations alone correctly classifies patients' CKD stage. PMID:27861591
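    As a rough illustration of the pipeline described above (interval rules used as binary features, then an L2-penalized logistic regression on those features), the following self-contained sketch uses synthetic data and hypothetical metabolite thresholds; it is not the authors' algorithm, cohort, or mined rules:

```python
import math, random

def rule_feature(sample, var, lo, hi):
    """Binary rule indicator: 1 if the sample's variable lies in [lo, hi)."""
    return 1.0 if lo <= sample[var] < hi else 0.0

def fit_l2_logistic(X, y, lam=0.1, lr=0.5, epochs=2000):
    """Plain gradient descent on the L2-penalized logistic loss (intercept unpenalized)."""
    n, m = len(X), len(X[0])
    w, b = [0.0] * m, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * m, 0.0
        for xi, yi in zip(X, y):
            p = 1.0 / (1.0 + math.exp(-(sum(wj * xj for wj, xj in zip(w, xi)) + b)))
            err = p - yi
            for j in range(m):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * (gwj / n + lam * wj) for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, xi):
    return 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0

# Synthetic cohort (hypothetical numbers): low citrate and high creatinine
# tend to mark the renal-failure group.
random.seed(0)
samples, labels = [], []
for _ in range(200):
    failure = random.random() < 0.5
    samples.append({"citrate": random.gauss(0.3 if failure else 0.8, 0.1),
                    "creatinine": random.gauss(1.5 if failure else 0.9, 0.15)})
    labels.append(1 if failure else 0)

# 1-dimensional discriminant rules, assumed to have been mined beforehand.
rules = [("citrate", -10.0, 0.5), ("creatinine", 1.2, 10.0)]
X = [[rule_feature(s, *r) for r in rules] for s in samples]
w, b = fit_l2_logistic(X, labels)
acc = sum(predict(w, b, xi) == yi for xi, yi in zip(X, labels)) / len(labels)
print(round(acc, 2))  # high in-sample accuracy on this easy synthetic split
```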

  19. Terrestrial photovoltaic collector technology trends

    NASA Technical Reports Server (NTRS)

    Shimada, K.; Costogue, E.

    1984-01-01

    Following the path of space PV collector development in its early stages, terrestrial PV technologies based upon single-crystal silicon have matured rapidly. Currently, terrestrial PV cells with efficiencies approaching space-cell efficiencies are being fabricated into modules at a fraction of the space PV module cost. New materials, including CuInSe2 and amorphous silicon, are being developed to lower cost, and multijunction materials to achieve higher efficiency. Large grid-interactive, tracking flat-plate power systems and concentrator PV systems totaling about 10 MW are already in operation. Collector technology development, for both flat-plate and concentrator systems, will continue under an extensive government and private industry partnership.

  20. Falling Leaves Inspired ZnO Nanorods-Nanoslices Hierarchical Structure for Implant Surface Modification with Two Stage Releasing Features.

    PubMed

    Liao, Hang; Miao, Xinxin; Ye, Jing; Wu, Tianlong; Deng, Zhongbo; Li, Chen; Jia, Jingyu; Cheng, Xigao; Wang, Xiaolei

    2017-04-19

    Inspired by falling leaves, a ZnO nanorods-nanoslices hierarchical structure (NHS) was constructed to modify the surfaces of two widely used implant materials: titanium (Ti) and tantalum (Ta). In this way, a two-stage release of antibacterial active substances was realized, addressing the clinical need for long-term, broad-spectrum antibacterial activity. At the early stage (within 48 h), the NHS exhibited rapid release to kill bacteria around the implant immediately. At the second stage (over 2 weeks), the NHS exhibited slow release for long-term inhibition. The excellent antibacterial activity of the ZnO NHS was further confirmed by in vivo animal tests. In the subsequent experiments, the ZnO NHS coating exhibited the advantages of high efficiency, low toxicity, and long-term durability, and could be a feasible way to reduce the abuse of antibiotics in implant-related surgery.

  1. Bioactive Coating with Two-Layer Hierarchy of Relief Obtained by Sol-Gel Method with Shock Drying and Osteoblast Response of Its Structure.

    PubMed

    Zemtsova, Elena G; Arbenin, Andrei Y; Yudintceva, Natalia M; Valiev, Ruslan Z; Orekhov, Evgeniy V; Smirnov, Vladimir M

    2017-10-13

    In this work, we analyze the efficiency of the modification of the implant surface. This modification was achieved by forming a two-level relief hierarchy by means of a sol-gel approach that included dip coating with subsequent shock drying. Using this method, we fabricated a nanoporous layer with micron-sized defects on the nanotitanium surface. The present work continues an earlier study by our group, wherein an acceleration of osteoblast-like cell adhesion was found. In the present paper, we give the results of a more detailed evaluation of coating efficiency. Specifically, cytological analysis was performed that included the study of marker levels of osteoblast-like cell differentiation. We found a significant increase in the activity of alkaline phosphatase at the initial incubation stage. This is very important for implantation, since such an effect helps shorten the induction time of implant engraftment. Moreover, osteopontin expression remains high over long exposure times, indicating a prolonged osteogenic effect of the coating. The results suggest accelerated mineralization of the peri-implant area and, correspondingly, the potential use of the developed coatings for bone implantation.

  2. Bioactive Coating with Two-Layer Hierarchy of Relief Obtained by Sol-Gel Method with Shock Drying and Osteoblast Response of Its Structure

    PubMed Central

    Zemtsova, Elena G.; Arbenin, Andrei Y.; Valiev, Ruslan Z.; Orekhov, Evgeniy V.; Smirnov, Vladimir M.

    2017-01-01

    In this work, we analyze the efficiency of the modification of the implant surface. This modification was achieved by forming a two-level relief hierarchy by means of a sol-gel approach that included dip coating with subsequent shock drying. Using this method, we fabricated a nanoporous layer with micron-sized defects on the nanotitanium surface. The present work continues an earlier study by our group, wherein an acceleration of osteoblast-like cell adhesion was found. In the present paper, we give the results of a more detailed evaluation of coating efficiency. Specifically, cytological analysis was performed that included the study of marker levels of osteoblast-like cell differentiation. We found a significant increase in the activity of alkaline phosphatase at the initial incubation stage. This is very important for implantation, since such an effect helps shorten the induction time of implant engraftment. Moreover, osteopontin expression remains high over long exposure times, indicating a prolonged osteogenic effect of the coating. The results suggest accelerated mineralization of the peri-implant area and, correspondingly, the potential use of the developed coatings for bone implantation. PMID:29027930

  3. Interband cascade lasers with >40% continuous-wave wallplug efficiency at cryogenic temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Canedy, C. L.; Kim, C. S.; Merritt, C. D.

    2015-09-21

    Broad-area 10-stage interband cascade lasers (ICLs) emitting at λ = 3.0–3.2 μm are shown to maintain continuous-wave (cw) wallplug efficiencies exceeding 40% at temperatures up to 125 K, despite having a design optimized for operation at ambient temperature and above. The cw threshold current density at 80 K is only 11 A/cm² for a 2 mm cavity with anti-reflection/high-reflection coatings on the two facets. The external differential quantum efficiency for a 1-mm-long cavity with the same coatings is 70% per stage at 80 K, and still above 65% at 150 K. The results demonstrate that at cryogenic temperatures, where free carrier absorption losses are minimized, ICLs can convert electrical to optical energy nearly as efficiently as the best specially designed intersubband-based quantum cascade lasers.

  4. Bioremediation of storage tank bottom sludge by using a two-stage composting system: Effect of mixing ratio and nutrients addition.

    PubMed

    Koolivand, Ali; Rajaei, Mohammad Sadegh; Ghanadzadeh, Mohammad Javad; Saeedi, Reza; Abtahi, Hamid; Godini, Kazem

    2017-07-01

    The effect of mixing ratio and nutrients addition on the efficiency of a two-stage composting system in removal of total petroleum hydrocarbons (TPH) from storage tank bottom sludge (STBS) was investigated. The system consisted of ten windrow piles as primary composting (PC) followed by four in-vessel reactors as secondary composting (SC). Various initial C/N/P and mixing ratios of STBS to immature compost (IC) were examined in the PC and SC for 12 and 6 weeks, respectively. The removal rates of TPH in the two-stage system (93.72-95.24%) were higher than those in the single-stage one. Depending on the experiments, TPH biodegradation fitted the first- and second-order kinetics with rate constants of 0.051-0.334 d-1 and 0.002-0.165 g kg-1 d-1, respectively. The bacteria identified were Pseudomonas sp., Bacillus sp., Klebsiella sp., Staphylococcus sp., and Proteus sp. The study verified that a two-stage composting system is effective in treating the STBS. Copyright © 2017 Elsevier Ltd. All rights reserved.
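    The two kinetic forms quoted above can be evaluated directly. The initial TPH load and the choice of the lower-bound rate constants below are illustrative assumptions, not data from the study:

```python
import math

# TPH biodegradation kinetics as in the abstract. Concentrations are taken in
# g TPH per kg of compost mixture; c0 and the chosen k values are illustrative.

def tph_first_order(c0, k, t):
    """First-order decay: C(t) = C0 * exp(-k*t), with k in d^-1 and t in days."""
    return c0 * math.exp(-k * t)

def tph_second_order(c0, k, t):
    """Second-order decay: 1/C(t) = 1/C0 + k*t, i.e. C(t) = C0 / (1 + k*C0*t)."""
    return c0 / (1.0 + k * c0 * t)

c0, days = 50.0, 12 * 7  # hypothetical initial load over the 12-week PC stage
for name, model, k in [("first-order", tph_first_order, 0.051),
                       ("second-order", tph_second_order, 0.002)]:
    c = model(c0, k, days)
    print(f"{name}: {100.0 * (1 - c / c0):.1f}% TPH removed")
```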

  5. Generation of gene-modified goats targeting MSTN and FGF5 via zygote injection of CRISPR/Cas9 system

    PubMed Central

    Wang, Xiaolong; Yu, Honghao; Lei, Anmin; Zhou, Jiankui; Zeng, Wenxian; Zhu, Haijing; Dong, Zhiming; Niu, Yiyuan; Shi, Bingbo; Cai, Bei; Liu, Jinwang; Huang, Shuai; Yan, Hailong; Zhao, Xiaoe; Zhou, Guangxian; He, Xiaoling; Chen, Xiaoxu; Yang, Yuxin; Jiang, Yu; Shi, Lei; Tian, Xiue; Wang, Yongjun; Ma, Baohua; Huang, Xingxu; Qu, Lei; Chen, Yulin

    2015-01-01

    Recent advances in the study of the CRISPR/Cas9 system have provided a precise and versatile approach for genome editing in various species. However, the applicability and efficiency of this method in large animal models, such as the goat, have not been extensively studied. Here, by co-injection of one-cell stage embryos with Cas9 mRNA and sgRNAs targeting two functional genes (MSTN and FGF5), we successfully produced gene-modified goats with either one or both genes disrupted. The targeting efficiency of MSTN and FGF5 in cultured primary fibroblasts was as high as 60%, while the efficiency of disrupting MSTN and FGF5 in 98 tested animals was 15% and 21%, respectively, and 10% for double gene modifications. The on- and off-target mutations of the target genes in fibroblasts, as well as in somatic tissues and testes of founder and dead animals, were carefully analyzed. The results showed that simultaneous editing of several sites can be achieved in large animals, demonstrating that the CRISPR/Cas9 system has the potential to become a robust and efficient gene-engineering tool in farm animals, and will therefore be critically important and applicable for breeding. PMID:26354037

  6. Structure optimisation by thermal cycling for the hydrophobic-polar lattice model of protein folding

    NASA Astrophysics Data System (ADS)

    Günther, Florian; Möbius, Arnulf; Schreiber, Michael

    2017-03-01

    The function of a protein depends strongly on its spatial structure. The transition from an unfolded state to the functional fold is therefore one of the most important problems in computational molecular biology. Since the corresponding free-energy landscapes exhibit huge numbers of local minima, the search for the lowest-energy configurations is very demanding, and efficient heuristic algorithms are of high value. In the present work, we investigate whether and how the thermal cycling (TC) approach can be applied to the hydrophobic-polar (HP) lattice model of protein folding. Evaluating the efficiency of TC for a set of two- and three-dimensional examples, we compare the performance of this strategy with that of multi-start local search (MSLS) procedures and that of simulated annealing (SA). To this end, we incorporated several simple but rather efficient modifications into the standard procedures: in particular, a strong improvement was achieved by also allowing energy-conserving state modifications. Furthermore, considering ensembles instead of single samples was found to greatly improve the efficiency of TC. Across different benchmarks, for all considered HP sequences, we found TC to be far superior to SA and faster than Wang-Landau sampling.
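    For readers unfamiliar with the HP lattice model, the objective that TC, SA, and MSLS all minimize is the hydrophobic contact energy. Below is a minimal 2D version; `hp_energy` and the example fold are an illustrative sketch, not the authors' code:

```python
# Energy of a conformation in the 2D HP lattice model: each pair of
# non-consecutive H monomers sitting on adjacent lattice sites contributes -1.
def hp_energy(sequence, coords):
    """sequence: string of 'H'/'P'; coords: list of (x, y) lattice positions."""
    assert len(sequence) == len(coords)
    assert len(set(coords)) == len(coords)  # self-avoiding walk
    e = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):  # skip chain neighbours (i, i+1)
            if sequence[i] == 'H' == sequence[j]:
                (xi, yi), (xj, yj) = coords[i], coords[j]
                if abs(xi - xj) + abs(yi - yj) == 1:  # lattice contact
                    e -= 1
    return e

# A 2x2 square fold of "HHHH": the chain ends are lattice neighbours,
# giving one H-H contact.
print(hp_energy("HHHH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # → -1
```

A heuristic such as TC or SA would repeatedly propose moves (e.g. corner flips or pivot moves) on `coords` and accept or reject them according to this energy.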

  7. Single phase bi-directional AC-DC converter with reduced passive components size and common mode electro-magnetic interference

    DOEpatents

    Mi, Chris; Li, Siqi

    2017-01-31

    A bidirectional AC-DC converter is presented with reduced passive component size and common mode electro-magnetic interference. The converter includes an improved input stage formed by two coupled differential inductors, two coupled common and differential inductors, one differential capacitor and two common mode capacitors. With this input structure, the volume, weight and cost of the input stage can be reduced greatly. Additionally, the input current ripple and common mode electro-magnetic interference can be greatly attenuated, so lower switching frequency can be adopted to achieve higher efficiency.

  8. 160 W 800 fs Yb:YAG single crystal fiber amplifier without CPA.

    PubMed

    Markovic, Vesna; Rohrbacher, Andreas; Hofmann, Peter; Pallmann, Wolfgang; Pierrot, Simonette; Resan, Bojan

    2015-10-05

    We demonstrate a compact and simple two-stage Yb:YAG single crystal fiber (SCF) amplifier which delivers 160 W average power and 800 fs pulses without chirped pulse amplification. This is the highest average power reported for a femtosecond laser based on an SCF. Additionally, we demonstrate the highest small-signal gain of 32.5 dB from the SCF in the first stage and the highest extraction efficiency of 42% in the second stage. The excellent performance of the second stage was obtained using a bidirectional pumping scheme, applied to an SCF for the first time.
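    The gain and efficiency figures quoted above follow the usual definitions; a small sketch (function names and the seed/pump powers in the example are our assumptions, not values from the paper):

```python
import math

def gain_db(p_in_w, p_out_w):
    """Small-signal power gain in decibels: 10*log10(P_out / P_in)."""
    return 10.0 * math.log10(p_out_w / p_in_w)

def extraction_efficiency(p_out_w, p_in_w, p_pump_w):
    """Optical-to-optical extraction efficiency: power added over pump power."""
    return (p_out_w - p_in_w) / p_pump_w

# A 32.5 dB small-signal gain corresponds to a power ratio of roughly 1780x.
ratio = 10 ** (32.5 / 10)
print(round(ratio))  # → 1778

# Made-up illustration: 20 W seed amplified to 160 W with 333 W of pump.
print(extraction_efficiency(160.0, 20.0, 333.0))
```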

  9. Initial Business Case Analysis of Two Integrated Heat Pump HVAC Systems for Near-Zero-Energy Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baxter, Van D

    2006-11-01

    The long range strategic goal of the Department of Energy's Building Technologies (DOE/BT) Program is to create, by 2020, technologies and design approaches that enable the construction of net-zero energy homes at low incremental cost (DOE/BT 2005). A net zero energy home (NZEH) is a residential building with greatly reduced needs for energy through efficiency gains, with the balance of energy needs supplied by renewable technologies. While initially focused on new construction, these technologies and design approaches are intended to apply to buildings constructed before 2020 as well, resulting in substantial reductions in energy use for all building types and ages. DOE/BT's Emerging Technologies (ET) team is working to support this strategic goal by identifying and developing advanced heating, ventilating, air-conditioning, and water heating (HVAC/WH) technology options applicable to NZEHs. Although the energy efficiency of heating, ventilating, and air-conditioning (HVAC) equipment has increased substantially in recent years, new approaches are needed to continue this trend. Dramatic efficiency improvements are necessary to enable progress toward the NZEH goals, and will require a radical rethinking of opportunities to improve system performance. The large reductions in HVAC energy consumption necessary to support the NZEH goals require a systems-oriented analysis approach that characterizes each element of energy consumption, identifies alternatives, and determines the most cost-effective combination of options. In particular, HVAC equipment must be developed that addresses the range of special needs of NZEH applications in the areas of reduced HVAC and water heating energy use, humidity control, ventilation, uniform comfort, and ease of zoning.
In FY05 ORNL conducted an initial Stage 1 (Applied Research) scoping assessment of HVAC/WH system options for future NZEHs to help DOE/BT identify and prioritize alternative approaches for further development. Eleven system concepts with central air distribution ducting and nine multi-zone systems were selected, and their annual and peak demand performance was estimated for five locations: Atlanta (mixed-humid), Houston (hot-humid), Phoenix (hot-dry), San Francisco (marine), and Chicago (cold). Performance was estimated by simulating the systems using the TRNSYS simulation engine (Solar Energy Laboratory et al. 2006) in two 1800-ft² houses: a Building America (BA) benchmark house and a prototype NZEH taken from BEopt results at the take-off (or crossover) point (i.e., a house incorporating those design features such that further progress towards ZEH is through the addition of photovoltaic power sources, as determined by current BEopt analyses conducted by NREL). Results were summarized in a project report, 'HVAC Equipment Design options for Near-Zero-Energy Homes--A Stage 2 Scoping Assessment,' ORNL/TM-2005/194 (Baxter 2005). The 2005 study report describes the HVAC options considered, the ranking criteria used, and the system rankings by priority. Table 1 summarizes the energy savings potential of the highest scoring options from the 2005 study for all five locations.

  10. Asymmetric Anterior Distraction for Transversely Distorted Maxilla and Midfacial Anteroposterior Deficiency in a Patient With Cleft Lip/Palate: Two-Stage Surgical Approach.

    PubMed

    Hirata, Kae; Tanikawa, Chihiro; Aikawa, Tomonao; Ishihama, Kohji; Kogo, Mikihiko; Iida, Seiji; Yamashiro, Takashi

    2016-07-01

    The present report describes a male patient with a unilateral cleft lip and palate who presented with midfacial anteroposterior and transverse deficiency. Correction involved a two-stage surgical-orthodontic approach: asymmetric anterior distraction of the segmented maxilla followed by two-jaw surgery (LeFort I and bilateral sagittal splitting ramus osteotomies). The present case demonstrates that the asymmetric elongation of the maxilla with anterior distraction is an effective way to correct a transversely distorted alveolar form and midfacial anteroposterior deficiency. Furthermore, successful tooth movement was demonstrated in the new bone created by distraction.

  11. Financial Distress Prediction Using Discrete-time Hazard Model and Rating Transition Matrix Approach

    NASA Astrophysics Data System (ADS)

    Tsai, Bi-Huei; Chang, Chih-Huei

    2009-08-01

    Previous studies used a constant cut-off indicator to distinguish distressed firms from non-distressed ones in one-stage prediction models. However, the distressed cut-off indicator must shift with economic conditions rather than remaining fixed. This study focuses on Taiwanese listed firms and develops financial distress prediction models based on a two-stage method. First, this study employs firm-specific financial ratios and market factors to measure the probability of financial distress based on discrete-time hazard models. Second, this paper focuses on macroeconomic factors and applies a rating transition matrix approach to determine the distressed cut-off indicator. The prediction models are developed using the training sample from 1987 to 2004, and their levels of accuracy are compared on the test sample from 2005 to 2007. For the one-stage prediction model, the model incorporating macroeconomic factors does not perform better than the one without them, suggesting that accuracy is not improved for one-stage models that pool firm-specific and macroeconomic factors together. Regarding the two-stage models, the negative credit cycle index implies worse economic conditions during the test period, so the distressed cut-off point is adjusted upward accordingly. After the two-stage models employ this adjusted cut-off point to discriminate distressed firms from non-distressed ones, their misclassification error is lower than that of the one-stage models. The two-stage models presented in this paper thus have incremental usefulness in predicting financial distress.
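    A stylized sketch of the two-stage idea: a discrete-time hazard (logistic) score in the first stage, then a cut-off shifted by the credit cycle index in the second. The coefficients, the linear cut-off adjustment, and `sensitivity` are illustrative stand-ins for the paper's rating-transition-matrix machinery, not its actual model:

```python
import math

def hazard_probability(beta, x):
    """Discrete-time hazard: P(distress in period t | survived to t) = logistic(beta . x)."""
    z = sum(b * v for b, v in zip(beta, x))
    return 1.0 / (1.0 + math.exp(-z))

def classify(p_distress, base_cutoff, credit_cycle_index, sensitivity=0.5):
    """Second stage (stylized): shift the cut-off against the credit cycle index.
    A negative index (downturn) raises the cut-off, as described in the abstract."""
    cutoff = base_cutoff - sensitivity * credit_cycle_index
    return "distressed" if p_distress >= cutoff else "non-distressed"

# Hypothetical coefficients on [intercept, leverage, excess return]:
beta = [-3.0, 4.0, -2.0]
p = hazard_probability(beta, [1.0, 0.8, -0.1])
print(classify(p, base_cutoff=0.5, credit_cycle_index=-0.2))
```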

  12. Task Scheduling in Desktop Grids: Open Problems

    NASA Astrophysics Data System (ADS)

    Chernov, Ilya; Nikitina, Natalia; Ivashko, Evgeny

    2017-12-01

    We survey the areas of Desktop Grid task scheduling that seem to be insufficiently studied so far and are promising for efficiency, reliability, and quality of Desktop Grid computing. These topics include optimal task grouping, "needle in a haystack" paradigm, game-theoretical scheduling, domain-imposed approaches, special optimization of the final stage of the batch computation, and Enterprise Desktop Grids.

  13. Public purchasers contracting external primary care providers in Central America for better responsiveness, efficiency of health care and public governance: issues and challenges.

    PubMed

    Macq, Jean; Martiny, Patrick; Villalobos, Luis Bernardo; Solis, Alejandro; Miranda, Jose; Mendez, Hilda Cecilia; Collins, Charles

    2008-09-01

    Several national health systems in Latin America initiated health reforms to counter widespread criticisms of low equity and efficiency. For public purchasing agencies, these reforms often consisted in contracting external providers for primary care provision. This paper aims to clarify the complex and intertwined issues characterizing such contracting, as well as health system performance, in the context of four Central American countries. It results from a European Commission-financed project led between 2002 and 2005, involving participants from Costa Rica, Guatemala, Nicaragua, El Salvador, the United Kingdom, the Netherlands and Belgium, whose aim was to promote exchanges between these participants. The findings presented in this paper are the results of a two-stage process: (a) the design of an initial analytical framework, built upon findings from the literature, linking characteristics of the contractual relation with health system performance criteria, and (b) the use of that framework in four case studies to identify cross-cutting issues. This paper reinforces two pivotal findings: (a) contracting requires not only technical but also political choices, and (b) it cannot be considered a mechanical process. The unpredictability of its evolution requires a flexible and reactive approach. This should be better assimilated by national and international organizations involved in health services provision, so as to progressively move away from dogmatic approaches when deciding to initiate contractual relations with external providers for primary care provision.

  14. Two-step single slope/SAR ADC with error correction for CMOS image sensor.

    PubMed

    Tang, Fang; Bermak, Amine; Amira, Abbes; Amor Benammar, Mohieddine; He, Debiao; Zhao, Xiaojin

    2014-01-01

    Conventional two-step ADCs for CMOS image sensors require full-resolution noise performance in the first-stage single slope ADC, leading to high power consumption and large chip area. This paper presents an 11-bit two-step single slope/successive approximation register (SAR) ADC scheme for CMOS image sensor applications. The first-stage single slope ADC generates 3 bits of data and 1 redundant bit. The redundant bit is combined with the following 8-bit SAR ADC output code using a proposed error correction algorithm. Instead of requiring full-resolution noise performance, the first-stage single slope circuit of the proposed ADC can tolerate up to 3.125% quantization noise. With the proposed error correction mechanism, the power consumption and chip area of the single slope ADC are significantly reduced. The prototype ADC is fabricated using 0.18 μm CMOS technology. The chip area of the proposed ADC is 7 μm × 500 μm. The measurement results show that the energy efficiency figure-of-merit (FOM) of the proposed ADC core is only 125 pJ/sample under a 1.4 V power supply, and the chip area efficiency is 84 k μm²·cycles/sample.
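    The redundant-bit recombination can be modeled digitally as below. This is a generic two-step correction sketch consistent with the description (3-bit coarse, 8-bit SAR plus one redundant bit, 11-bit output); it is not the authors' circuit, and the exact tolerance budget may differ from theirs:

```python
def two_step_adc(x, coarse_error=0):
    """Digitize x (0..2047) in two steps. coarse_error models first-stage noise
    in analog LSBs of the 11-bit scale; thanks to the redundant bit, errors up
    to roughly +/- half a coarse step (+/-128 codes) are corrected digitally."""
    coarse = max(0, min(7, (x + coarse_error) >> 8))  # noisy 3-bit coarse decision
    residue = x - coarse * 256 + 128                  # analog residue, offset by half a step
    fine = max(0, min(511, residue))                  # 8-bit SAR + 1 redundant bit (9-bit range)
    return coarse * 256 + fine - 128                  # digital error correction

# With a deliberate coarse error, the corrected output still matches the input:
print(two_step_adc(1000))        # → 1000
print(two_step_adc(1000, 100))   # → 1000
print(two_step_adc(1000, -100))  # → 1000
```

The key point is that the fine stage digitizes a residue range 1.5x wider than a coarse step, so an imprecise coarse comparison is absorbed rather than producing a missing-code error.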

  15. Study on a high capacity two-stage free piston Stirling cryocooler working around 30 K

    NASA Astrophysics Data System (ADS)

    Wang, Xiaotao; Zhu, Jian; Chen, Shuai; Dai, Wei; Li, Ke; Pang, Xiaomin; Yu, Guoyao; Luo, Ercang

    2016-12-01

    This paper presents a high-capacity two-stage free-piston Stirling cryocooler driven by a linear compressor, designed to meet the requirements of high-temperature superconductor (HTS) motor applications. The cryocooler system comprises a single-piston linear compressor, a two-stage free-piston Stirling cryocooler, and a passive oscillator. A single stepped-displacer configuration was adopted. A numerical model based on thermoacoustic theory was used to optimize the system's operating and structural parameters. Distributions of the pressure wave, the phase differences between pressure wave and volume flow rate, and the different energy flows are presented for a better understanding of the system. Characteristic experimental results are also presented. Thus far, the cryocooler has reached a minimum cold-head temperature of 27.6 K and achieved a cooling power of 78 W at 40 K with an input electric power of 3.2 kW, which corresponds to a relative Carnot efficiency of 14.8%. When the cold-head temperature increased to 77 K, the cooling power reached 284 W with a relative Carnot efficiency of 25.9%. The influences of parameters such as mean pressure, input electric power, and cold-head temperature are also investigated.
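    Relative Carnot efficiency, as used above, is the measured coefficient of performance divided by the Carnot COP. A sketch follows; the warm-end temperature is our assumption (the abstract does not state it), which is why the computed value only approximates the quoted 14.8%:

```python
def relative_carnot_efficiency(q_cold_w, p_input_w, t_cold_k, t_warm_k=300.0):
    """Measured COP relative to the Carnot limit: (Qc/Pin) / (Tc / (Th - Tc)).
    t_warm_k is an assumed warm-end (heat rejection) temperature."""
    cop = q_cold_w / p_input_w
    cop_carnot = t_cold_k / (t_warm_k - t_cold_k)
    return cop / cop_carnot

# Reported point: 78 W of cooling at 40 K for 3.2 kW electric input.
# The abstract quotes 14.8%; the exact figure depends on the warm-end
# reference temperature used.
print(f"{100 * relative_carnot_efficiency(78, 3200, 40):.1f} %")
```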

  16. Impact of Financial Liberalization on Banking Sectors Performance from Central and Eastern European Countries

    PubMed Central

    Andries, Alin Marius; Capraru, Bogdan

    2013-01-01

    In this paper we analyse the impact of financial liberalization and reforms on banking performance in 17 CEE countries for the period 2004–2008, using a two-stage empirical model that estimates bank performance in the first stage and assesses its determinants in the second. Our analysis shows that banks from CEE countries with a higher level of liberalization and openness are able to increase cost efficiency and eventually to offer cheaper services to clients. Banks from non-EU-member countries are less cost efficient but experienced much higher total productivity growth, and large-sized banks are much more cost efficient than medium and small banks, while small-sized banks show the highest productivity growth. PMID:23555745

  17. Impact of financial liberalization on banking sectors performance from central and eastern European countries.

    PubMed

    Andries, Alin Marius; Capraru, Bogdan

    2013-01-01

    In this paper we analyse the impact of financial liberalization and reforms on banking performance in 17 CEE countries for the period 2004-2008, using a two-stage empirical model that estimates bank performance in the first stage and assesses its determinants in the second. Our analysis shows that banks from CEE countries with higher levels of liberalization and openness are able to increase cost efficiency and eventually to offer cheaper services to clients. Banks from non-EU-member countries are less cost efficient but experienced much higher growth in total productivity; large banks are much more cost efficient than medium and small banks, while small banks show the highest productivity growth.

  18. Accounting for standard errors of vision-specific latent trait in regression models.

    PubMed

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

    To demonstrate the effectiveness of a Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for the SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analyses performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and a multiple linear regression model for the assessment of association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; in our simulation study it was compared with the frequently used two-stage "separate-analysis" approach (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration the SEs of the estimated Rasch-scaled scores. The two models applied to real data differed in effect size estimates and in the identification of "independent risk factors." Simulation results showed that the proposed HB one-stage "joint-analysis" approach produces greater accuracy (an average 5-fold decrease in bias) with comparable power and precision in estimating associations, compared with the frequently used two-stage "separate-analysis" procedure, despite accounting for greater uncertainty due to the latent trait. Analyses of patient-reported data using Rasch techniques typically do not take into account the SE of the latent trait in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimates and information about the independent association of exposure variables with vision-specific latent traits. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
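
    The bias the abstract describes is classical measurement-error attenuation: regressing an outcome on a noisy latent-trait estimate shrinks the slope toward zero. A simulated sketch (all numbers illustrative, not the study's data) showing what the two-stage "separate analysis" loses:

```python
# Classical attenuation: regressing on a noisy Rasch score (theta_hat) instead
# of the true latent trait (theta) biases the slope toward zero. The one-stage
# joint model aims to recover the slope the "oracle" regression would see.
import random

random.seed(0)
n = 20000
beta_true = 0.8
se_latent = 1.0  # assumed measurement SE of the Rasch-scaled score

theta = [random.gauss(0, 1) for _ in range(n)]               # true latent trait
theta_hat = [t + random.gauss(0, se_latent) for t in theta]  # noisy estimate
y = [beta_true * t + random.gauss(0, 0.5) for t in theta]    # outcome

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

naive = ols_slope(theta_hat, y)   # two-stage "separate analysis"
oracle = ols_slope(theta, y)      # target a joint model aims to recover
print(f"true slope {beta_true}, oracle {oracle:.2f}, naive {naive:.2f}")
```

    With unit-variance trait and unit measurement SE the naive slope attenuates by about half, which is the kind of effect-size distortion the HB joint model corrects.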

  19. Two-stage color palettization for error diffusion

    NASA Astrophysics Data System (ADS)

    Mitra, Niloy J.; Gupta, Maya R.

    2002-06-01

    Image-adaptive color palettization chooses a reduced number of colors to represent an image. Palettization is one way to decrease storage and memory requirements for low-end displays. Palettization is generally approached as a clustering problem, where one attempts to find the k palette colors that minimize the average distortion over all the colors in an image. This would be the optimal approach if the image were displayed with each pixel quantized to the closest palette color. However, to improve image quality, palettization may be followed by error diffusion. In this work, we propose a two-stage palettization where the first stage finds some m << k clusters, and the second stage chooses palette points that cover the spread of each of the m clusters. After error diffusion, this method leads to better image quality at lower computational cost and faster display speed than full k-means palettization.
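
    A minimal sketch of the two-stage idea: stage 1 runs a coarse k-means with m << k clusters; stage 2 spreads k/m palette points per cluster to cover that cluster's spread. The stage-2 rule below (interpolating along the segment between a cluster's two most separated member colors) is one illustrative choice, not the paper's exact method:

```python
# Two-stage palettization sketch: coarse k-means (stage 1), then k/m palette
# points per cluster covering its spread (stage 2, illustrative rule).
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def kmeans(points, m, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, m)
    for _ in range(iters):
        groups = [[] for _ in range(m)]
        for p in points:
            groups[min(range(m), key=lambda j: dist2(p, centers[j]))].append(p)
        centers = [mean(g) if g else c for g, c in zip(groups, centers)]
    return centers, groups

def two_stage_palette(pixels, m, k):
    _, groups = kmeans(pixels, m)
    per = max(1, k // m)          # palette points allotted to each cluster
    palette = []
    for g in groups:
        if not g:
            continue
        center = mean(g)
        a = max(g, key=lambda p: dist2(p, center))  # extreme member color
        b = max(g, key=lambda p: dist2(p, a))       # farthest color from it
        for t in range(per):      # spread points along the a-b segment
            f = t / max(1, per - 1)
            palette.append(tuple(x + f * (y - x) for x, y in zip(a, b)))
    return palette

rng = random.Random(1)
pixels = [(rng.random(), rng.random(), rng.random()) for _ in range(500)]
palette = two_stage_palette(pixels, m=4, k=16)
print(len(palette), "palette colors")
```

    Because stage 1 clusters only m centers and stage 2 is linear in cluster size, the total cost is well below a full k-means with k centers.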

  20. [Effects of postponed basal nitrogen application with reduced nitrogen rate on grain yield and nitrogen use efficiency of south winter wheat].

    PubMed

    Zhang, Lei; Shao, Yu Hang; Gu, Shi Lu; Hu, Hang; Zhang, Wei Wei; Tian, Zhong Wei; Jiang, Dong; Dai, Ting Bo

    2016-12-01

    Excessive nitrogen (N) fertilizer application has reduced nitrogen use efficiency and caused environmental problems. Reducing N fertilizer application through modified application strategies is therefore of great significance for high-yield, high-efficiency cultivation. A two-year field experiment was conducted to study the effects of different N application rates at basal and seedling application stages on grain yield and nitrogen use efficiency. Taking the conventional nitrogen application practice (240 kg N·hm-2 applied at the basal, jointing, and booting stages in a 5:3:2 ratio) as the control, the trial combined different N application rates (240, 180 and 150 kg N·hm-2, denoted N240, N180 and N150) with different basal application times (basal (L0), fourth-leaf (L4) and sixth-leaf (L6) stages) to investigate the effects on grain yield and nitrogen use efficiency. The results indicated that grain yield decreased with reduced N application rate, but the difference between N240 and N180 was not significant, while yield decreased significantly under N150. Nitrogen agronomic efficiency and recovery efficiency were both highest under N180. Among the N application stages, grain yield and nitrogen use efficiency were highest under L4. N180L4 showed no significant difference from the control in grain yield, but its nitrogen use efficiency was significantly higher. The leaf area index, flag leaf photosynthetic rate, leaf nitrogen content, activities of nitrate reductase and glutamine synthetase in the flag leaf, and dry matter and N accumulation after jointing under N180L4 showed no significant differences from the control. Overall, postponing basal N fertilizer application at a reduced nitrogen rate could maintain high yield and improve nitrogen use efficiency by improving photosynthetic production capacity and promoting nitrogen uptake and assimilation.

  1. Efficient and lightweight current leads

    NASA Astrophysics Data System (ADS)

    Bromberg, L.; Dietz, A. J.; Michael, P. C.; Gold, C.; Cheadle, M.

    2014-01-01

    Current leads generate substantial cryogenic heat loads in short-length High Temperature Superconductor (HTS) distribution systems. Thermal conduction, together with Joule (I2R) losses along the current leads, comprises the largest cryogenic load for short distribution systems. Current leads with two temperature stages have been designed, constructed and tested, with the goals of minimizing electrical power consumption and providing thermal margin for the cable. We present the design of a two-stage current lead system operating at 140 K and 55 K. This design is very attractive when implemented with a two-stage turbo-Brayton cycle refrigerator, offering substantial power and weight reduction. A heat exchanger is used at each temperature station, with conduction-cooled stages in between. Compact, efficient heat exchangers are challenging because of the gaseous coolant. Design, optimization and performance of the heat exchangers used for the current leads will be presented. We have made extensive use of CFD models to optimize the hydraulic and thermal performance of the heat exchangers. The methodology and the results of the optimization process will be discussed. The use of demountable connections between the cable and the terminations allows for ease of assembly but requires means of aggressively cooling the region of the joint; we will also discuss the cooling of the joint. We have fabricated a 7 m, 5 kA cable with second-generation HTS tapes. The performance of the system will be described.

  2. Biogas production of Chicken Manure by Two-stage fermentation process

    NASA Astrophysics Data System (ADS)

    Liu, Xin Yuan; Wang, Jing Jing; Nie, Jia Min; Wu, Nan; Yang, Fang; Yang, Ren Jie

    2018-06-01

    This paper reports a batch experiment on pre-acidification treatment and methane production from chicken manure by a two-stage anaerobic fermentation process. Results show that acetate was the main component of the volatile fatty acids produced at the end of the pre-acidification stage, accounting for 68% of the total amount. Daily biogas production went through three peak periods in the methane production stage; the methane content reached 60% in the second period and then slowly decreased to 44.5% in the third. The cumulative methane production was fitted with the modified Gompertz equation, and the kinetic parameters (methane production potential, maximum methane production rate and lag phase time) were 345.2 ml, 0.948 ml/h and 343.5 h, respectively. A methane yield of 183 ml CH4/g VS removed during the methane production stage and a VS removal efficiency of 52.7% for the whole fermentation process were achieved.
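
    The modified Gompertz model behind those fitted parameters is M(t) = P·exp(-exp(Rmax·e/P·(λ - t) + 1)). Evaluating it with the abstract's values (P = 345.2 mL, Rmax = 0.948 mL/h, λ = 343.5 h) shows the long lag phase and the approach to the production potential:

```python
# Modified Gompertz model for cumulative methane production, using the fitted
# parameters reported in the abstract: P (potential), Rmax (maximum rate),
# lam (lag phase time).
import math

def gompertz(t, P=345.2, Rmax=0.948, lam=343.5):
    # M(t) = P * exp(-exp(Rmax*e/P * (lam - t) + 1))
    return P * math.exp(-math.exp(Rmax * math.e / P * (lam - t) + 1))

for t in (0.0, 343.5, 600.0, 1200.0):
    print(f"t = {t:6.1f} h  ->  {gompertz(t):6.1f} mL CH4")
```

    Production is negligible until the lag time λ, then accelerates and saturates toward P = 345.2 mL.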

  3. Forecasting resource-allocation decisions under climate uncertainty: fire suppression with assessment of net benefits of research

    Treesearch

    Jeffrey P. Prestemon; Geoffrey H. Donovan

    2008-01-01

    Making input decisions under climate uncertainty often involves two-stage methods that use expensive and opaque transfer functions. This article describes an alternative, single-stage approach to such decisions using forecasting methods. The example shown is for preseason fire suppression resource contracting decisions faced by the United States Forest Service. Two-...

  4. Within-Plant Distribution of Adult Brown Stink Bug (Hemiptera: Pentatomidae) in Corn and Its Implications on Stink Bug Sampling and Management in Corn.

    PubMed

    Babu, Arun; Reisig, Dominic D

    2018-05-29

    Brown stink bug, Euschistus servus (Say) (Hemiptera: Pentatomidae), has emerged as a significant pest of corn, Zea mays L., in the southeastern United States. A 2-year study was conducted to quantify the within-plant vertical distribution of adult E. servus in field corn, to examine plant phenological characteristics potentially associated with the observed distribution, and to select an efficient partial-plant sampling method for estimating adult E. servus populations. The within-plant distribution of adult E. servus was influenced by corn phenology. On V4- and V6-stage corn, most individuals were found at the base of the plant. The mean relative vertical position of the adult E. servus population in corn plants trended upward between the V6 and V14 growth stages. During the reproductive corn growth stages (R1, R2, and R4), a majority of adult E. servus were concentrated around developing ears. Based on the multiple selection criteria, during the V4-V6 growth stages either the corn stalk below the lowest green leaf or the basal stratum method could be employed for efficient E. servus sampling. Similarly, in the reproductive growth stages (R1-R4), sampling the plant parts between two leaves above and three leaves below the primary ear leaf provided the most precise and cost-efficient method. The results from our study demonstrate that in the early vegetative and reproductive stages of corn, scouts can replace the current labor-intensive whole-plant search with a more efficient, targeted partial-plant sampling method for E. servus population estimation.

  5. Hospital efficiency and transaction costs: a stochastic frontier approach.

    PubMed

    Ludwig, Martijn; Groot, Wim; Van Merode, Frits

    2009-07-01

    The make-or-buy decision of organizations is an important issue in the transaction cost theory, but is usually not analyzed from an efficiency perspective. Hospitals frequently have to decide whether to outsource or not. The main question we address is: Is the make-or-buy decision affected by the efficiency of hospitals? A one-stage stochastic cost frontier equation is estimated for Dutch hospitals. The make-or-buy decisions of ten different hospital services are used as explanatory variables to explain efficiency of hospitals. It is found that for most services the make-or-buy decision is not related to efficiency. Kitchen services are an important exception to this. Large hospitals tend to outsource less, which is supported by efficiency reasons. For most hospital services, outsourcing does not significantly affect the efficiency of hospitals. The focus on the make-or-buy decision may therefore be less important than often assumed.

  6. Cooperative Monitoring Center Occasional Paper/18: Maritime Cooperation Between India and Pakistan: Building Confidence at Sea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    SIDDIQA-AGHA,AYESHA

    2000-11-01

    This paper discusses ways in which the navies of both India and Pakistan can cooperate on issues of maritime and naval significance. Although the militaries and navies of the two countries have traditionally seen each other as rivals, international economic developments make cooperation imperative. South Asia requires an approach that can alter the existing hostile images and perceptions. This can be achieved through developing an incremental approach towards confidence building that would allow consistency and help build confidence gradually. The aim is to make confidence building a sustainable activity that would help transform hostile images and build cooperative and nonhostile relationships. This paper proposes a five-step model to suggest what the two navies can do jointly to build confidence, with the ultimate goal of naval arms control. The steps include (1) the Signaling Stage to initiate communication between the two navies, (2) the Warming-Up Stage to build confidence through nonmilitary joint ventures, (3) the Handshake Stage to build confidence between the two navies through military joint ventures, (4) the Problem-Solving Stage to resolve outstanding disputes, and (5) the Final Nod Stage to initiate naval arms control. This model would employ communication, navigation, and remote sensing technologies to achieve success.

  7. The Two-Stage Examination: A Method to Assess Individual Competence and Collaborative Problem Solving in Medical Students

    PubMed Central

    Morton, David A.; Pippitt, Karly; Lamb, Sara; Colbert-Getz, Jorie M.

    2016-01-01

    Problem Effectively solving problems as a team under stressful conditions is central to medical practice; however, because summative examinations in medical education must test individual competence, they are typically solitary assessments. Approach Using two-stage examinations, in which students first answer questions individually (Stage 1) and then discuss them in teams prior to resubmitting their answers (Stage 2), is one method for rectifying this discordance. On the basis of principles of social constructivism, the authors hypothesized that two-stage examinations would lead to better retention of, specifically, items answered incorrectly at Stage 1. In fall 2014, they divided 104 first-year medical students into two groups of 52 students. Groups alternated each week between taking one- and two-stage examinations such that each student completed 6 one-stage and 6 two-stage examinations. The authors reassessed 61 concepts on a final examination and, using Wilcoxon signed-rank tests, compared performance for all concepts and for just those students initially missed, between Stages 1 and 2. Outcomes Final examination performance on all previously assessed concepts was not significantly different between the one- and two-stage conditions (P = .77); however, performance on only concepts that students initially answered incorrectly on a prior examination improved by 12% for the two-stage condition relative to the one-stage condition (P = .02, r = 0.17). Next Steps Team assessment may be most useful for assessing concepts students find difficult, as opposed to all content. More research is needed to determine whether these results apply to all medical school topics and student cohorts. PMID:27049544

  8. Health information technology vendor selection strategies and total factor productivity.

    PubMed

    Ford, Eric W; Huerta, Timothy R; Menachemi, Nir; Thompson, Mark A; Yu, Feliciano

    2013-01-01

    The aim of this study was to compare health information technology (HIT) adoption strategies' relative performance on hospital-level productivity measures. The American Hospital Association's Annual Survey and Healthcare Information and Management Systems Society Analytics for fiscal years 2002 through 2007 were used for this study. A two-stage approach is employed. First, a Malmquist model is specified to calculate hospital-level productivity measures. A logistic regression model is then estimated to compare the three HIT adoption strategies' relative performance on the newly constructed productivity measures. The HIT vendor selection strategy impacts the amount of technological change required of an organization but does not appear to have either a positive or adverse impact on technical efficiency or total factor productivity. The higher levels in technological change experienced by hospitals using the best of breed and best of suite HIT vendor selection strategies may have a more direct impact on the organization early on in the process. However, these gains did not appear to translate into either increased technical efficiency or total factor productivity during the period studied. Over a longer period, one HIT vendor selection strategy may yet prove to be more effective at improving efficiency and productivity.

  9. The Potential of a Cascaded TEG System for Waste Heat Usage in Railway Vehicles

    NASA Astrophysics Data System (ADS)

    Wilbrecht, Sebastian; Beitelschmidt, Michael

    2018-02-01

    This work focuses on the conceptual design and optimization of a near-series prototype of a high-power thermoelectric generator system (TEG system) for diesel-electric locomotives. Replacing the silencer in the exhaust line enables integration into existing vehicles. However, compliance with the technical and legal frameworks and the assembly space requirements is just as important as the limited exhaust back pressure, the high power density and the low life cycle costs. A special emphasis is given to the comparison of cascaded two-stage Bi2Te3 and Mg2Si0.4Sn0.6/MnSi1.81 modules with single-stage Bi2Te3 modules, both manufactured in lead-frame technology. In addition to the numerous, partly competing boundary conditions for use in rail vehicles, the additional degree of freedom from the cascaded thermoelectric modules (TEMs) is considered. The problem is investigated by coupling one-dimensional multi-domain simulations with an optimization framework using a genetic algorithm. The achievable electrical power of the single-stage system is significantly higher, at 3.2 kW, than that of the two-stage system (2.5 kW). Although the efficiency of the two-stage system is 44.2% higher than that of the single-stage system, the overall power output is 22.8% lower. This is because the lower power density and the smaller number of TEMs more than compensate for the better efficiency. Hence, the available installation space, and thus the power density, is a critical constraint for the design of TEG systems. Furthermore, for applications recovering exhaust gas enthalpy, the large temperature drop across the heat exchanger is characteristic and must be considered carefully within the design process.

  10. The Potential of a Cascaded TEG System for Waste Heat Usage in Railway Vehicles

    NASA Astrophysics Data System (ADS)

    Wilbrecht, Sebastian; Beitelschmidt, Michael

    2018-06-01

    This work focuses on the conceptual design and optimization of a near-series prototype of a high-power thermoelectric generator system (TEG system) for diesel-electric locomotives. Replacing the silencer in the exhaust line enables integration into existing vehicles. However, compliance with the technical and legal frameworks and the assembly space requirements is just as important as the limited exhaust back pressure, the high power density and the low life cycle costs. A special emphasis is given to the comparison of cascaded two-stage Bi2Te3 and Mg2Si0.4Sn0.6/MnSi1.81 modules with single-stage Bi2Te3 modules, both manufactured in lead-frame technology. In addition to the numerous, partly competing boundary conditions for use in rail vehicles, the additional degree of freedom from the cascaded thermoelectric modules (TEMs) is considered. The problem is investigated by coupling one-dimensional multi-domain simulations with an optimization framework using a genetic algorithm. The achievable electrical power of the single-stage system is significantly higher, at 3.2 kW, than that of the two-stage system (2.5 kW). Although the efficiency of the two-stage system is 44.2% higher than that of the single-stage system, the overall power output is 22.8% lower. This is because the lower power density and the smaller number of TEMs more than compensate for the better efficiency. Hence, the available installation space, and thus the power density, is a critical constraint for the design of TEG systems. Furthermore, for applications recovering exhaust gas enthalpy, the large temperature drop across the heat exchanger is characteristic and must be considered carefully within the design process.

  11. A Dynamic Approach to Addressing Observation-Minus-Forecast Mean Differences in a Land Surface Skin Temperature Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Draper, Clara; Reichle, Rolf; De Lannoy, Gabrielle; Scarino, Benjamin

    2015-01-01

    In land data assimilation, bias in the observation-minus-forecast (O-F) residuals is typically removed from the observations prior to assimilation by rescaling the observations to have the same long-term mean (and higher-order moments) as the corresponding model forecasts. Such observation rescaling approaches require a long record of observed and forecast estimates, and an assumption that the O-F mean differences are stationary. A two-stage observation bias and state estimation filter is presented as an alternative to observation rescaling that does not require a long data record or assume stationary O-F mean differences. The two-stage filter removes dynamic (nonstationary) estimates of the seasonal-scale O-F mean difference from the assimilated observations, allowing the assimilation to correct the model for synoptic-scale errors without adverse effects from observation biases. The two-stage filter is demonstrated by assimilating geostationary skin temperature (Tsk) observations into the Catchment land surface model. Global maps of the O-F mean differences are presented, and the two-stage filter is evaluated for one year over the Americas. The two-stage filter effectively removed the Tsk O-F mean differences; for example, the GOES-West O-F mean difference at 21:00 UTC was reduced from 5.1 K for a bias-blind assimilation to 0.3 K. Compared to independent in situ and remotely sensed Tsk observations, the two-stage assimilation reduced the unbiased root-mean-square difference (ubRMSD) of the modeled Tsk by 10% relative to the open-loop values.
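
    The core mechanism can be sketched with scalars: a slowly updated estimate of the O-F mean (the bias stage) is subtracted from each observation before a fast state update (the analysis stage). The gains and the synthetic 4 K observation bias below are illustrative assumptions, not the values used in the paper's system:

```python
# Two-stage bias/state filter sketch: stage 1 tracks the O-F mean with a slow
# gain; stage 2 applies a bias-corrected scalar analysis update.
import random

random.seed(2)
K = 0.5          # analysis gain (assumed)
K_bias = 0.02    # slow gain for the O-F mean estimate (assumed)
obs_bias = 4.0   # synthetic constant observation bias, in kelvin

truth, bias_hat = 290.0, 0.0
errors = []
for step in range(2000):
    truth += random.gauss(0, 0.1)                 # slowly varying true Tsk
    forecast = truth + random.gauss(0, 1.0)       # unbiased model forecast
    obs = truth + obs_bias + random.gauss(0, 0.5)  # biased observation
    o_minus_f = obs - forecast
    bias_hat += K_bias * (o_minus_f - bias_hat)    # stage 1: O-F mean estimate
    analysis = forecast + K * (obs - bias_hat - forecast)  # stage 2: state
    errors.append(analysis - truth)
mean_err = sum(errors[-500:]) / 500
print(f"estimated O-F bias {bias_hat:.2f} K, late analysis mean error {mean_err:.2f} K")
```

    Without the bias stage the analysis would carry a persistent error of roughly K times the observation bias; with it, the bias estimate converges toward 4 K and the late-time analysis error is near zero in the mean.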

  12. Performance of Single-Stage Turbine of Mark 25 Torpedo Power Plant with Two Nozzles and Three Rotor-Blade Designs

    NASA Technical Reports Server (NTRS)

    Schum, Harold J.; Whitney, Warren J.

    1949-01-01

    A single-stage modification of the turbine from a Mark 25 torpedo power plant was investigated to determine the performance with two nozzles and three rotor-blade designs. The performance was evaluated in terms of brake, rotor, and blade efficiencies at pressure ratios of 8, 15 (design), and 20. The blade efficiencies with the two nozzles are compared with those obtained with four other nozzles previously investigated with the same three rotor-blade designs. Blade efficiency with the cast nozzle of rectangular cross section (J) was higher than that with the circular reamed nozzle (K) at all speeds and pressure ratios with a rotor having 0.45-inch, 17-degree-inlet-angle blades. The efficiencies for both these nozzles were generally low compared with those of the four other nozzles previously investigated in combination with this rotor. At pressure ratios of 15 and 20, the blade efficiencies with nozzle K and the two rotors with 0.40-inch blades having different inlet angles were higher than with the four other nozzles, but the efficiency with nozzle J was generally low. Increasing the blade inlet angle from 17 degrees to 20 degrees had little effect on turbine performance, whereas changing the blade length from 0.40 to 0.45 inch had a marked effect. Although a slight correlation of efficiency with nozzle size was noted for the rotor with 0.45-inch, 17-degree-inlet-angle blades, no such effect was discernible for the two rotors with 0.40-inch blades. Losses in the supersonic air stream resulting from the complex flow path in the small air passages are probably a large percentage of the total losses, and apparently the effects of changing nozzle size and shape within the limits investigated are of secondary importance.

  13. Third-order 2N-storage Runge-Kutta schemes with error control

    NASA Technical Reports Server (NTRS)

    Carpenter, Mark H.; Kennedy, Christopher A.

    1994-01-01

    A family of four-stage third-order explicit Runge-Kutta schemes is derived that requires only two storage locations and has desirable stability characteristics. Error control is achieved by embedding a second-order scheme within the four-stage procedure. Certain schemes are identified that are as efficient and accurate as conventional embedded schemes of comparable order and require fewer storage locations.
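
    The 2N-storage pattern can be illustrated compactly: each stage updates only the solution u and a single accumulator s. The coefficients below are the classic three-stage, third-order Williamson low-storage set, used here purely to show the storage pattern; the paper's own schemes are four-stage with an embedded second-order error estimate:

```python
# 2N-storage Runge-Kutta (Williamson form): per stage, s = A*s + dt*f(u),
# then u = u + B*s. Only two storage locations (u and s) are needed.
# Coefficients: classic 3-stage, 3rd-order Williamson set (an illustrative
# stand-in for the paper's 4-stage schemes).
import math

A = [0.0, -5.0 / 9.0, -153.0 / 128.0]
B = [1.0 / 3.0, 15.0 / 16.0, 8.0 / 15.0]

def rk3_2n_step(f, u, dt):
    s = 0.0
    for a, b in zip(A, B):
        s = a * s + dt * f(u)   # accumulator reuse: the 2N-storage trick
        u = u + b * s
    return u

# Integrate u' = -u from t = 0 to t = 1 with 100 steps.
u, dt = 1.0, 0.01
for _ in range(100):
    u = rk3_2n_step(lambda x: -x, u, dt)
print(f"u(1) = {u:.8f}, exact = {math.exp(-1):.8f}")
```

    Expanding the three stages for a linear problem reproduces the third-order stability polynomial 1 - z + z²/2 - z³/6, confirming the order despite the two-register implementation.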

  14. Efficiency of photochemical stages of photosynthesis in purple bacteria (a critical survey).

    PubMed

    Borisov, A Yu

    2014-03-01

    Based on currently available data, the energy transfer efficiency of the successive photophysical and photochemical stages has been analyzed for purple bacteria. The analysis covers the stages from migration of light-induced electronic excitations from the bulk antenna pigments to the reaction centers, up to the irreversible stage of electron transport along the transmembrane chain of cofactor carriers. Several natural factors are identified that significantly increase the rates of the efficient processes in these stages. The influence of the "bottleneck" in the energy migration chain on their efficiency is established. The overall quantum yield of photosynthesis over these stages is determined.
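
    For sequential stages, the overall quantum yield is simply the product of the per-stage efficiencies. A tiny sketch of that bookkeeping; the stage values below are hypothetical placeholders, not the survey's measured numbers:

```python
# Overall quantum yield as a product of per-stage efficiencies.
# Stage values are hypothetical, chosen only to illustrate the arithmetic.
from math import prod

stage_yields = {
    "antenna -> reaction-center migration": 0.95,
    "primary charge separation": 0.98,
    "stabilization on the cofactor chain": 0.97,
}
overall = prod(stage_yields.values())
print(f"overall quantum yield ~ {overall:.3f}")
```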

  15. Development and manufacture of reactive-transfer-printed CIGS photovoltaic modules

    NASA Astrophysics Data System (ADS)

    Eldada, Louay; Sang, Baosheng; Lu, Dingyuan; Stanbery, Billy J.

    2010-09-01

    In recent years, thin-film photovoltaic (PV) companies have begun to realize their low manufacturing cost potential and to capture an increasing market share from multicrystalline silicon companies. Copper Indium Gallium Selenide (CIGS) is the most promising thin-film PV material, having demonstrated the highest energy conversion efficiency in both cells and modules. However, most CIGS manufacturers still face the challenge of delivering a reliable and rapid manufacturing process that can scale effectively and deliver on the promise of this material system. HelioVolt has developed a reactive transfer process for CIGS absorber formation that offers good compositional control, high-quality CIGS grains, and a fast reaction. The reactive transfer process is a two-stage CIGS fabrication method: precursor films are deposited onto substrates and reusable print plates in the first stage, while in the second stage the CIGS layer is formed by rapid heating with Se confinement. High-quality CIGS films with large grains were produced on a full-scale manufacturing line and resulted in high-efficiency, large-form-factor modules. With 14% cell efficiency and 12% module efficiency, HelioVolt has started to commercialize the process on its first production line with 20 MW nameplate capacity.

  16. 2D and 3D impellers of centrifugal compressors - advantages, shortcomings and fields of application

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Reksrin, A.; Drozdov, A.

    2017-08-01

    Simplified equations are presented for calculating the inlet dimensions and velocity values of impellers with three-dimensional blades extending into the axial part of the impeller (3D impellers) and with two-dimensional blades confined to the radial part (2D impellers). Considerations concerning the loss coefficients of 3D and 2D impellers at different design flow rate coefficients are given. The tendency for the potential advantages of 3D impellers to diminish at medium and small design flow rate coefficients is shown. Data on high-efficiency compressors and stages with 2D impellers designed by the authors are presented. The achieved efficiency level of 88-90% makes further efficiency gains from the application of 3D impellers doubtful. CFD analysis of candidate stages with medium flow rate coefficients and 3D or 2D impellers revealed specific problems. In some cases the constructive advantage of a 2D impeller is a smaller hub ratio, which makes it possible to reach higher efficiency. On the other hand, there is a positive tendency toward higher gas-turbine drive RPM. There is no alternative to 3D impellers for stages with high flow rate coefficients matched to a high-speed drive.

  17. Computational Assessment of the Aerodynamic Performance of a Variable-Speed Power Turbine for Large Civil Tilt-Rotor Application

    NASA Technical Reports Server (NTRS)

    Welch, Gerard E.

    2011-01-01

    The main rotors of the NASA Large Civil Tilt-Rotor notional vehicle operate over a wide speed range, from 100% at take-off to 54% at cruise. The variable-speed power turbine offers one approach by which to effect this speed variation. Key aerodynamic challenges include high work factors at cruise and wide (40 to 60 deg.) incidence variations in blade and vane rows over the speed range. The turbine design approach must optimize cruise efficiency and minimize off-design penalties at take-off. The accuracy of the off-design incidence loss model is therefore critical to the turbine design. In this effort, 3-D computational analyses are used to assess the variation of turbine efficiency with speed change. The conceptual design of a 4-stage variable-speed power turbine for the Large Civil Tilt-Rotor application is first established at the meanline level. The design of 2-D airfoil sections and the resulting 3-D blade and vane rows is documented. Three-dimensional Reynolds-Averaged Navier-Stokes computations are used to assess the design and off-design performance of an embedded 1.5-stage portion of the turbine (Rotor 1, Stator 2, and Rotor 2). The 3-D computational results yield the same efficiency-versus-speed trends predicted by the meanline analyses, supporting the design choice to execute the turbine design at the cruise operating speed.

  18. Offline signature verification using convolution Siamese network

    NASA Astrophysics Data System (ADS)

    Xing, Zi-Jian; Yin, Fei; Wu, Yi-Chao; Liu, Cheng-Lin

    2018-04-01

    This paper presents an offline signature verification approach using a convolutional Siamese neural network. Unlike existing methods, which treat feature extraction and metric learning as two independent stages, we adopt a deep-learning-based framework that combines the two stages and can be trained end-to-end. Experimental results on two public offline databases (GPDSsynthetic and CEDAR) demonstrate the superiority of our method on the offline signature verification problem.
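
    The Siamese setup at its core: both signatures pass through the same embedding network, and a contrastive loss pulls genuine pairs together while pushing forgery pairs apart by a margin. The toy 2-D embeddings below are stand-ins for the paper's convolutional network outputs:

```python
# Contrastive loss on embedding distances, the standard training objective
# for Siamese verification. Embedding vectors here are illustrative stand-ins
# for convolutional-network outputs.
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(d, same, margin=1.0):
    # same = 1 for a genuine pair, 0 for a forgery pair; d = embedding distance
    return same * d ** 2 + (1 - same) * max(0.0, margin - d) ** 2

genuine = distance((0.1, 0.9), (0.2, 0.8))  # embeddings of a genuine pair
forged = distance((0.1, 0.9), (0.9, 0.1))   # embeddings of a forgery pair
print(contrastive_loss(genuine, same=1))    # small: pair is already close
print(contrastive_loss(forged, same=0))     # zero once distance exceeds margin
```

    At verification time the same distance is simply thresholded: pairs closer than the decision threshold are accepted as genuine.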

  19. Highly efficient in vivo delivery of PMO into regenerating myotubes and rescue in laminin-α2 chain-null congenital muscular dystrophy mice.

    PubMed

    Aoki, Yoshitsugu; Nagata, Tetsuya; Yokota, Toshifumi; Nakamura, Akinori; Wood, Matthew J A; Partridge, Terence; Takeda, Shin'ichi

    2013-12-15

    Phosphorodiamidate morpholino oligomer (PMO)-mediated exon skipping is among the more promising approaches to the treatment of several neuromuscular disorders including Duchenne muscular dystrophy. The main weakness of this approach arises from the low efficiency and sporadic nature of the delivery of charge-neutral PMO into muscle fibers, the mechanism of which is unknown. In this study, to test our hypothesis that muscle fibers take up PMO more efficiently during myotube formation, we induced synchronous muscle regeneration by injection of cardiotoxin into the tibialis anterior muscle of Dmd exon 52-deficient mdx52 and wild-type mice. Interestingly, by in situ hybridization, we detected PMO mainly in embryonic myosin heavy chain-positive regenerating fibers. In addition, we showed that PMO or 2'-O-methyl phosphorothioate is taken up efficiently into C2C12 myotubes when transfected 24-72 h after the induction of differentiation but is poorly taken up into undifferentiated C2C12 myoblasts suggesting efficient uptake of PMO in the early stages of C2C12 myotube formation. Next, we tested the therapeutic potential of PMO for laminin-α2 chain-null dy(3K)/dy(3K) mice: a model of merosin-deficient congenital muscular dystrophy (MDC1A) with active muscle regeneration. We confirmed the recovery of laminin-α2 chain and slightly prolonged life span following skipping of the mutated exon 4 in dy(3K)/dy(3K) mice. These findings support the idea that PMO entry into fibers is dependent on a developmental stage in myogenesis rather than on dystrophinless muscle membranes and provide a platform for developing PMO-mediated therapies for a variety of muscular disorders, such as MDC1A, that involve active muscle regeneration.

  20. Parameter Estimation of Computationally Expensive Watershed Models Through Efficient Multi-objective Optimization and Interactive Decision Analytics

    NASA Astrophysics Data System (ADS)

    Akhtar, Taimoor; Shoemaker, Christine

    2016-04-01

    Watershed model calibration is inherently a multi-criteria problem. Conflicting trade-offs exist between different quantifiable calibration criteria, indicating the non-existence of a single optimal parameterization. Hence, many experts prefer a manual approach to calibration, where the inherent multi-objective nature of the calibration problem is addressed through an interactive, subjective, time-intensive and complex decision-making process. Multi-objective optimization can be used to efficiently identify multiple plausible calibration alternatives and assist calibration experts during the parameter estimation process. However, there are key challenges to the use of multi-objective optimization in the parameter estimation process, which include: 1) multi-objective optimization usually requires many model simulations, which is difficult for complex simulation models that are computationally expensive; and 2) selection of one from the numerous calibration alternatives provided by multi-objective optimization is non-trivial. This study proposes a "Hybrid Automatic Manual Strategy" (HAMS) for watershed model calibration to specifically address the above-mentioned challenges. HAMS employs a 3-stage framework for parameter estimation. Stage 1 incorporates the use of an efficient surrogate multi-objective algorithm, GOMORS, for identification of numerous calibration alternatives within a limited simulation evaluation budget. The novelty of HAMS is embedded in Stages 2 and 3, where an interactive visual and metric-based analytics framework is available as a decision support tool to choose a single calibration from the numerous alternatives identified in Stage 1. Stage 2 of HAMS provides an interactive, goodness-of-fit-metric-based framework for identification of a small (typically fewer than 10), meaningful, and diverse subset of calibration alternatives from the numerous alternatives obtained in Stage 1.
Stage 3 incorporates the use of an interactive visual analytics framework for decision support in selecting one parameter combination from the alternatives identified in Stage 2. HAMS is applied to calibrate the flow parameters of a SWAT (Soil and Water Assessment Tool) model designed to simulate flow in the Cannonsville watershed in upstate New York. Results from the application of HAMS to Cannonsville indicate that efficient multi-objective optimization and interactive visual and metric-based analytics can bridge the gap between the effective use of both automatic and manual strategies for parameter estimation of computationally expensive watershed models.
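    Stage 1's output (many calibration alternatives scored on several objectives) is conceptually a Pareto set. A minimal non-dominated filter, assuming every objective is minimized, can be sketched as follows; the candidate error pairs are invented for illustration and this is not the GOMORS algorithm itself:

```python
def dominates(a, b):
    # a dominates b if a is no worse on every objective and better on one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only points no other point dominates.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (flow-error, bias-error) scores for candidate parameter sets:
candidates = [(0.2, 0.9), (0.4, 0.4), (0.9, 0.1), (0.5, 0.5), (0.3, 0.8)]
front = pareto_front(candidates)
```

    Here (0.5, 0.5) drops out because (0.4, 0.4) beats it on both objectives; the survivors are the trade-off set from which Stages 2 and 3 would then select.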

  1. Lightweight multiple output converter development

    NASA Technical Reports Server (NTRS)

    Kisch, J. J.; Martinelli, R. M.

    1978-01-01

    A high frequency, multiple output power conditioner was developed and breadboarded using an eight-stage capacitor-diode voltage multiplier to provide +1200 Vdc, and a three-stage multiplier for -350 Vdc. In addition, two rectifier bridges were capacitively coupled to the eight-stage multiplier to obtain 0.5 and 0.65 A dc constant-current outputs referenced to +1200 Vdc. Total power was 120 watts, with an overall efficiency of 85 percent at the 80 kHz operating frequency. All outputs were regulated to three percent or better, with complete short-circuit protection. The power conditioner component weight and efficiency were compared to the equivalent four outputs of the 10 kHz conditioner for the 8 cm ion engine. Weight reduction for the four outputs was 557 grams; extrapolated in the same ratio to all nine outputs, it would be 1100 to 1400 grams.

  2. Pyrolysis characteristics of typical biomass thermoplastic composites

    NASA Astrophysics Data System (ADS)

    Cai, Hongzhen; Ba, Ziyu; Yang, Keyan; Zhang, Qingfa; Zhao, Kunpeng; Gu, Shiyan

    Biomass thermoplastic composites were prepared by an extrusion molding method with poplar flour, rice husk, cotton stalk and corn stalk. A thermogravimetric analyzer (TGA) was used to evaluate the pyrolysis process of the composites. The results showed that the pyrolysis process mainly consists of two stages: biomass pyrolysis and plastic pyrolysis. Increasing the biomass content in the composite raised the first-stage pyrolysis peak temperature; however, the carbon residue was reduced and the pyrolysis efficiency improved because of the synergistic effect of the biomass and the plastic. Composites with different kinds of biomass showed similar pyrolysis processes, and the pyrolysis efficiency of the composite with corn stalk was the best. Calcium carbonate, as a filling material of the composite, could inhibit the pyrolysis process and increase the first-stage pyrolysis peak temperature and carbon residue.

  3. Fast Track Lunar NTR Systems Assessment for NASA's First Lunar Outpost and Its Evolvability to Mars

    NASA Technical Reports Server (NTRS)

    Borowski, Stanley K.; Alexander, Stephen W.

    1995-01-01

    Integrated systems and missions studies are presented for an evolutionary lunar-to-Mars space transportation system (STS) based on nuclear thermal rocket (NTR) technology. A 'standardized' set of engine and stage components are identified and used in a 'building block' fashion to configure a variety of piloted and cargo, lunar and Mars vehicles. The reference NTR characteristics include a thrust of 50 thousand pounds force (klbf), specific impulse (I(sub sp)) of 900 seconds, and an engine thrust-to-weight ratio of 4.3. For the National Aeronautics and Space Administration's (NASA) First Lunar Outpost (FLO) mission, an expendable NTR stage powered by two such engines can deliver approximately 96 metric tonnes (t) to trans-lunar injection (TLI) conditions for an initial mass in low Earth orbit (IMLEO) of approximately 198 t, compared to 250 t for a cryogenic chemical system. The stage liquid hydrogen (LH2) tank has a diameter, length, and capacity of 10 m, 14.5 m and 66 t, respectively. By extending the stage length and LH2 capacity to approximately 20 m and 96 t, a single-launch Mars cargo vehicle could deliver to an elliptical Mars parking orbit a 63 t Mars excursion vehicle (MEV) with a 45 t surface payload. Three 50 klbf engines and the two standardized LH2 tanks developed for the lunar and Mars cargo vehicles are used to configure the vehicles supporting piloted Mars missions as early as 2010. The 'modular' NTR vehicle approach forms the basis for an efficient STS able to handle the needs of a wide spectrum of lunar and Mars missions.
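    The quoted masses can be sanity-checked against the ideal (Tsiolkovsky) rocket equation using the stated Isp of 900 s, the ~198 t IMLEO, and the 66 t LH2 tank capacity. The assumption that the full 66 t of propellant is burned for the TLI maneuver is ours, for illustration:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s, m0_t, mf_t):
    # Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / mf)
    return isp_s * G0 * math.log(m0_t / mf_t)

# Figures from the abstract: Isp = 900 s, IMLEO ~198 t, LH2 load 66 t.
dv = delta_v(900, 198.0, 198.0 - 66.0)
```

    This gives roughly 3.6 km/s, in the right range for a trans-lunar injection burn plus gravity losses, which is consistent with the ~96 t delivered to TLI conditions.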

  4. Efficiency at Faculties of Economics in the Czech Public Higher Education Institutions: Two Different Approaches

    ERIC Educational Resources Information Center

    Flégl, Martin; Vltavská, Kristýna

    2013-01-01

    The paper evaluates research and teaching efficiency at faculties of economics in the public higher education institutions in the Czech Republic. Evaluation is provided in two periods between the years 2006-2010 and 2007-2011. For this evaluation the Data Envelopment Analysis and Index approach are used. Data Envelopment Analysis measures research…

  5. Single-stage versus two-stage anaerobic fluidized bed bioreactors in treating municipal wastewater: Performance, foulant characteristics, and microbial community.

    PubMed

    Wu, Bing; Li, Yifei; Lim, Weikang; Lee, Shi Lin; Guo, Qiming; Fane, Anthony G; Liu, Yu

    2017-03-01

    This study examined the performance, membrane foulant characteristics, and microbial community in single-stage and two-stage anaerobic fluidized membrane bioreactors (AFMBRs) treating settled raw municipal wastewater, with the aims to explore fouling mechanisms and microbial community structure in both systems. Both AFMBRs exhibited comparable organic removal efficiency and membrane performance. In the single-stage AFMBR, fewer soluble organic substances were removed through biosorption by GAC and biodegradation than in the two-stage AFMBR. Compared to the two-stage AFMBR, the formation of a cake layer was the main cause of the observed membrane fouling in the single-stage AFMBR at the same employed flux. The accumulation rate of the biopolymers was linearly correlated with the membrane fouling rate. In the chemically cleaned foulants, humic acid-like substances and silicon were identified as the predominant organic and inorganic foulants, respectively. As such, the fluidized GAC particles might not be effective in removing these substances from the membrane surfaces. High-throughput pyrosequencing analysis further revealed that beta-Proteobacteria were predominant members in both AFMBRs, which contributed to the development of biofilms on the fluidized GAC and membrane surfaces. However, it was also noted that the abundance of the identified dominant members in the membrane surface-associated biofilm seemed to be related to the permeate flux and reactor configuration. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Reduced Boil-Off System Sizing

    NASA Technical Reports Server (NTRS)

    Guzik, Monica C.; Plachta, David W.; Feller, Jeffrey R.

    2015-01-01

    NASA is currently developing cryogenic propellant storage and transfer systems for future space exploration and scientific discovery missions by addressing the need to raise the technology readiness level of cryogenic fluid management technologies. Cryogenic propellants are baselined in many propulsion systems due to their inherently high specific impulse; however, their low boiling points can cause substantial boil-off losses over time. Recent efforts such as the Reduced Boil-off Testing and the Active Thermal Control Scaling Study provide important information on the benefit of an active cooling system applied to LH2 propellant storage. Findings show that zero-boil-off technologies can reduce overall mass in LH2 storage systems when low Earth orbit loiter periods extend beyond two months. A significant part of this mass reduction is realized by integrating two stages of cooling: a 20 K stage to intercept heat at the tank surface, and a 90 K stage to reduce the heat entering the less efficient 20 K stage. A missing element in previous studies, which is addressed in this paper, is the development of a direct method for sizing the 90 K cooling stage. Such a method requires calculation of the heat entering both the 90 K and 20 K stages as compared to the overall system masses, and is reliant upon the temperature distribution, performance, and unique design characteristics of the system in question. By utilizing the known conductance of a system without active thermal control, the heat being intercepted by a 90 K stage can be calculated to find the resultant lift and mass of each active thermal control stage. Integral to this is the thermal conductance of the cooling straps and the broad area cooling shield, key parts of the 90 K stage. Additionally, a trade study is performed to show the ability of the 90 K cooling stage to reduce the lift on the 20 K cryocooler stage, which is considerably less developed and efficient than 90 K cryocoolers.
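    The sizing logic described (conductance-based heat intercept, then cooler input power per stage) can be sketched numerically. Every number below is a hypothetical placeholder, not a value from the study; the percent-of-Carnot figures in particular are illustrative assumptions:

```python
def intercepted_heat(conductance_w_per_k, t_hot, t_stage):
    # Heat conducted from the warm boundary down to the cooling stage.
    return conductance_w_per_k * (t_hot - t_stage)

def cryocooler_input_power(lift_w, t_cold, t_reject, pct_carnot):
    # Input power from the Carnot COP scaled by a %-of-Carnot efficiency.
    cop_carnot = t_cold / (t_reject - t_cold)
    return lift_w / (cop_carnot * pct_carnot)

# Hypothetical system: 300 K boundary, 90 K shield stage, 20 K tank stage.
q90 = intercepted_heat(0.05, 300.0, 90.0)    # W absorbed at the 90 K shield
q20 = intercepted_heat(0.005, 90.0, 20.0)    # residual W reaching the 20 K tank
p90 = cryocooler_input_power(q90, 90.0, 300.0, 0.15)
p20 = cryocooler_input_power(q20, 20.0, 300.0, 0.10)
```

    Even with made-up numbers, the structure shows why the 90 K shield pays off: each watt lifted at 20 K costs far more input power than a watt lifted at 90 K, so intercepting heat high saves mass overall.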

  7. Performance assessment of two-stage anaerobic digestion of kitchen wastes.

    PubMed

    Bo, Zhang; Pin-Jing, He

    2014-01-01

    This study is aimed at investigating the performance of the two-phase anaerobic digestion of kitchen wastes in a lab-scale setup. The semi-continuous experiment showed that the two-phase anaerobic digestion of kitchen wastes had a bioconversion rate of 83%, biogas yield of 338 mL x (g chemical oxygen demand (COD))(-1) and total solid conversion of 63% when the entire two-phase anaerobic digestion process was subjected to an organic loading rate (OLR) of 10.7 g x (L d)(-1). In the hydrolysis-acidogenesis process, the efficiency of solubilization decreased from 72.6% to 41.1%, and the acidogenesis efficiency decreased from 31.8% to 17.8% with an increase in the COD loading rate. On the other hand, the performance of the subsequent methanogenic process was not susceptible to the increase in the feeding COD loading rate in the hydrolysis-acidogenesis stage. Lactic acid was one of the main fermentation products, accounting for over 40% of the total soluble COD in the fermentation liquid. The batch experiments indicated that the lactic acid was the earliest predominant fermentation product, and distributions of fermentation products were pH dependent. Results showed that increasing the feeding OLR of kitchen wastes made the two-stage anaerobic digestion process more effective. Moreover, there was a potential improvement in the performance of anaerobic digestion of kitchen wastes with a corresponding improvement in the hydrolysis process.

  8. Two stage surgical procedure for root coverage

    PubMed Central

    George, Anjana Mary; Rajesh, K. S.; Hegde, Shashikanth; Kumar, Arun

    2012-01-01

    Gingival recession may present problems that include root sensitivity, esthetic concerns, a predilection to root caries and cervical abrasion, and compromise of restorative efforts. When marginal tissue health cannot be maintained and recession is deep, the need for treatment arises. The literature documents that recession can be successfully treated by means of a two-stage surgical approach, the first stage consisting of creation of attached gingiva by means of a free gingival graft, and the second stage of a lateral sliding flap of grafted tissue to cover the recession. This indirect technique ensures development of an adequate width of attached gingiva. The outcome of this technique suggests that two-stage surgical procedures are highly predictable for root coverage in cases of isolated deep recession and lack of attached gingiva. PMID:23162343

  9. Periscope: quantitative prediction of soluble protein expression in the periplasm of Escherichia coli

    NASA Astrophysics Data System (ADS)

    Chang, Catherine Ching Han; Li, Chen; Webb, Geoffrey I.; Tey, Bengti; Song, Jiangning; Ramanan, Ramakrishnan Nagasundara

    2016-03-01

    Periplasmic expression of soluble proteins in Escherichia coli not only offers a much-simplified downstream purification process, but also enhances the probability of obtaining correctly folded and biologically active proteins. Different combinations of signal peptides and target proteins lead to different soluble protein expression levels, ranging from negligible to several grams per litre. Accurate algorithms for rational selection of promising candidates can serve as a powerful tool to complement current trial-and-error approaches. Accordingly, proteomics studies can be conducted with greater efficiency and cost-effectiveness. Here, we developed a predictor with a two-stage architecture to predict the real-valued expression level of a target protein in the periplasm. The output of the first-stage support vector machine (SVM) classifier determines which second-stage support vector regression (SVR) model to use. When tested on an independent test dataset, the predictor achieved an overall prediction accuracy of 78% and a Pearson's correlation coefficient (PCC) of 0.77. We further illustrate the relative importance of various features with respect to different models. The results indicate that the occurrence of the dipeptide glutamine-aspartic acid is the most important feature for the classification model. Finally, we provide access to the implemented predictor through the Periscope webserver, freely accessible at http://lightning.med.monash.edu/periscope/.
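    The two-stage architecture (a first-stage classifier routing each input to one of several second-stage regression models) can be sketched generically. The threshold rule and the linear models below are stand-ins for the trained SVM and SVR components, and the feature vectors are invented:

```python
def stage1_classify(features):
    # Stand-in for the SVM classifier: route by a simple feature score.
    return "high" if sum(features) >= 1.0 else "low"

def make_linear_regressor(slope, intercept):
    # Stand-in for a trained SVR model.
    return lambda features: slope * sum(features) + intercept

regressors = {
    "low": make_linear_regressor(0.5, 0.1),
    "high": make_linear_regressor(2.0, 1.0),
}

def predict_expression(features):
    # Stage 1 picks the branch; stage 2 produces the real-valued level.
    return regressors[stage1_classify(features)](features)

lo = predict_expression([0.2, 0.3])   # routed to the "low" regressor
hi = predict_expression([0.8, 0.9])   # routed to the "high" regressor
```

    The design point is that each second-stage regressor only has to model one regime of expression levels, which is easier than fitting one model across the full negligible-to-grams-per-litre range.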

  10. Commentary: considerations for using the 'Trials within Cohorts' design in a clinical trial of an investigational medicinal product.

    PubMed

    Bibby, Anna C; Torgerson, David J; Leach, Samantha; Lewis-White, Helen; Maskell, Nick A

    2018-01-08

    The 'trials within cohorts' (TwiC) design is a pragmatic approach to randomised trials in which trial participants are randomly selected from an existing cohort. The design has multiple potential benefits, including the option of conducting multiple trials within the same cohort. To date, the TwiC design methodology has been used in numerous clinical settings but has never been applied to a clinical trial of an investigational medicinal product (CTIMP). We have recently secured the necessary approvals to undertake the first CTIMP using the TwiC design. In this paper, we describe some of the considerations and modifications required to ensure such a trial is compliant with Good Clinical Practice and international clinical trials regulations. We advocate using a two-stage consent process and using the consent stages to explicitly differentiate between trial participants and cohort participants who are providing control data. This distinction ensured compliance but had consequences with respect to costings, recruitment and the trial assessment schedule. We have demonstrated that it is possible to secure ethical and regulatory approval for a CTIMP TwiC. By including certain considerations at the trial design stage, we believe this pragmatic and efficient methodology could be utilised in other CTIMPs in future.

  11. Change of plans: an evaluation of the effectiveness and underlying mechanisms of successful talent transfer.

    PubMed

    Collins, Rosie; Collins, Dave; MacNamara, Aine; Jones, Martin Ian

    2014-01-01

    Talent transfer (TT) is a recently formalised process used to identify and develop talented athletes by selecting individuals who have already succeeded in one sport and transferring them to another. Despite the increasing popularity of TT amongst national organisations and sport governing body professionals, there is little empirical evidence as to its efficacy or how it may be most efficiently employed. Accordingly, this investigation was designed to gain a deeper understanding of the effectiveness and underlying mechanisms of TT, achieved through a two-part study. Stage 1 provided a quantitative analysis of the incidence and distribution (in effect, the epidemiology) of TT, finding the most popular transfer to be sprinting to bobsleigh, with an average transfer age of 19 years. Stage 2 scrutinised the TT process and explored the specific cases revealed in stage 1 by examining the perceptions of four sport science support specialists who had worked in TT settings, finding several emergent themes which, they felt, could explain the TT processes. The most prominent theme was the psychosocial mechanism of TT, an aspect currently missing from TT initiatives, suggesting that current TT systems are poorly structured and should redress their approach to develop a more integrated scheme that encompasses all potential mechanisms of transfer.

  12. Analysis of mixing conditions and multistage irradiation impact on NOx removal efficiency in the electron beam flue gas treatment process.

    PubMed

    Pawelec, Andrzej; Dobrowolski, Andrzej

    2017-01-01

    In the process of electron beam flue gas treatment (EBFGT), most energy is spent on NOx removal. The dose distribution in the reactor is not uniform, and the flue gas flow pattern plays an important role in the process efficiency. It was found that proper construction of the reactor may increase the energy efficiency of the process. The impact of the number of irradiation stages and mixing conditions on NOx removal efficiency was investigated for an ideal case, and a practical solution was presented and compared with previously known EBFGT reactor constructions. The research was performed by means of computational fluid dynamics methods in combination with the empirical Wittig formula. Two versions of dose distribution were taken for the calculations. The results show that for an ideal case, application of multistage irradiation and interstage mixing may reduce the energy consumption of the process by up to 39%. On the other hand, simulation of a reactor construction modification for two-stage irradiation yields a 25% energy consumption reduction. The results of the presented case study may be applied to improving existing reactors and the proper design of future installations.

  13. Super low NOx, high efficiency, compact firetube boiler

    DOEpatents

    Chojnacki, Dennis A.; Rabovitser, Iosif K.; Knight, Richard A.; Cygan, David F.; Korenberg, Jacob

    2005-12-06

    A firetube boiler furnace having two combustion sections and an in-line intermediate tubular heat transfer section between the two combustion sections and integral to the pressure vessel. This design provides a staged oxidant combustion apparatus with separate in-line combustion chambers for fuel-rich primary combustion and fuel-lean secondary combustion, and sufficient cooling of the combustion products from the primary combustion such that when the secondary combustion oxidant is added in the secondary combustion stage, the NOx formation is less than 5 ppmv at 3% O2.

  14. Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines

    NASA Astrophysics Data System (ADS)

    Massa, Luca

    A computational tool is developed for the time-accurate sensitivity analysis of the stage performance of hot-gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial-flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, rendering it able to accurately evaluate the derivatives of time-varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed, and two formulations of it are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high-fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time-dependent sensitivity derivatives are computed in run times comparable to those required by the FD approach.
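    The CTSE (complex-step) derivative at the heart of this comparison approximates f'(x) as Im f(x + ih) / h, which avoids the subtractive cancellation that limits finite differences and so allows an essentially arbitrarily small step. A minimal comparison on a toy function (the standard technique, not the turbine code itself):

```python
import cmath
import math

def f(x):
    # Any analytic function works; exp(x)*sin(x) has a known derivative.
    return cmath.exp(x) * cmath.sin(x)

def complex_step(func, x, h=1e-30):
    # f'(x) ~= Im f(x + i*h) / h -- no subtraction, so h can be tiny.
    return func(x + 1j * h).imag / h

def central_diff(func, x, h=1e-6):
    # Classic finite difference, limited by subtractive cancellation.
    return ((func(x + h) - func(x - h)) / (2 * h)).real

x = 0.7
exact = math.exp(x) * (math.sin(x) + math.cos(x))
cs_err = abs(complex_step(f, x) - exact)
fd_err = abs(central_diff(f, x) - exact)
```

    The complex-step result is accurate to machine precision, while the central difference bottoms out around ten digits: the trade-off the abstract notes, since each complex-valued evaluation costs more than a real one.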

  15. How to manage future groundwater resource of China under climate change and urbanization: An optimal stage investment design from modern portfolio theory.

    PubMed

    Hua, Shanshan; Liang, Jie; Zeng, Guangming; Xu, Min; Zhang, Chang; Yuan, Yujie; Li, Xiaodong; Li, Ping; Liu, Jiayu; Huang, Lu

    2015-11-15

    Groundwater management in China has been facing challenges from both climate change and urbanization and is considered a national priority nowadays. However, unprecedented uncertainty exists in future scenarios, making it difficult to formulate management planning paradigms. In this paper, we apply modern portfolio theory (MPT) to formulate an optimal stage investment for groundwater contamination remediation in China. This approach generates optimal weights of investment for each stage of groundwater management and helps maximize expected return while minimizing overall risk in the future. We find that the efficient frontier of investment displays an upward-sloping shape in risk-return space. The expected value of the groundwater vulnerability index increases from 0.6118 to 0.6230 as the risk from uncertainty increases from 0.0118 to 0.0297. If the management investment is constrained not to exceed a certain total cost by the year 2050, the efficient frontier can help decision makers make the most appropriate choice on the trade-off between risk and return. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Robustness-Based Design Optimization Under Data Uncertainty

    NASA Technical Reports Server (NTRS)

    Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence

    2010-01-01

    This paper proposes formulations and algorithms for design optimization under both aleatory (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed in this paper to un-nest the robustness-based design from the analysis of non-design epistemic variables to achieve computational efficiency. The proposed methods are illustrated for the upper stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.
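    The basic robustness idea (optimize a design objective that penalizes sensitivity to an uncertain, interval-valued input) can be sketched as a mean-plus-spread criterion over samples drawn from the interval. This is a generic illustration with an invented toy objective, not the paper's decoupled formulation:

```python
def performance(design, uncertain):
    # Toy objective: quadratic in the design, shifted by the uncertain input.
    return (design - 2.0) ** 2 + uncertain * design

def robust_objective(design, samples, k=1.0):
    # Mean-plus-k-sigma robustness measure over epistemic samples.
    vals = [performance(design, u) for u in samples]
    mean = sum(vals) / len(vals)
    var = sum((v - mean) ** 2 for v in vals) / len(vals)
    return mean + k * var ** 0.5

# Interval data for the epistemic variable, sampled without assuming a pdf.
samples = [0.0, 0.25, 0.5, 0.75, 1.0]
best = min((d / 100 for d in range(0, 400)),
           key=lambda d: robust_objective(d, samples))
```

    The robust optimum lands below the deterministic one (which would sit near 1.75 at the mean uncertain value), because larger designs amplify the spread of outcomes; that shift is the price of robustness.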

  17. [Ultrasound in monitoring of the second stage of labour].

    PubMed

    Fouché, C J; Simon, E G; Potin, J; Perrotin, F

    2012-11-01

    In the second stage of labor, fetal head rotation and fetal head position are determinant for the management of labor, whether to attempt a vaginal delivery or a cesarean section. However, digital examination is highly subjective. Nowadays, delivery rooms are often equipped with compact, high-performance ultrasound systems, so the clinical examination can easily be completed by quantified and reproducible methods. Transabdominal ultrasonography is a well-known and efficient way to determine the fetal head position. Nevertheless, the ultrasound approach to assessing fetal head descent is less widespread; a translabial or transperineal approach can be used for this evaluation. We describe precisely two different types of methods: the linear methods (3 different types) and the angles of progression (4 different types of measurement). Among all these methods, the main pelvic landmarks are the symphysis pubis and the fetal skull. The angle of progression appears promising, but its assessment has been restricted to occipitoanterior fetal position cases. In the coming years, ultrasound will likely play a greater role in the management of labor. Copyright © 2012 Elsevier Masson SAS. All rights reserved.

  18. Payment schemes and cost efficiency: evidence from Swiss public hospitals.

    PubMed

    Meyer, Stefan

    2015-03-01

    This paper aims at analysing the impact of prospective payment schemes on the cost efficiency of acute care hospitals in Switzerland. We study a panel of 121 public hospitals subject to one of four payment schemes. While several hospitals are still reimbursed on a per diem basis for the treatment of patients, most face flat per-case rates or mixed schemes, which combine both elements of reimbursement. Thus, unlike previous studies, we are able to simultaneously analyse and isolate the cost-efficiency effects of different payment schemes. By means of stochastic frontier analysis, we first estimate a hospital cost frontier. Using the two-stage approach proposed by Battese and Coelli (Empir Econ 20:325-332, 1995), we then analyse the impact of these payment schemes on the cost efficiency of hospitals. Controlling for hospital characteristics, local market conditions in the 26 Swiss states (cantons), and a time trend, we show that, compared to per diem, hospitals which are reimbursed by flat payment schemes perform better in terms of cost efficiency. Our results suggest that mixed schemes create incentives for cost containment as well, although to a lesser extent. In addition, our findings indicate that cost-efficient hospitals are primarily located in cantons with competitive markets, as measured by the Herfindahl-Hirschman index in inpatient care. Furthermore, our econometric model shows that we obtain biased estimates from frontier analysis if we do not account for heteroscedasticity in the inefficiency term.

  19. Agrobacterium-mediated transformation of two Serbian potato cultivars (Solanum tuberosum L. cv. Dragacevka and cv. Jelica)

    USDA-ARS?s Scientific Manuscript database

    An efficient protocol for Agrobacterium-mediated transformation of Serbian potato cultivars Dragacevka and Jelica, enabling the introduction of oryzacystatin genes OCI and OCII, was established. Starting with leaf explants a two-stage transformation protocol combining procedures of Webb and Wenzler...

  20. Multiple heavy metals extraction and recovery from hazardous electroplating sludge waste via ultrasonically enhanced two-stage acid leaching.

    PubMed

    Li, Chuncheng; Xie, Fengchun; Ma, Yang; Cai, Tingting; Li, Haiying; Huang, Zhiyuan; Yuan, Gaoqing

    2010-06-15

    An ultrasonically enhanced two-stage acid leaching process for extracting and recovering multiple heavy metals from actual electroplating sludge was studied in lab tests. It provided an effective technique for separation of the more valuable metals (Cu, Ni and Zn) from the less valuable metals (Fe and Cr) in electroplating sludge. The efficiency of the process was measured by the leaching efficiencies and recovery rates of the metals. Enhanced by ultrasonic power, the first-stage acid leaching demonstrated leaching rates of 96.72%, 97.77%, 98.00%, 53.03%, and 0.44% for Cu, Ni, Zn, Cr, and Fe respectively, effectively separating half of the Cr and almost all of the Fe from the mixed metals. The subsequent second-stage leaching achieved leaching rates of 75.03%, 81.05%, 81.39%, 1.02%, and 0% for Cu, Ni, Zn, Cr, and Fe, further separating Cu, Ni, and Zn from the mixed metals. With the stabilized two-stage ultrasonically enhanced leaching, the resulting overall recovery rates of Cu, Ni, Zn, Cr and Fe from electroplating sludge were 97.42%, 98.46%, 98.63%, 98.32% and 100% respectively, with Cr and Fe in solids and the rest of the metals in an aqueous solution discharged from the leaching system. The process performance parameters studied were pH, ultrasonic power, and contact time. The results were also confirmed in an industrial pilot-scale test, where the same high metal recoveries were achieved. Copyright 2010 Elsevier B.V. All rights reserved.
