Sample records for simulation-based interval two-stage

  1. A queuing-theory-based interval-fuzzy robust two-stage programming model for environmental management under uncertainty

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Li, Y. P.; Huang, G. H.

    2012-06-01

    In this study, a queuing-theory-based interval-fuzzy robust two-stage programming (QB-IRTP) model is developed through introducing queuing theory into an interval-fuzzy robust two-stage (IRTP) optimization framework. The developed QB-IRTP model can not only address highly uncertain information for the lower and upper bounds of interval parameters but also be used for analysing a variety of policy scenarios that are associated with different levels of economic penalties when the promised targets are violated. Moreover, it can reflect uncertainties in queuing theory problems. The developed method has been applied to a case of long-term municipal solid waste (MSW) management planning. Interval solutions associated with different waste-generation rates, different waiting costs and different arriving rates have been obtained. They can be used for generating decision alternatives and thus help managers to identify desired MSW management policies under various economic objectives and system reliability constraints.

  2. An inventory-theory-based interval-parameter two-stage stochastic programming model for water resources management

    NASA Astrophysics Data System (ADS)

    Suo, M. Q.; Li, Y. P.; Huang, G. H.

    2011-09-01

    In this study, an inventory-theory-based interval-parameter two-stage stochastic programming (IB-ITSP) model is proposed through integrating inventory theory into an interval-parameter two-stage stochastic optimization framework. This method can not only address system uncertainties with complex presentation but also reflect transferring batch (the transferring quantity at once) and period (the corresponding cycle time) in decision making problems. A case of water allocation problems in water resources management planning is studied to demonstrate the applicability of this method. Under different flow levels, different transferring measures are generated by this method when the promised water cannot be met. Moreover, interval solutions associated with different transferring costs also have been provided. They can be used for generating decision alternatives and thus help water resources managers to identify desired policies. Compared with the ITSP method, the IB-ITSP model can provide a positive measure for solving water shortage problems and afford useful information for decision makers under uncertainty.

3. Simulation and Analyses of Stage Separation of Two-Stage Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, Kelly J.; Covell, Peter F.

    2005-01-01

    NASA has initiated the development of methodologies, techniques and tools needed for analysis and simulation of stage separation of next-generation reusable launch vehicles. As a part of this activity, the ConSep simulation tool is being developed; it is a MATLAB-based front and back end to the commercially available ADAMS (registered trademark) solver, an industry-standard package for solving multi-body dynamics problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic databases were assembled using data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle, and altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.

  4. Simulation and Analyses of Stage Separation of Two-Stage Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Neirynck, Thomas A.; Hotchko, Nathaniel J.; Tartabini, Paul V.; Scallion, William I.; Murphy, K. J.; Covell, Peter F.

    2007-01-01

    NASA has initiated the development of methodologies, techniques and tools needed for analysis and simulation of stage separation of next-generation reusable launch vehicles. As a part of this activity, the ConSep simulation tool is being developed; it is a MATLAB-based front and back end to the commercially available ADAMS (registered trademark) solver, an industry-standard package for solving multi-body dynamics problems. This paper discusses the application of ConSep to the simulation and analysis of staging maneuvers of two-stage-to-orbit (TSTO) Bimese reusable launch vehicles, one staging at Mach 3 and the other at Mach 6. The proximity and isolated aerodynamic databases were assembled using data from wind tunnel tests conducted at NASA Langley Research Center. The effects of parametric variations in mass, inertia, flight path angle, and altitude from their nominal values at staging were evaluated. Monte Carlo runs were performed for Mach 3 staging to evaluate the sensitivity to uncertainties in aerodynamic coefficients.

  5. Simulation-based power calculations for planning a two-stage individual participant data meta-analysis.

    PubMed

    Ensor, Joie; Burke, Danielle L; Snell, Kym I E; Hemming, Karla; Riley, Richard D

    2018-05-18

    Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions. The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies intervention effect on weight gain). Using our simulation-based approach, a two-stage IPD meta-analysis has < 60% power to detect a reduction of 1 kg weight gain for a 10-unit increase in BMI. Additional IPD from ten other published trials (containing 1761 patients) would improve power to over 80%, but only if a fixed-effect meta-analysis was appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials. Simulation-based power …
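
    As a rough illustration of the four-step recipe above, the sketch below simulates a two-stage IPD meta-analysis of a treatment-covariate (treatment-BMI) interaction and estimates power as the proportion of significant pooled estimates. All parameter values (trial count, arm size, interaction size, standard deviations) are illustrative placeholders, not the study's inputs.

```python
# Simulation-based power for a two-stage IPD meta-analysis (illustrative values).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulate_power(n_trials=14, n_per_arm=40, interaction=-0.1,
                   sd=1.0, n_sims=500, alpha=0.05):
    hits = 0
    for _ in range(n_sims):
        ests, variances = [], []
        for _ in range(n_trials):
            bmi = rng.normal(25, 4, 2 * n_per_arm)             # baseline covariate
            trt = np.repeat([0, 1], n_per_arm)                 # treatment indicator
            y = (0.5 * trt + interaction * trt * (bmi - 25)
                 + rng.normal(0, sd, 2 * n_per_arm))           # continuous outcome
            # Step (iii), stage 1: per-trial regression with an interaction term
            X = np.column_stack([np.ones(2 * n_per_arm), trt, bmi - 25, trt * (bmi - 25)])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            resid = y - X @ beta
            sigma2 = resid @ resid / (len(y) - X.shape[1])
            cov = sigma2 * np.linalg.inv(X.T @ X)
            ests.append(beta[3])
            variances.append(cov[3, 3])
        # Step (iii), stage 2: fixed-effect inverse-variance pooling
        w = 1.0 / np.asarray(variances)
        pooled = np.sum(w * ests) / np.sum(w)
        z = pooled / np.sqrt(1.0 / np.sum(w))
        hits += 2 * stats.norm.sf(abs(z)) < alpha              # step (iv): count significant runs
    return hits / n_sims

print(f"estimated power: {simulate_power():.2f}")
```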

  6. Confidence interval estimation of the difference between two sensitivities to the early disease stage.

    PubMed

    Dong, Tuochuan; Kang, Le; Hutson, Alan; Xiong, Chengjie; Tian, Lili

    2014-03-01

    Although most of the statistical methods for diagnostic studies focus on disease processes with binary disease status, many diseases can be naturally classified into three ordinal diagnostic categories, that is, normal, early stage, and fully diseased. For such diseases, the volume under the ROC surface (VUS) is the most commonly used index of diagnostic accuracy. Because the early disease stage is most likely the optimal time window for therapeutic intervention, the sensitivity to the early diseased stage has been suggested as another diagnostic measure. For the purpose of comparing the abilities of two markers to detect early disease, it is of interest to estimate the confidence interval of the difference between sensitivities to the early diseased stage. In this paper, we present both parametric and non-parametric methods for this purpose. An extensive simulation study is carried out over a variety of settings to evaluate and compare the performance of the proposed methods. A real example of Alzheimer's disease (AD) is analyzed using the proposed approaches.
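
    The abstract does not give the estimators themselves, so the following is only a generic non-parametric sketch: a percentile-bootstrap confidence interval for the difference between two markers' sensitivities to the early disease stage, assuming paired markers and fixed decision thresholds t1 < t2 separating the three ordinal categories. All data and thresholds are simulated placeholders.

```python
# Percentile-bootstrap CI for a difference in early-stage sensitivities (sketch).
import numpy as np

rng = np.random.default_rng(7)

def early_sens(marker_early, t1, t2):
    """Proportion of early-stage subjects classified into the middle band."""
    return np.mean((marker_early > t1) & (marker_early <= t2))

# Simulated marker values for early-stage subjects only (paired markers)
n = 120
a = rng.normal(1.0, 1.0, n)   # marker A
b = rng.normal(0.8, 1.0, n)   # marker B
t1, t2 = 0.0, 2.0             # fixed decision thresholds (assumed)

diff = early_sens(a, t1, t2) - early_sens(b, t1, t2)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)   # resample subjects, keeping the pairing
    boot.append(early_sens(a[idx], t1, t2) - early_sens(b[idx], t1, t2))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"diff = {diff:.3f}, 95% percentile bootstrap CI = ({lo:.3f}, {hi:.3f})")
```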

  7. Statistical inference for extended or shortened phase II studies based on Simon's two-stage designs.

    PubMed

    Zhao, Junjun; Yu, Menggang; Feng, Xi-Ping

    2015-06-07

    Simon's two-stage designs are popular choices for conducting phase II clinical trials, especially in oncology, to reduce the number of patients placed on ineffective experimental therapies. Koyama and Chen (2008) discussed how to conduct proper inference for such studies after finding that inference procedures used with Simon's designs almost always ignore the actual sampling plan used. In particular, they proposed an inference method for studies whose actual second-stage sample sizes differ from the planned ones. We consider an alternative inference method based on the likelihood ratio. In particular, we order permissible sample paths under Simon's two-stage designs using their corresponding conditional likelihood. In this way, we can calculate p-values using the common definition: the probability of obtaining a test statistic value at least as extreme as that observed under the null hypothesis. In addition to providing inference for a couple of scenarios where Koyama and Chen's method can be difficult to apply, the resulting estimate based on our method appears to have certain advantages in terms of inference properties in many numerical simulations. It generally led to smaller biases and narrower confidence intervals while maintaining similar coverage. We also illustrate the two methods in a real data setting. Inference procedures used with Simon's designs almost always ignore the actual sampling plan: reported p-values, point estimates, and confidence intervals for the response rate are not usually adjusted for the design's adaptiveness. Proper statistical inference procedures should be used.
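
    A toy version of the path-ordering idea is sketched below: enumerate all permissible sample paths of a Simon two-stage design (stopped at stage 1 or continued to stage 2), order them by a statistic, and sum the null probabilities of paths at least as extreme as the observed one. For simplicity the ordering statistic here is the overall response-rate estimate, a stand-in for the authors' conditional-likelihood ordering; the design parameters are hypothetical.

```python
# p-value over Simon two-stage sample paths (toy ordering, hypothetical design).
from scipy.stats import binom

def simon_paths(n1, r1, n2):
    """All permissible outcomes: stop after stage 1 (x1 <= r1) or continue."""
    stopped = [("stop", x1) for x1 in range(0, r1 + 1)]
    continued = [("go", x1, x2)
                 for x1 in range(r1 + 1, n1 + 1)
                 for x2 in range(0, n2 + 1)]
    return stopped + continued

def path_prob(path, p, n1, n2):
    """Probability of a sample path under response rate p."""
    if path[0] == "stop":
        return binom.pmf(path[1], n1, p)
    return binom.pmf(path[1], n1, p) * binom.pmf(path[2], n2, p)

def path_stat(path, n1, n2):
    """Ordering statistic: overall estimated response rate."""
    if path[0] == "stop":
        return path[1] / n1
    return (path[1] + path[2]) / (n1 + n2)

def p_value(observed, p0, n1, r1, n2):
    t_obs = path_stat(observed, n1, n2)
    return sum(path_prob(q, p0, n1, n2)
               for q in simon_paths(n1, r1, n2)
               if path_stat(q, n1, n2) >= t_obs)

# Hypothetical design: n1=10, stop if <=1 response, n2=19; observed 2 then 7.
print(p_value(("go", 2, 7), p0=0.10, n1=10, r1=1, n2=19))
```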

  8. A two-stage mixed-integer fuzzy programming with interval-valued membership functions approach for flood-diversion planning.

    PubMed

    Wang, S; Huang, G H

    2013-03-15

    Flood disasters have been extremely severe in recent decades, and they account for about one third of all natural catastrophes throughout the world. In this study, a two-stage mixed-integer fuzzy programming with interval-valued membership functions (TMFP-IMF) approach is developed for flood-diversion planning under uncertainty. TMFP-IMF integrates fuzzy flexible programming, two-stage stochastic programming, and integer programming within a general framework. A concept of interval-valued fuzzy membership function is introduced to address complexities of system uncertainties. TMFP-IMF can not only deal with uncertainties expressed as fuzzy sets and probability distributions, but also incorporate pre-regulated water-diversion policies directly into its optimization process. TMFP-IMF is applied to a hypothetical case study of flood-diversion planning to demonstrate its applicability. Results indicate that reasonable solutions can be generated for binary and continuous variables. A variety of flood-diversion and capacity-expansion schemes can be obtained under four scenarios, which enable decision makers (DMs) to identify the most desired one based on their perceptions of and attitudes towards the objective-function value and constraints.

  9. Inconsistencies in Numerical Simulations of Dynamical Systems Using Interval Arithmetic

    NASA Astrophysics Data System (ADS)

    Nepomuceno, Erivelton G.; Peixoto, Márcia L. C.; Martins, Samir A. M.; Rodrigues, Heitor M.; Perc, Matjaž

    Over the past few decades, interval arithmetic has been attracting widespread interest from the scientific community. With the expansion of computing power, scientific computing is encountering a noteworthy shift from floating-point arithmetic toward increased use of interval arithmetic. Notwithstanding the significant reliability of interval arithmetic, this paper presents a theoretical inconsistency in a simulation of dynamical systems using a well-known implementation of interval arithmetic. We have observed that two natural interval extensions present an empty intersection during a finite time range, which is contrary to the fundamental theorem of interval analysis. We have proposed a procedure to at least partially overcome this problem, based on the union of the two generated pseudo-orbits. This paper also shows a successful application of interval arithmetic in reducing interval width in the simulation of a discrete map. The implications of our findings for the reliability of scientific computing using interval arithmetic are addressed using two numerical examples.
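
    The effect is easy to reproduce in miniature. The float sketch below iterates two algebraically identical natural extensions of the logistic map; finite precision drives the two pseudo-orbits apart within a few dozen iterations. The paper studies the interval-arithmetic analogue, where the two interval pseudo-orbits can even become disjoint; the parameter values here are arbitrary.

```python
# Two natural extensions of the same logistic map diverge under finite precision.
r, x, y = 3.8, 0.4, 0.4
for n in range(100):
    x = r * x * (1 - x)      # extension 1: r*x*(1-x)
    y = r * (y - y * y)      # extension 2: r*(x - x^2), algebraically identical
    if abs(x - y) > 1e-3:
        print(f"n={n}: pseudo-orbits separated, x={x:.6f}, y={y:.6f}")
        break
```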

  10. Sensitivity of diabetic retinopathy associated vision loss to screening interval in an agent-based/discrete event simulation model.

    PubMed

    Day, T Eugene; Ravi, Nathan; Xian, Hong; Brugh, Ann

    2014-04-01

    To examine the effect of changes to the screening interval on the incidence of vision loss in a simulated cohort of Veterans with diabetic retinopathy (DR). This simulation allows us to examine potential interventions without putting patients at risk. Simulated randomized controlled trial. We develop a hybrid agent-based/discrete event simulation that combines a population of simulated Veterans, built from abstracted data on a retrospective cohort of real-world diabetic Veterans, with a discrete event simulation (DES) eye clinic at which they seek treatment for DR. We compare vision loss under varying screening policies in a simulated population of 5000 Veterans over 50 independent ten-year simulation runs for each group. Diabetic retinopathy associated vision loss increased as the screening interval was extended from one to five years (p<0.0001). This increase was concentrated in the third year of the screening interval (p<0.01). There was no increase in vision loss associated with increasing the screening interval from one year to two years (p=0.98). Increasing the screening interval for diabetic patients who have not yet developed diabetic retinopathy from 1 to 2 years appears safe, while increasing the interval to 3 years heightens the risk of vision loss.

  11. Numerical simulation and analysis of the flow in a two-staged axial fan

    NASA Astrophysics Data System (ADS)

    Xu, J. Q.; Dou, H. S.; Jia, H. X.; Chen, X. P.; Wei, Y. K.; Dong, M. W.

    2016-05-01

    In this paper, numerical simulation of the internal three-dimensional turbulent flow field in a two-stage axial fan was performed using the steady three-dimensional incompressible Navier-Stokes equations coupled with the Realizable turbulence model. Combining the steady simulation results with the flow characteristics of the two-stage axial fan, the influence of the mutual interaction between the blade and the vane on the flow in the two inter-stages was analyzed. In particular, this paper studied how the flow field distribution in the inter-stages is influenced by the wake interaction and the potential flow interaction in the impeller-vane inter-stage and the vane-impeller inter-stage. The results showed that wake interaction dominates over potential flow interaction in the impeller-vane inter-stage, whereas potential flow interaction dominates over wake interaction in the vane-impeller inter-stage. In other words, the distribution of the flow field in the two inter-stages is determined by the rotating component.

  12. Health care planning and education via gaming-simulation: a two-stage experiment.

    PubMed

    Gagnon, J H; Greenblat, C S

    1977-01-01

    A two-stage process of gaming-simulation design was conducted: the first stage of design concerned national planning for hemophilia care; the second stage of design was for gaming-simulation concerning the problems of hemophilia patients and health care providers. The planning design was intended to be adaptable to large-scale planning for a variety of health care problems. The educational game was designed using data developed in designing the planning game. A broad range of policy-makers participated in the planning game.

  13. Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment

    NASA Astrophysics Data System (ADS)

    Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.

    2017-03-01

    Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper deals with a heuristic approach to lot streaming based on critical machine consideration for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine; because the second-stage machine is critical, such problems are known to be NP-hard. A mathematical model was developed for the selected problem. The simulation modelling and analysis were carried out in Extend V6 software, and a heuristic was developed for obtaining an optimal lot streaming schedule. Eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments: all possible lot streaming strategies, and all possible sequences under each strategy, were simulated and examined. The heuristic yielded the optimal schedule in all eleven cases. A procedure for selecting the best lot streaming strategy was also suggested.

  14. Characterisation of two-stage ignition in diesel engine-relevant thermochemical conditions using direct numerical simulation

    DOE PAGES

    Krisman, Alex; Hawkes, Evatt R.; Talei, Mohsen; ...

    2016-08-30

    With the goal of providing a more detailed fundamental understanding of ignition processes in diesel engines, this study reports analysis of a direct numerical simulation (DNS) database. In the DNS, a pseudo-turbulent mixing layer of dimethyl ether (DME) at 400 K and air at 900 K is simulated at a pressure of 40 atmospheres. At these conditions, DME exhibits a two-stage ignition and resides within the negative temperature coefficient (NTC) regime of ignition delay times, similar to diesel fuel. The analysis reveals a complex ignition process with several novel features. Autoignition occurs as a distributed, two-stage event. The high-temperature stage of ignition establishes edge flames that have a hybrid premixed/autoignition flame structure similar to that previously observed for lifted laminar flames at similar thermochemical conditions. In conclusion, a combustion mode analysis based on key radical species illustrates the multi-stage and multi-mode nature of the ignition process and highlights the substantial modelling challenge presented by diesel combustion.

  15. Joint modelling compared with two stage methods for analysing longitudinal data and prospective outcomes: A simulation study of childhood growth and BP.

    PubMed

    Sayers, A; Heron, J; Smith, Adac; Macdonald-Wallis, C; Gilthorpe, M S; Steele, F; Tilling, K

    2017-02-01

    There is a growing debate with regard to the appropriate methods of analysis of growth trajectories and their association with prospective dependent outcomes. Using the example of childhood growth and adult BP, we conducted an extensive simulation study to explore four two-stage and two joint modelling methods, and compared their bias and coverage in estimation of the (unconditional) association between birth length and later BP, and the association between growth rate and later BP (conditional on birth length). We show that the two-stage method of using multilevel models to estimate growth parameters and relating these to outcome gives unbiased estimates of the conditional associations between growth and outcome. Using simulations, we demonstrate that the simple methods resulted in bias in the presence of measurement error, as did the two-stage multilevel method when looking at the total (unconditional) association of birth length with outcome. The two joint modelling methods gave unbiased results, but using the re-inflated residuals led to undercoverage of the confidence intervals. We conclude that either joint modelling or the simpler two-stage multilevel approach can be used to estimate conditional associations between growth and later outcomes, but that only joint modelling is unbiased with nominal coverage for unconditional associations.

  16. A novel flow sensor based on resonant sensing with two-stage microleverage mechanism.

    PubMed

    Yang, B; Guo, X; Wang, Q H; Lu, C F; Hu, D

    2018-04-01

    The design, simulation, fabrication, and experiments of a novel flow sensor based on resonant sensing with a two-stage microleverage mechanism are presented in this paper. Different from conventional detection methods for flow sensors, two differential resonators are adopted to implement air flow rate transformation through two-stage leverage magnification. The proposed flow sensor has a high sensitivity since the adopted two-stage microleverage mechanism possesses a higher amplification factor than a single-stage microleverage mechanism. The modal distribution and geometric dimensions of the two-stage leverage mechanism and hair are analyzed and optimized by Ansys simulation. A digital closed-loop driving technique with a phase frequency detector-based coordinate rotation digital computer algorithm is implemented for the detection and locking of the resonance frequency. The sensor, fabricated by the standard deep dry silicon-on-glass process, has a device dimension of 5100 μm (length) × 5100 μm (width) × 100 μm (height) with a hair diameter of 1000 μm. The preliminary experimental results demonstrate that the maximal mechanical sensitivity of the flow sensor is approximately 7.41 Hz/(m/s)² at a resonant frequency of 22 kHz for a hair height of 9 mm and increases by 2.42 times as the hair height extends from 3 mm to 9 mm. Simultaneously, a detection limit of 3.23 mm/s air flow amplitude at 60 Hz is confirmed. The proposed flow sensor has great application prospects in micro-autonomous systems and technology, self-stabilizing micro-air vehicles, and environmental monitoring.

  17. A novel flow sensor based on resonant sensing with two-stage microleverage mechanism

    NASA Astrophysics Data System (ADS)

    Yang, B.; Guo, X.; Wang, Q. H.; Lu, C. F.; Hu, D.

    2018-04-01

    The design, simulation, fabrication, and experiments of a novel flow sensor based on resonant sensing with a two-stage microleverage mechanism are presented in this paper. Different from conventional detection methods for flow sensors, two differential resonators are adopted to implement air flow rate transformation through two-stage leverage magnification. The proposed flow sensor has a high sensitivity since the adopted two-stage microleverage mechanism possesses a higher amplification factor than a single-stage microleverage mechanism. The modal distribution and geometric dimensions of the two-stage leverage mechanism and hair are analyzed and optimized by Ansys simulation. A digital closed-loop driving technique with a phase frequency detector-based coordinate rotation digital computer algorithm is implemented for the detection and locking of the resonance frequency. The sensor, fabricated by the standard deep dry silicon-on-glass process, has a device dimension of 5100 μm (length) × 5100 μm (width) × 100 μm (height) with a hair diameter of 1000 μm. The preliminary experimental results demonstrate that the maximal mechanical sensitivity of the flow sensor is approximately 7.41 Hz/(m/s)² at a resonant frequency of 22 kHz for a hair height of 9 mm and increases by 2.42 times as the hair height extends from 3 mm to 9 mm. Simultaneously, a detection limit of 3.23 mm/s air flow amplitude at 60 Hz is confirmed. The proposed flow sensor has great application prospects in micro-autonomous systems and technology, self-stabilizing micro-air vehicles, and environmental monitoring.

  18. Cost-effectiveness analysis of population-based screening of hepatocellular carcinoma: Comparing ultrasonography with two-stage screening

    PubMed Central

    Kuo, Ming-Jeng; Chen, Hsiu-Hsi; Chen, Chi-Ling; Fann, Jean Ching-Yuan; Chen, Sam Li-Sheng; Chiu, Sherry Yueh-Hsia; Lin, Yu-Min; Liao, Chao-Sheng; Chang, Hung-Chuen; Lin, Yueh-Shih; Yen, Amy Ming-Fang

    2016-01-01

    AIM: To assess the cost-effectiveness of two population-based hepatocellular carcinoma (HCC) screening programs: a two-stage biomarker-ultrasound method and mass screening using abdominal ultrasonography (AUS). METHODS: In this study, we applied a Markov decision model with a societal perspective and a lifetime horizon for general population-based cohorts in an area with high HCC incidence, such as Taiwan. The accuracy of biomarkers and ultrasonography was estimated from published meta-analyses. The costs of surveillance, diagnosis, and treatment were based on a combination of published literature, Medicare payments, and medical expenditure at the National Taiwan University Hospital. The main outcome measure was cost per life-year gained with a 3% annual discount rate. RESULTS: The results show that mass screening using AUS was associated with an incremental cost-effectiveness ratio of USD 39,825 per life-year gained, whereas two-stage screening was associated with an incremental cost-effectiveness ratio of USD 49,733 per life-year gained, as compared with no screening. Screening programs with an initial screening age of 50 years and a biennial screening interval were the most cost-effective. These findings were sensitive to the costs of the screening tools and the specificity of biomarker screening. CONCLUSION: Mass screening using AUS is more cost-effective than two-stage biomarker-ultrasound screening. The optimal strategy is an initial screening age of 50 years with a 2-year inter-screening interval. PMID:27022228
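
    The comparison hinges on the incremental cost-effectiveness ratio (ICER): incremental cost divided by incremental life-years versus a comparator. The sketch below shows the arithmetic with hypothetical per-person discounted totals; the numbers are placeholders, not the study's inputs.

```python
# ICER arithmetic with hypothetical (already discounted) per-person totals.
def icer(cost, life_years, cost_ref, life_years_ref):
    """Incremental cost-effectiveness ratio vs. a reference strategy (USD per life-year)."""
    return (cost - cost_ref) / (life_years - life_years_ref)

no_screening = (1000.0, 20.00)   # (discounted cost, discounted life-years)
aus_screen   = (3000.0, 20.05)   # mass abdominal ultrasonography
two_stage    = (3500.0, 20.05)   # biomarker first, then ultrasound

print("AUS vs no screening:      ", icer(*aus_screen, *no_screening))
print("Two-stage vs no screening:", icer(*two_stage, *no_screening))
```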

  19. Interval sampling methods and measurement error: a computer simulation.

    PubMed

    Wirth, Oliver; Slaven, James; Taylor, Matthew A

    2014-01-01

    A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
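
    A minimal sketch of such a simulation is shown below: a random event stream is generated on a discretized timeline, and the three sampling methods are scored against the true fraction of time the event was occurring. It illustrates the methods' well-known tendencies (partial-interval recording overestimates, whole-interval recording underestimates); all parameters are illustrative.

```python
# Interval sampling methods scored against true event occupancy (illustrative).
import numpy as np

rng = np.random.default_rng(3)

def simulate(obs_len=600.0, interval=10.0, dt=0.1, rate=0.02, mean_dur=8.0):
    n = int(obs_len / dt)
    on = np.zeros(n, dtype=bool)
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate)        # gap to next event onset
        if t >= obs_len:
            break
        d = rng.exponential(mean_dur)           # event duration
        on[int(t / dt):min(n, int((t + d) / dt))] = True
    k = int(interval / dt)
    blocks = on[:(n // k) * k].reshape(-1, k)   # one row per observation interval
    mts = blocks[:, -1].mean()                  # MTS: score the interval's last instant
    pir = blocks.any(axis=1).mean()             # PIR: any occurrence in the interval
    wir = blocks.all(axis=1).mean()             # WIR: event spans the whole interval
    return on.mean(), mts, pir, wir

true_frac, mts, pir, wir = simulate()
print(f"true={true_frac:.3f}  MTS={mts:.3f}  PIR={pir:.3f}  WIR={wir:.3f}")
```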

  20. Teaching basic life support with an automated external defibrillator using the two-stage or the four-stage teaching technique.

    PubMed

    Bjørnshave, Katrine; Krogh, Lise Q; Hansen, Svend B; Nebsbjerg, Mette A; Thim, Troels; Løfgren, Bo

    2018-02-01

    Laypersons often hesitate to perform basic life support (BLS) and use an automated external defibrillator (AED) because of a self-perceived lack of knowledge and skills. Training may reduce the barrier to intervene, and reduced training time and costs may allow training of more laypersons. The aim of this study was to compare the acquisition of BLS/AED skills and self-evaluated BLS/AED skills after instructor-led training with a two-stage versus a four-stage teaching technique. Laypersons were randomized to either two-stage or four-stage teaching technique courses. Immediately after training, the participants were tested in a simulated cardiac arrest scenario to assess their BLS/AED skills. Skills were assessed using the European Resuscitation Council BLS/AED assessment form. The primary endpoint was passing the test (17 of 17 skills adequately performed). A prespecified noninferiority margin of 20% was used. The two-stage teaching technique (n=72, pass rate 57%) was noninferior to the four-stage technique (n=70, pass rate 59%), with a difference in pass rates of -2%; 95% confidence interval: -18 to 15%. Neither were there significant differences between the two-stage and four-stage groups in chest compression rate (114±12 vs. 115±14/min), chest compression depth (47±9 vs. 48±9 mm), or number of sufficient rescue breaths between compression cycles (1.7±0.5 vs. 1.6±0.7). In both groups, all participants believed that their training had improved their skills. Teaching laypersons BLS/AED using the two-stage teaching technique was noninferior to the four-stage teaching technique, with a pass rate difference of -2% (95% confidence interval: -18 to 15%).
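
    The noninferiority comparison can be reproduced with a Wald confidence interval for a difference in proportions, checked against the 20% margin. In the sketch below the counts (41/72 and 41/70) are back-calculated from the reported group sizes and pass rates, so they are an assumption rather than data taken from the paper.

```python
# Wald CI for a difference in pass rates, checked against a noninferiority margin.
from math import sqrt

def ni_test(x1, n1, x2, n2, margin=0.20, z=1.96):
    p1, p2 = x1 / n1, x2 / n2                  # two-stage, four-stage pass rates
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    lo, hi = diff - z * se, diff + z * se
    return diff, (lo, hi), lo > -margin        # noninferior if CI excludes -margin

# Counts back-calculated from the reported pass rates (an assumption)
diff, ci, noninferior = ni_test(41, 72, 41, 70)
print(f"diff={diff:+.2%}, 95% CI=({ci[0]:+.0%}, {ci[1]:+.0%}), noninferior={noninferior}")
```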

  1. An efficient two-stage approach for image-based FSI analysis of atherosclerotic arteries

    PubMed Central

    Rayz, Vitaliy L.; Mofrad, Mohammad R. K.; Saloner, David

    2010-01-01

    Patient-specific biomechanical modeling of atherosclerotic arteries has the potential to aid clinicians in characterizing lesions and determining optimal treatment plans. To attain high levels of accuracy, recent models use medical imaging data to determine plaque component boundaries in three dimensions, and fluid–structure interaction is used to capture mechanical loading of the diseased vessel. As the plaque components and vessel wall are often highly complex in shape, constructing a suitable structured computational mesh is very challenging and can require a great deal of time. Models based on unstructured computational meshes require relatively less time to construct and are capable of accurately representing plaque components in three dimensions. These models unfortunately require additional computational resources and computing time for accurate and meaningful results. A two-stage modeling strategy based on unstructured computational meshes is proposed to achieve a reasonable balance between meshing difficulty and computational resource and time demands. In this method, a coarse-grained simulation of the full arterial domain is used to guide and constrain a fine-scale simulation of a smaller region of interest within the full domain. Results for a patient-specific carotid bifurcation model demonstrate that the two-stage approach can afford large savings in both the time for mesh generation and the time and resources needed for computation. The effects of solid and fluid domain truncation were explored, and were shown to minimally affect the accuracy of the stress fields predicted with the two-stage approach. PMID:19756798

  2. Modeling of a Sequential Two-Stage Combustor

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.; Liu, N.-S.; Gallagher, J. R.; Ryder, R. C.; Brankovic, A.; Hendricks, J. A.

    2005-01-01

    A sequential two-stage, natural gas fueled power generation combustion system is modeled to examine the fundamental aerodynamic and combustion characteristics of the system. The modeling methodology includes CAD-based geometry definition, and combustion computational fluid dynamics analysis. Graphical analysis is used to examine the complex vortical patterns in each component, identifying sources of pressure loss. The simulations demonstrate the importance of including the rotating high-pressure turbine blades in the computation, as this results in direct computation of combustion within the first turbine stage, and accurate simulation of the flow in the second combustion stage. The direct computation of hot-streaks through the rotating high-pressure turbine stage leads to improved understanding of the aerodynamic relationships between the primary and secondary combustors and the turbomachinery.

  3. A Two-Stage Multi-Agent Based Assessment Approach to Enhance Students' Learning Motivation through Negotiated Skills Assessment

    ERIC Educational Resources Information Center

    Chadli, Abdelhafid; Bendella, Fatima; Tranvouez, Erwan

    2015-01-01

    In this paper we present an agent-based evaluation approach in the context of multi-agent simulation learning systems. Our evaluation model is based on a two-stage assessment approach: (1) a distributed skill evaluation combining agents and fuzzy set theory; and (2) a negotiation-based evaluation of students' performance during a training…

  4. A Bayesian-based two-stage inexact optimization method for supporting stream water quality management in the Three Gorges Reservoir region.

    PubMed

    Hu, X H; Li, Y P; Huang, G H; Zhuang, X W; Ding, X W

    2016-05-01

    In this study, a Bayesian-based two-stage inexact optimization (BTIO) method is developed for supporting water quality management by coupling Bayesian analysis with interval two-stage stochastic programming (ITSP). The BTIO method is capable of addressing uncertainties caused by insufficient inputs to the water quality model as well as uncertainties expressed as probabilistic distributions and interval numbers. The BTIO method is applied to a real case of water quality management in the Xiangxi River basin in the Three Gorges Reservoir region to seek optimal water quality management schemes under various uncertainties. Interval solutions for production patterns under a range of probabilistic water quality constraints have been generated. The results demonstrate compromises between system benefit and system failure risk due to inherent uncertainties that exist in various system components. Moreover, information about pollutant emissions is obtained, which would help managers to adjust production patterns of regional industry and local policies considering the interactions of water quality requirements, economic benefit, and industry structure.

  5. Optimal land use management for soil erosion control by using an interval-parameter fuzzy two-stage stochastic programming approach.

    PubMed

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well represented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, in China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were obtained for a 15-year planning horizon. Finally, the maximum net economic benefit, with an interval value of [1.197, 6.311] × 10⁹ $, was obtained, as well as the corresponding land use allocations in the three planning periods. The resulting soil erosion was found to be reduced and controlled at a tolerable level over the watershed. Thus, the results confirm that the developed model is a useful tool for implementing land use management: not only does it allow local decision makers to optimize land use allocation, but it can also help to answer how to accomplish land use changes.
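
    A toy interval-parameter two-stage stochastic program in the spirit of this model is sketched below: a first-stage allocation target x, scenario-wise recourse variables y_s, and a net-benefit coefficient known only as an interval [c_lo, c_hi]. Solving the two bounding submodels, one per interval endpoint, yields an interval-valued objective, as in the ITSP framework. All numbers are illustrative.

```python
# Toy interval-parameter two-stage stochastic LP solved at both interval endpoints.
import numpy as np
from scipy.optimize import linprog

p = np.array([0.3, 0.5, 0.2])    # scenario probabilities
w = np.array([3.0, 5.0, 7.0])    # scenario resource availability
penalty = 8.0                    # unit recourse penalty when x exceeds supply

def solve(c):
    # Variables [x, y1, y2, y3]; minimize -c*x + sum_s p_s*penalty*y_s
    obj = np.concatenate([[-c], p * penalty])
    # Recourse constraint y_s >= x - w_s, written as x - y_s <= w_s
    A_ub = np.hstack([np.ones((3, 1)), -np.eye(3)])
    res = linprog(obj, A_ub=A_ub, b_ub=w, bounds=[(0, 10)] + [(0, None)] * 3)
    return res.x[0], -res.fun

for c in (4.0, 6.0):             # lower / upper endpoint of the benefit interval
    x_opt, value = solve(c)
    print(f"c={c}: first-stage target x={x_opt:.2f}, expected net benefit={value:.2f}")
```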

  6. Optimal Land Use Management for Soil Erosion Control by Using an Interval-Parameter Fuzzy Two-Stage Stochastic Programming Approach

    NASA Astrophysics Data System (ADS)

    Han, Jing-Cheng; Huang, Guo-He; Zhang, Hua; Li, Zhong

    2013-09-01

    Soil erosion is one of the most serious environmental and public health problems, and such land degradation can be effectively mitigated through performing land use transitions across a watershed. Optimal land use management can thus provide a way to reduce soil erosion while achieving the maximum net benefit. However, optimized land use allocation schemes are not always successful since uncertainties pertaining to soil erosion control are not well represented. This study applied an interval-parameter fuzzy two-stage stochastic programming approach to generate optimal land use planning strategies for soil erosion control based on an inexact optimization framework, in which various uncertainties were reflected. The modeling approach can incorporate predefined soil erosion control policies, and address inherent system uncertainties expressed as discrete intervals, fuzzy sets, and probability distributions. The developed model was demonstrated through a case study in the Xiangxi River watershed, in China's Three Gorges Reservoir region. Land use transformations were employed as decision variables, and based on these, the land use change dynamics were obtained for a 15-year planning horizon. Finally, the maximum net economic benefit, with an interval value of [1.197, 6.311] × 10⁹ $, was obtained, as well as the corresponding land use allocations in the three planning periods. The resulting soil erosion was found to be reduced and controlled at a tolerable level over the watershed. Thus, the results confirm that the developed model is a useful tool for implementing land use management: not only does it allow local decision makers to optimize land use allocation, but it can also help to answer how to accomplish land use changes.

  7. Confidence intervals for the first crossing point of two hazard functions.

    PubMed

    Cheng, Ming-Yen; Qiu, Peihua; Tan, Xianming; Tu, Dongsheng

    2009-12-01

    The phenomenon of crossing hazard rates is common in clinical trials with time-to-event endpoints. Many methods have been proposed for testing equality of hazard functions against a crossing-hazards alternative. However, there have been relatively few approaches available in the literature for point or interval estimation of the crossing time point. The problem of constructing confidence intervals for the first crossing time point of two hazard functions is considered in this paper. After reviewing a recent procedure based on Cox proportional hazards modeling with a Box-Cox transformation of the time to event, a nonparametric procedure using a kernel smoothing estimate of the hazard ratio is proposed. Both procedures are evaluated by Monte Carlo simulations and applied to two clinical trial datasets.

  8. A manufacturing quality assessment model based-on two stages interval type-2 fuzzy logic

    NASA Astrophysics Data System (ADS)

    Purnomo, Muhammad Ridwan Andi; Helmi Shintya Dewi, Intan

    2016-01-01

    This paper presents the development of an assessment model for manufacturing quality using Interval Type-2 Fuzzy Logic (IT2-FL). The proposed model is built on one of the building blocks of sustainable supply chain management (SSCM), namely the benefits of SCM, and focuses on quality. The proposed model can be used to predict the quality level of the production chain in a company. The quality of production will affect the quality of the product. In practice, production quality is unique to every type of production system; hence, expert opinion plays a major role in developing the assessment model. The model becomes more complicated when the data contain ambiguity and uncertainty. In this study, IT2-FL is used to model the ambiguity and uncertainty. A case study taken from a company in Yogyakarta shows that the proposed manufacturing quality assessment model works well in determining the quality level of production.

  9. Point estimation following two-stage adaptive threshold enrichment clinical trials.

    PubMed

    Kimani, Peter K; Todd, Susan; Renfro, Lindsay A; Stallard, Nigel

    2018-05-31

    Recently, several study designs incorporating treatment effect assessment in biomarker-based subpopulations have been proposed. Most statistical methodologies for such designs focus on the control of type I error rate and power. In this paper, we have developed point estimators for clinical trials that use the two-stage adaptive enrichment threshold design. The design consists of two stages: in stage 1, patients are recruited from the full population, and stage 1 outcome data are then used to perform an interim analysis to decide whether the trial continues to stage 2 with the full population or a subpopulation. The subpopulation is defined based on one of the candidate threshold values of a numerical predictive biomarker. To estimate the treatment effect in the selected subpopulation, we have derived unbiased estimators, shrinkage estimators, and estimators that estimate bias and subtract it from the naive estimate. We recommend one of the unbiased estimators. However, since none of the estimators dominated in all simulation scenarios based on both bias and mean squared error, an alternative strategy would be to use a hybrid estimator whose form depends on the subpopulation selected. This would require a simulation study of plausible scenarios before the trial.

  10. Development of gestation-specific reference intervals for thyroid hormones in normal pregnant Northeast Chinese women: What is the rational division of gestation stages for establishing reference intervals for pregnancy women?

    PubMed

    Liu, Jianhua; Yu, Xiaojun; Xia, Meng; Cai, Hong; Cheng, Guixue; Wu, Lina; Li, Qiang; Zhang, Ying; Sheng, Mengyuan; Liu, Yong; Qin, Xiaosong

    2017-04-01

    A laboratory- and region-specific, trimester-related reference interval for thyroid hormone assessment of pregnant women has been recommended, but whether division by trimester is suitable requires verification. Here, we sought to establish appropriate reference intervals of thyroid-related hormones and antibodies for normal pregnant women in Northeast China. A total of 947 pregnant women who underwent routine prenatal care were grouped via two methods. The first method entailed division by trimester: stages T1, T2, and T3. The second method entailed dividing the T1, T2, and T3 stages into two stages each: T1-1, T1-2, T2-1, T2-2, T3-1, and T3-2. Serum levels of TSH, FT3, FT4, Anti-TPO, and Anti-TG were measured by three detection systems. No significant differences were found in TSH values between the T1-1 group and the non-pregnant women group. However, the TSH value of the T1-1 group was significantly higher than that of the T1-2 group (P<0.05). The TSH values in stage T3-2 increased significantly compared to those in stage T3-1 as measured by the three different assays (P<0.05). FT4 and FT3 values decreased significantly in the T2-1 and T2-2 stages compared to the previous stage (P<0.05). The serum levels of Anti-TPO and Anti-TG did not differ significantly among the six stages. The diagnosis and treatment of thyroid dysfunction during pregnancy should be based on pregnancy- and method-specific reference intervals. More detailed staging is required to assess the thyroid function of pregnant women before 20 gestational weeks.

  11. Performance of toxicity probability interval based designs in contrast to the continual reassessment method

    PubMed Central

    Horton, Bethany Jablonski; Wages, Nolan A.; Conaway, Mark R.

    2016-01-01

    Toxicity probability interval designs have received increasing attention as a dose-finding method in recent years. In this study, we compared the two-stage, likelihood-based continual reassessment method (CRM), the modified toxicity probability interval (mTPI) design, and the Bayesian optimal interval design (BOIN) in order to evaluate each method's performance in dose selection for Phase I trials. We use several summary measures to compare the performance of these methods, including percentage of correct selection (PCS) of the true maximum tolerated dose (MTD), allocation of patients to doses at and around the true MTD, and an accuracy index. This index is an efficiency measure that describes the entire distribution of MTD selection and patient allocation by taking into account the distance between the true probability of toxicity at each dose level and the target toxicity rate. The simulation study considered a broad range of toxicity curves and various sample sizes. When considering PCS, we found that CRM outperformed the two competing methods in most scenarios, followed by BOIN, then mTPI. We observed a similar trend when considering the accuracy index for dose allocation, where CRM most often outperformed both mTPI and BOIN. These trends were more pronounced with an increasing number of dose levels. PMID:27435150

  12. One-stage versus two-stage exchange arthroplasty for infected total knee arthroplasty: a systematic review.

    PubMed

    Nagra, Navraj S; Hamilton, Thomas W; Ganatra, Sameer; Murray, David W; Pandit, Hemant

    2016-10-01

    Infection complicating total knee arthroplasty (TKA) has serious implications. Traditionally, the debate on whether one- or two-stage exchange arthroplasty is the optimum management of infected TKA has favoured two-stage procedures; however, a paradigm shift in opinion is emerging. This study aimed to establish whether current evidence supports one-stage revision for managing infected TKA based on reinfection rates and functional outcomes post-surgery. MEDLINE/PubMed and CENTRAL databases were reviewed for studies that compared one- and two-stage exchange arthroplasty TKA in more than ten patients with a minimum 2-year follow-up. From an initial sample of 796, five cohort studies with a total of 231 patients (46 single-stage/185 two-stage; median patient age 66 years, range 61-71 years) met the inclusion criteria. Overall, there were no significant differences in risk of reinfection following one- or two-stage exchange arthroplasty (OR -0.06, 95% confidence interval -0.13, 0.01). Subgroup analysis revealed that in studies published since 2000, one-stage procedures have a significantly lower reinfection rate. One study investigated functional outcomes and reported that one-stage surgery was associated with superior functional outcomes. Scarcity of data, inconsistent study designs, and disparities in surgical technique and antibiotic regimen limit the recommendations that can be made. Recent studies suggest one-stage exchange arthroplasty may provide superior outcomes, including lower reinfection rates and superior function, in select patients. Clinically, for some patients, one-stage exchange arthroplasty may represent the optimum treatment; however, patient selection criteria and key components of surgical and post-operative anti-microbial management remain to be defined. Level of evidence: III.

  13. Non-ideal magnetohydrodynamic simulations of the two-stage fragmentation model for cluster formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bailey, Nicole D.; Basu, Shantanu

    2014-01-01

    We model molecular cloud fragmentation with thin-disk, non-ideal magnetohydrodynamic simulations that include ambipolar diffusion and partial ionization that transitions from primarily ultraviolet-dominated to cosmic-ray-dominated regimes. These simulations are used to determine the conditions required for star clusters to form through a two-stage fragmentation scenario. Recent linear analyses have shown that the fragmentation length scales and timescales can undergo a dramatic drop across the column density boundary that separates the ultraviolet- and cosmic-ray-dominated ionization regimes. As found in earlier studies, the absence of an ionization drop and regular perturbations leads to a single-stage fragmentation on pc scales in transcritical clouds, so that the nonlinear evolution yields the same fragment sizes as predicted by linear theory. However, we find that a combination of initial transcritical mass-to-flux ratio, evolution through a column density regime in which the ionization drop takes place, and regular small perturbations to the mass-to-flux ratio is sufficient to cause a second stage of fragmentation during the nonlinear evolution. Cores of size ∼0.1 pc are formed within an initial fragment of ∼pc size. Regular perturbations to the mass-to-flux ratio also accelerate the onset of runaway collapse.

  14. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; and (3) the two-parameter log-normal distribution and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm or of the Newton-Raphson calculation of maximum-likelihood estimates. The methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by the central limit theorem; for the same confidence probability, the likelihood-based confidence limits were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.
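
    A compact sketch of the constrained-likelihood idea for the Gumbel case is given below: hold the T-year quantile q fixed, reparameterize the location as loc = q - scale*y_T, profile the likelihood over the scale, and find where the profile log-likelihood drops by half the chi-square cutoff. It uses bounded scalar minimization rather than the paper's constrained Nelder-Mead, and runs on synthetic data.

```python
# Profile-likelihood CI for the T-year flood under a Gumbel model (synthetic data).
import numpy as np
from scipy.optimize import minimize_scalar, brentq
from scipy.stats import gumbel_r, chi2

rng = np.random.default_rng(5)
data = gumbel_r.rvs(loc=100, scale=30, size=40, random_state=rng)
T = 100
yT = -np.log(-np.log(1 - 1 / T))   # Gumbel reduced variate for return period T

def prof_loglik(q):
    # With quantile q fixed, loc = q - scale*yT; profile over the scale only.
    def nll(log_s):
        s = np.exp(log_s)
        return -gumbel_r.logpdf(data, loc=q - s * yT, scale=s).sum()
    return -minimize_scalar(nll, bounds=(-2, 8), method="bounded").fun

loc_hat, scale_hat = gumbel_r.fit(data)       # unconstrained MLE
q_hat = loc_hat + scale_hat * yT              # MLE of the T-year flood
cut = prof_loglik(q_hat) - 0.5 * chi2.ppf(0.95, df=1)

lo = brentq(lambda q: prof_loglik(q) - cut, q_hat - 200, q_hat)
hi = brentq(lambda q: prof_loglik(q) - cut, q_hat, q_hat + 400)
print(f"100-year flood MLE={q_hat:.1f}, 95% profile CI=({lo:.1f}, {hi:.1f})")
```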

  15. Uncertainty analysis of neural network based flood forecasting models: An ensemble based approach for constructing prediction interval

    NASA Astrophysics Data System (ADS)

    Kasiviswanathan, K.; Sudheer, K.

    2013-05-01

    Artificial neural network (ANN) based hydrologic models have gained a lot of attention among water resources engineers and scientists, owing to their potential for accurate prediction of flood flows as compared to conceptual or physics-based hydrologic models. The ANN approximates the non-linear functional relationship among the complex hydrologic variables to arrive at river flow forecast values. Despite a large number of applications, there is still some criticism that ANN point predictions lack reliability since the uncertainty of the predictions is not quantified, which limits their use in practical applications. A major concern in applying traditional uncertainty analysis techniques to a neural network framework is its parallel computing architecture with large degrees of freedom, which makes uncertainty assessment a challenging task. Very few studies have considered assessment of the predictive uncertainty of ANN-based hydrologic models. In this study, a novel method is proposed that helps construct the prediction interval of an ANN flood forecasting model during calibration itself. The method is designed to have two stages of optimization during calibration: at stage 1, the ANN model is trained with a genetic algorithm (GA) to obtain the optimal set of weights and biases; during stage 2, the optimal variability of the ANN parameters (obtained in stage 1) is identified so as to create an ensemble of predictions. During the second stage, the optimization is performed with multiple objectives: (i) minimum residual variance for the ensemble mean, (ii) maximum number of measured data points falling within the estimated prediction interval, and (iii) minimum width of the prediction interval. The method is illustrated using a real-world case study of an Indian basin. The method was able to produce an ensemble that has an average prediction interval width of 23.03 m³/s, with 97.17% of the total validation data points (measured) lying within the interval. The derived …

  16. Assessing accuracy of point fire intervals across landscapes with simulation modelling

    Treesearch

    Russell A. Parsons; Emily K. Heyerdahl; Robert E. Keane; Brigitte Dorner; Joseph Fall

    2007-01-01

    We assessed accuracy in point fire intervals using a simulation model that sampled four spatially explicit simulated fire histories. These histories varied in fire frequency and size and were simulated on a flat landscape with two forest types (dry versus mesic). We used three sampling designs (random, systematic grids, and stratified). We assessed the sensitivity of...

  17. Two-stage atlas subset selection in multi-atlas based image segmentation.

    PubMed

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs that arise with large atlas collections of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. An atlas subset selection scheme is proposed to substitute a low-cost alternative for a significant portion of the computationally expensive full-fledged registration in the conventional scheme. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross-validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas …

  18. Optimal debulking targets in women with advanced stage ovarian cancer: a retrospective study of immediate versus interval debulking surgery.

    PubMed

    Altman, Alon D; Nelson, Gregg; Chu, Pamela; Nation, Jill; Ghatage, Prafull

    2012-06-01

    The objective of this study was to examine both overall and disease-free survival of patients with advanced stage ovarian cancer after immediate or interval debulking surgery, based on residual disease. We performed a retrospective chart review at the Tom Baker Cancer Centre in Calgary, Alberta of patients with pathologically confirmed stage III or IV ovarian cancer, fallopian tube cancer, or primary peritoneal cancer between 2003 and 2007. We collected data on the dates of diagnosis, recurrence, and death; cancer stage and grade; patients' age; surgery performed; and residual disease. One hundred ninety-two patients were included in the final analysis. The optimal debulking rate was 64.8% with immediate surgery and 85.9% with interval surgery. There were improved overall and disease-free survival rates for optimally debulked disease (< 1 cm) with both immediate and interval surgery (P < 0.001) compared to suboptimally debulked disease. Overall survival rates for optimally debulked disease were not significantly different between patients having immediate and interval surgery (P = 0.25). In the immediate surgery group, patients with microscopic residual disease had better disease-free survival (P = 0.015) and overall survival (P = 0.005) than patients with < 1 cm residual disease. In patients who had interval surgery, those with microscopic residual disease had better disease-free survival than those with < 1 cm residual disease (P = 0.05), but not better overall survival (P = 0.42). Patients with microscopic residual disease who had immediate surgery had a significantly better overall survival rate than those who had interval surgery (P = 0.034). In women with advanced stage ovarian cancer, the goal of surgery should be resection of disease to microscopic residual at the initial procedure, as this results in better overall survival than lesser degrees of resection. Further studies are required to determine optimal surgical management.

  19. Development of a two-stage membrane-based wash-water reclamation subsystem

    NASA Technical Reports Server (NTRS)

    Mccray, S. B.

    1988-01-01

    A two-stage membrane-based subsystem was designed and constructed to enable the recycle of wash waters generated in space. The first stage is a fouling-resistant tube-side-feed hollow-fiber ultrafiltration module, and the second stage is a spiral-wound reverse-osmosis module. Throughout long-term tests, the subsystem consistently produced high-quality permeate, processing actual wash water to 95 percent recovery.

  20. Estimating length of avian incubation and nestling stages in afrotropical forest birds from interval-censored nest records

    USGS Publications Warehouse

    Stanley, T.R.; Newmark, W.D.

    2010-01-01

In the East Usambara Mountains in northeast Tanzania, research on the effects of forest fragmentation and disturbance on nest survival in understory birds resulted in the accumulation of 1,002 nest records between 2003 and 2008 for 8 poorly studied species. Because information on the length of the incubation and nestling stages in these species is nonexistent or sparse, our objectives in this study were (1) to estimate the lengths of the incubation and nestling stages and (2) to compute nest survival using these estimates in combination with calculated daily survival probability. Because our data were interval censored, we developed and applied two new statistical methods to estimate stage length. In the 8 species studied, the incubation stage lasted 9.6-21.8 days and the nestling stage 13.9-21.2 days. Combining these results with estimates of daily survival probability, we found that nest survival ranged from 6.0% to 12.5%. We conclude that our methodology for estimating stage lengths from interval-censored nest records is a reasonable and practical approach in the presence of interval-censored data. © 2010 The American Ornithologists' Union.
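
    The nest-survival arithmetic in this record is easy to reproduce: overall survival is daily survival raised to the power of the total exposure days. A minimal Python sketch, with hypothetical stage lengths and a hypothetical daily survival chosen from the ranges reported above:

```python
# Nest survival as daily survival compounded over the full nesting period.
# The numbers below are hypothetical, picked from the ranges in the abstract.
daily_survival = 0.93
incubation_days, nestling_days = 14.0, 16.0

nest_survival = daily_survival ** (incubation_days + nestling_days)
print(f"nest survival: {nest_survival:.1%}")   # ~11.3%, inside the 6.0-12.5% range
```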

  1. Accelerated Monte Carlo Simulation on the Chemical Stage in Water Radiolysis using GPU

    PubMed Central

    Tian, Zhen; Jiang, Steve B.; Jia, Xun

    2018-01-01

    The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2. PMID:28323637
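
    The quoted speed-up factors follow directly from the reported timings; a quick sanity check in Python:

```python
# Reported timings: Geant4-DNA on one CPU core vs. gMicroMC on the GPU.
cpu_hours = [28.6, 26.8]
gpu_seconds = [599.2, 489.0]

for h, s in zip(cpu_hours, gpu_seconds):
    print(f"speed-up: {h * 3600.0 / s:.1f}x")
# ~171.8x and ~197.3x, consistent with the reported 171.1-197.2 range
# given rounding of the quoted timings.
```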

  2. Two-stage atlas subset selection in multi-atlas based image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu

    2015-06-15

Purpose: Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but also presents challenges in heterogeneous atlas quality and computational burden. This work aims to develop a novel two-stage method tailored to the needs of large atlas collections of varied quality, so that high-accuracy segmentation can be achieved at low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with a significant reduction in computation. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors have developed a novel two-stage atlas subset selection scheme that achieves high segmentation accuracy at substantially reduced computational cost.
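
    The selection logic can be sketched in a few lines of Python. This is only an illustration of the two-stage idea, not the authors' implementation: `cheap_relevance` and `refined_relevance` are hypothetical stand-ins for the low-cost and full-fledged registration-based metrics, and `m` plays the role of the derived augmented subset size.

```python
import numpy as np

def cheap_relevance(target, atlas):
    # Hypothetical preliminary metric, e.g. similarity after low-cost rigid registration.
    return -np.mean((target - atlas) ** 2)

def refined_relevance(target, atlas):
    # Hypothetical refined metric, e.g. similarity after full deformable registration.
    return -np.mean(np.abs(target - atlas))

def two_stage_select(target, atlases, m, k):
    """Stage 1: keep m atlases by the cheap metric; Stage 2: keep k by the refined one."""
    prelim = np.array([cheap_relevance(target, a) for a in atlases])
    augmented = np.argsort(prelim)[-m:]                       # augmented subset
    refined = {int(i): refined_relevance(target, atlases[i]) for i in augmented}
    return sorted(refined, key=refined.get)[-k:]              # fusion set
```

Only the `m` atlases surviving stage 1 ever incur the expensive refined metric, which is where the computational saving comes from; the paper's inference model is what justifies choosing `m` large enough for the truly relevant atlases to survive.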

  3. THTM: A template matching algorithm based on HOG descriptor and two-stage matching

    NASA Astrophysics Data System (ADS)

    Jiang, Yuanjie; Ruan, Li; Xiao, Limin; Liu, Xi; Yuan, Feng; Wang, Haitao

    2018-04-01

We propose a novel method for template matching named THTM - a template matching algorithm based on the HOG (histogram of oriented gradients) descriptor and two-stage matching. We rely on the fast construction of HOG features and on a two-stage matching process, which jointly yield a high-accuracy matching approach. THTM makes full use of HOG and introduces a second matching stage, whereas traditional methods match only once. Our contribution is to apply HOG to template matching successfully and to present a two-stage matching scheme that markedly improves matching accuracy over single-stage HOG-based matching. We analyze the key features of THTM and compare it with other commonly used alternatives on challenging real-world datasets. Experiments show that our method outperforms the comparison methods.
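
    A rough Python sketch of the coarse-then-fine idea, assuming `skimage` for the HOG descriptor; this illustrates HOG-based two-stage matching in general, not the THTM algorithm itself:

```python
import numpy as np
from skimage.feature import hog

def hog_vec(patch):
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def two_stage_match(image, template, stride=8, keep=5):
    th, tw = template.shape
    tfeat = hog_vec(template)
    # Stage 1: coarse scan on a sparse grid of candidate positions.
    coarse = sorted(
        (np.linalg.norm(hog_vec(image[y:y + th, x:x + tw]) - tfeat), y, x)
        for y in range(0, image.shape[0] - th + 1, stride)
        for x in range(0, image.shape[1] - tw + 1, stride))
    # Stage 2: dense re-scan in a small window around the best coarse hits.
    best = min(
        (np.linalg.norm(hog_vec(image[y:y + th, x:x + tw]) - tfeat), y, x)
        for _, cy, cx in coarse[:keep]
        for y in range(max(0, cy - stride), min(image.shape[0] - th, cy + stride) + 1)
        for x in range(max(0, cx - stride), min(image.shape[1] - tw, cx + stride) + 1))
    return best[1], best[2]          # (row, col) of the best match
```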

  4. Behavioral Assessment of Hearing in 2 to 4 Year-old Children: A Two-interval, Observer-based Procedure Using Conditioned Play-based Responses.

    PubMed

    Bonino, Angela Yarnell; Leibold, Lori J

    2017-01-23

    Collecting reliable behavioral data from toddlers and preschoolers is challenging. As a result, there are significant gaps in our understanding of human auditory development for these age groups. This paper describes an observer-based procedure for measuring hearing sensitivity with a two-interval, two-alternative forced-choice paradigm. Young children are trained to perform a play-based, motor response (e.g., putting a block in a bucket) whenever they hear a target signal. An experimenter observes the child's behavior and makes a judgment about whether the signal was presented during the first or second observation interval; the experimenter is blinded to the true signal interval, so this judgment is based solely on the child's behavior. These procedures were used to test 2 to 4 year-olds (n = 33) with no known hearing problems. The signal was a 1,000 Hz warble tone presented in quiet, and the signal level was adjusted to estimate a threshold corresponding to 71%-correct detection. A valid threshold was obtained for 82% of children. These results indicate that the two-interval procedure is both feasible and reliable for use with toddlers and preschoolers. The two-interval, observer-based procedure described in this paper is a powerful tool for evaluating hearing in young children because it guards against response bias on the part of the experimenter.
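
    The 71%-correct target is the convergence point of a two-down/one-up adaptive staircase (70.7% by Levitt's rule), one common way of adjusting the level; the paper's exact adaptive rule may differ. A toy Python simulation of such a track, with an assumed logistic psychometric function (threshold, slope, and step size are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(level_db, threshold=30.0, slope=0.1):
    # Assumed psychometric function for a two-interval task (50% chance floor).
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (level_db - threshold)))

level, step, run, reversals, last_dir = 60.0, 4.0, 0, [], 0
while len(reversals) < 8:
    if rng.random() < p_correct(level):      # correct trial
        run += 1
        if run < 2:
            continue                         # need 2 in a row to step down
        run, direction = 0, -1
    else:                                    # wrong trial: step up
        run, direction = 0, +1
    if last_dir and direction != last_dir:
        reversals.append(level)              # track direction reversals
    last_dir = direction
    level += direction * step

print("estimated 70.7%-correct level:", np.mean(reversals[-6:]))
```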

  5. Accelerated Monte Carlo simulation on the chemical stage in water radiolysis using GPU

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Jiang, Steve B.; Jia, Xun

    2017-04-01

    The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2.

  6. Accelerated Monte Carlo simulation on the chemical stage in water radiolysis using GPU.

    PubMed

    Tian, Zhen; Jiang, Steve B; Jia, Xun

    2017-04-21

    The accurate simulation of water radiolysis is an important step to understand the mechanisms of radiobiology and quantitatively test some hypotheses regarding radiobiological effects. However, the simulation of water radiolysis is highly time consuming, taking hours or even days to be completed by a conventional CPU processor. This time limitation hinders cell-level simulations for a number of research studies. We recently initiated efforts to develop gMicroMC, a GPU-based fast microscopic MC simulation package for water radiolysis. The first step of this project focused on accelerating the simulation of the chemical stage, the most time consuming stage in the entire water radiolysis process. A GPU-friendly parallelization strategy was designed to address the highly correlated many-body simulation problem caused by the mutual competitive chemical reactions between the radiolytic molecules. Two cases were tested, using a 750 keV electron and a 5 MeV proton incident in pure water, respectively. The time-dependent yields of all the radiolytic species during the chemical stage were used to evaluate the accuracy of the simulation. The relative differences between our simulation and the Geant4-DNA simulation were on average 5.3% and 4.4% for the two cases. Our package, executed on an Nvidia Titan black GPU card, successfully completed the chemical stage simulation of the two cases within 599.2 s and 489.0 s. As compared with Geant4-DNA that was executed on an Intel i7-5500U CPU processor and needed 28.6 h and 26.8 h for the two cases using a single CPU core, our package achieved a speed-up factor of 171.1-197.2.

  7. More accurate, calibrated bootstrap confidence intervals for correlating two autocorrelated climate time series

    NASA Astrophysics Data System (ADS)

    Olafsdottir, Kristin B.; Mudelsee, Manfred

    2013-04-01

Estimating Pearson's correlation coefficient between two time series, to evaluate the influence of one time-dependent variable on another, is one of the most frequently used statistical methods in the climate sciences. Various methods are used to estimate confidence intervals to support the correlation point estimate. Many of them make strong mathematical assumptions regarding distributional shape and serial correlation, which are rarely met. More robust statistical methods are needed to increase the accuracy of the confidence intervals. Bootstrap confidence intervals are estimated in the Fortran 90 program PearsonT (Mudelsee, 2003), whose main intention was to obtain an accurate confidence interval for the correlation coefficient between two time series by taking into account the serial dependence of the process that generated the data. However, Monte Carlo experiments show that the coverage accuracy for smaller data sizes can be improved. Here we adapt the PearsonT program into a new version, called PearsonT3, by calibrating the confidence interval to increase the coverage accuracy. Calibration is a bootstrap resampling technique that essentially performs a second bootstrap loop, i.e., resamples from the bootstrap resamples. Like the non-calibrated bootstrap confidence intervals, it offers robustness against the data distribution. Pairwise moving block bootstrap is used to preserve the serial correlation of both time series. The calibration is applied to standard-error-based bootstrap Student's t confidence intervals. The performance of the calibrated confidence intervals is examined with Monte Carlo simulations and compared with the performance of confidence intervals without calibration, that is, PearsonT. The coverage accuracy is evidently better for the calibrated confidence intervals, where the coverage error is acceptably small (i.e., within a few percentage points) already for data sizes as small as 20. One form of climate time series is output from numerical models
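
    A minimal Python sketch of the non-calibrated ingredient, a pairwise moving-block bootstrap with a standard-error-based Student's t interval; the calibration step described above would wrap this in a second bootstrap loop that adjusts the nominal level, omitted here for brevity:

```python
import numpy as np
from scipy.stats import t

def block_bootstrap_corr_ci(x, y, block_len=5, n_boot=2000, alpha=0.05, seed=1):
    """Pairwise moving-block bootstrap Student's t CI for Pearson's r."""
    rng = np.random.default_rng(seed)
    n = len(x)
    r_hat = np.corrcoef(x, y)[0, 1]
    n_blocks = int(np.ceil(n / block_len))
    r_boot = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n - block_len + 1, n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
        r_boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]   # pairs resampled together
    se = r_boot.std(ddof=1)
    q = t.ppf(1.0 - alpha / 2.0, df=n - 2)
    return r_hat - q * se, r_hat + q * se
```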

  8. Testing independence of bivariate interval-censored data using modified Kendall's tau statistic.

    PubMed

    Kim, Yuneung; Lim, Johan; Park, DoHwan

    2015-11-01

In this paper, we study a nonparametric procedure to test independence of bivariate interval-censored data, for both current status data (case 1 interval-censored data) and case 2 interval-censored data. To this end, we propose a score-based modification of Kendall's tau statistic for bivariate interval-censored data. Our modification defines the Kendall's tau statistic with expected numbers of concordant and discordant pairs of data. The performance of the modified approach is illustrated by simulation studies and an application to the AIDS study. We compare our method to alternative approaches such as the two-stage estimation method by Sun et al. (Scandinavian Journal of Statistics, 2006) and the multiple imputation method by Betensky and Finkelstein (Statistics in Medicine, 1999b). © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
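
    The flavour of an "expected concordance" version of Kendall's tau can be conveyed with a crude Monte Carlo sketch in Python; this is not the authors' score-based construction, just a simple variant that replaces each pairwise sign with its expectation under uniform draws from the censoring intervals:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_sign(a, b, n_draw=5000):
    """E[sign(U - V)] for U ~ Unif(a), V ~ Unif(b), with a and b as (lo, hi) intervals."""
    u = rng.uniform(a[0], a[1], n_draw)
    v = rng.uniform(b[0], b[1], n_draw)
    return np.mean(np.sign(u - v))

def interval_kendall_tau(X, Y):
    """X, Y: lists of (lower, upper) censoring intervals for the two variables."""
    n = len(X)
    s = sum(expected_sign(X[i], X[j]) * expected_sign(Y[i], Y[j])
            for i in range(n) for j in range(i + 1, n))
    return 2.0 * s / (n * (n - 1))   # near 0 under independence
```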

  9. Two-stage unilateral versus one-stage bilateral single-port sympathectomy for palmar and axillary hyperhidrosis†

    PubMed Central

    Ibrahim, Mohsen; Menna, Cecilia; Andreetti, Claudio; Ciccone, Anna Maria; D'Andrilli, Antonio; Maurizi, Giulio; Poggi, Camilla; Vanni, Camilla; Venuta, Federico; Rendina, Erino Angelo

    2013-01-01

OBJECTIVES Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed through either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. METHODS From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140, two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean time interval of 4 months between the procedures (two-stage group). RESULTS The mean postoperative follow-up period was 12.5 months (range: 1–24 months). After surgery, the hands and axillae of all patients were dry and warm. Sixteen (12%) patients of the one-stage group and 15 (11%) of the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients of the one-stage group and in 11 (8%) of the two-stage group. Compensatory sweating occurred in 25 (19%) patients of the one-stage group and in 6 (4%) of the two-stage group (P = 0.0001). No patients developed Horner's syndrome. CONCLUSIONS Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, permanently improving the quality of life in patients with palmar and axillary hyperhidrosis. PMID:23442937

  10. Two-stage unilateral versus one-stage bilateral single-port sympathectomy for palmar and axillary hyperhidrosis.

    PubMed

    Ibrahim, Mohsen; Menna, Cecilia; Andreetti, Claudio; Ciccone, Anna Maria; D'Andrilli, Antonio; Maurizi, Giulio; Poggi, Camilla; Vanni, Camilla; Venuta, Federico; Rendina, Erino Angelo

    2013-06-01

Video-assisted thoracoscopic sympathectomy is currently the best treatment for palmar and axillary hyperhidrosis. It can be performed through either one or two stages of surgery. This study aimed to evaluate the operative and postoperative results of two-stage unilateral vs one-stage bilateral thoracoscopic sympathectomy. From November 1995 to February 2011, 270 patients with severe palmar and/or axillary hyperhidrosis were recruited for this study. One hundred and thirty patients received one-stage bilateral, single-port video-assisted thoracoscopic sympathectomy (one-stage group) and 140, two-stage unilateral, single-port video-assisted thoracoscopic sympathectomy, with a mean time interval of 4 months between the procedures (two-stage group). The mean postoperative follow-up period was 12.5 months (range: 1-24 months). After surgery, the hands and axillae of all patients were dry and warm. Sixteen (12%) patients of the one-stage group and 15 (11%) of the two-stage group suffered from mild/moderate pain (P = 0.8482). The mean operative time was 38 ± 5 min in the one-stage group and 39 ± 8 min in the two-stage group (P = 0.199). Pneumothorax occurred in 8 (6%) patients of the one-stage group and in 11 (8%) of the two-stage group. Compensatory sweating occurred in 25 (19%) patients of the one-stage group and in 6 (4%) of the two-stage group (P = 0.0001). No patients developed Horner's syndrome. Both two-stage unilateral and one-stage bilateral single-port video-assisted thoracoscopic sympathectomies are effective, safe and minimally invasive procedures. Two-stage unilateral sympathectomy can be performed with a lower occurrence of compensatory sweating, permanently improving the quality of life in patients with palmar and axillary hyperhidrosis.

  11. Reference interval estimation: Methodological comparison using extensive simulations and empirical data.

    PubMed

    Daly, Caitlin H; Higgins, Victoria; Adeli, Khosrow; Grey, Vijay L; Hamid, Jemila S

    2017-12-01

To statistically compare and evaluate commonly used methods of estimating reference intervals and to determine which method is best, based on characteristics of the distribution of various data sets. Three approaches for estimating reference intervals, i.e. parametric, non-parametric, and robust, were compared with simulated Gaussian and non-Gaussian data. The hierarchy of the performances of each method was examined based on bias and measures of precision. The findings of the simulation study were illustrated through real data sets. In all Gaussian scenarios, the parametric approach provided the least biased and most precise estimates. In non-Gaussian scenarios, no single method provided the least biased and most precise estimates for both limits of a reference interval across all sample sizes, although the non-parametric approach performed best in most scenarios. The hierarchy of the performances of the three methods was affected only by sample size and skewness. Differences between reference interval estimates established by the three methods were inflated by variability. Whenever possible, laboratories should attempt to transform data to a Gaussian distribution and use the parametric approach to obtain optimal reference intervals. When this is not possible, laboratories should consider sample size and skewness as factors in their choice of reference interval estimation method. The consequences of false positives or false negatives may also serve as factors in this decision. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
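
    For reference, the two most common estimators are one-liners; a Python sketch with simulated data, illustrating the abstract's advice to transform toward Gaussian (here via Box-Cox) before using the parametric limits:

```python
import numpy as np
from scipy import stats

def reference_interval(values, method="parametric"):
    """Central 95% reference interval by two common approaches."""
    x = np.asarray(values, dtype=float)
    if method == "parametric":                 # assumes Gaussian data
        m, s = x.mean(), x.std(ddof=1)
        return m - 1.96 * s, m + 1.96 * s
    if method == "nonparametric":              # distribution-free percentiles
        return tuple(np.percentile(x, [2.5, 97.5]))
    raise ValueError(method)

raw = np.random.default_rng(2).lognormal(0.0, 0.5, 500)   # skewed lab values
xt, lam = stats.boxcox(raw)                               # transform toward Gaussian
print(reference_interval(xt, "parametric"))   # limits on the transformed scale;
                                              # back-transform them for reporting
```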

  12. Coupled simulation of CFD-flight-mechanics with a two-species-gas-model for the hot rocket staging

    NASA Astrophysics Data System (ADS)

    Li, Yi; Reimann, Bodo; Eggers, Thino

    2016-11-01

Hot rocket staging separates the lowest stage by directly igniting the motor of the continuing stage. During hot staging, the rocket stages move in a harsh dynamic environment. In this work, the hot-staging dynamics of a multistage rocket are studied using a coupled simulation of Computational Fluid Dynamics and Flight Mechanics. Plume modeling is crucial for a high-fidelity coupled simulation. A two-species-gas model is proposed to simulate the flow system of the rocket during staging: the free-stream is modeled as "cold air" and the exhausted plume from the continuing-stage motor is modeled with an equivalent calorically-perfect gas that approximates the properties of the plume at the nozzle exit. This gas model compromises well between computational accuracy and efficiency. In the coupled simulations, the Navier-Stokes equations are solved time-accurately in a moving system, with which the Flight Mechanics equations can be fully coupled. The Chimera mesh technique is utilized to deal with the relative motions of the separated stages. A few representative staging cases with different initial flight conditions of the rocket are studied with the coupled simulation. The torque caused by plume-induced flow separation at the aft wall of the continuing stage is captured during staging, which can assist the design of the rocket's controller. As the initial angle of attack of the rocket increases, the staging quality becomes evidently poorer, but the separated stages are generally stable when the initial angle of attack is small.

  13. Two-stage implant systems.

    PubMed

    Fritz, M E

    1999-06-01

    Since the advent of osseointegration approximately 20 years ago, there has been a great deal of scientific data developed on two-stage integrated implant systems. Although these implants were originally designed primarily for fixed prostheses in the mandibular arch, they have been used in partially dentate patients, in patients needing overdentures, and in single-tooth restorations. In addition, this implant system has been placed in extraction sites, in bone-grafted areas, and in maxillary sinus elevations. Often, the documentation of these procedures has lagged. In addition, most of the reports use survival criteria to describe results, often providing overly optimistic data. It can be said that the literature describes a true adhesion of the epithelium to the implant similar to adhesion to teeth, that two-stage implants appear to have direct contact somewhere between 50% and 70% of the implant surface, that the microbial flora of the two-stage implant system closely resembles that of the natural tooth, and that the microbiology of periodontitis appears to be closely related to peri-implantitis. In evaluations of the data from implant placement in all of the above-noted situations by means of meta-analysis, it appears that there is a strong case that two-stage dental implants are successful, usually showing a confidence interval of over 90%. It also appears that the mandibular implants are more successful than maxillary implants. Studies also show that overdenture therapy is valid, and that single-tooth implants and implants placed in partially dentate mouths have a success rate that is quite good, although not quite as high as in the fully edentulous dentition. It would also appear that the potential causes of failure in the two-stage dental implant systems are peri-implantitis, placement of implants in poor-quality bone, and improper loading of implants. There are now data addressing modifications of the implant surface to alter the percentage of

  14. Compact high-flux two-stage solar collectors based on tailored edge-ray concentrators

    NASA Astrophysics Data System (ADS)

    Friedman, Robert P.; Gordon, Jeffrey M.; Ries, Harald

    1995-08-01

Using the recently invented tailored edge-ray concentrator (TERC) approach for the design of compact two-stage high-flux solar collectors--a focusing primary reflector and a nonimaging TERC secondary reflector--we present: 1) a new primary reflector shape based on the TERC approach and a secondary TERC tailored to its particular flux map, such that more compact concentrators emerge at flux concentration levels in excess of 90% of the thermodynamic limit; and 2) calculations and ray-trace simulation results which demonstrate that V-cone approximations to a wide variety of TERCs attain the concentration of the TERC to within a few percent, and hence represent practical secondary concentrators that may be superior to corresponding compound parabolic concentrator or trumpet secondaries.

  15. Two-stage high frequency pulse tube refrigerator with base temperature below 10 K

    NASA Astrophysics Data System (ADS)

    Chen, Liubiao; Wu, Xianlin; Liu, Sixue; Zhu, Xiaoshuang; Pan, Changzhao; Guo, Jia; Zhou, Yuan; Wang, Junjie

    2017-12-01

This paper introduces our recent experimental results for pulse tube refrigerators driven by linear compressors. The working frequency is 23-30 Hz, much higher than in G-M type coolers (the developed cryocooler is hereafter called a high-frequency pulse tube refrigerator, HPTR). To achieve a temperature below 10 K, two types of two-stage configuration, gas coupled and thermally coupled, have been designed, built and tested. At present, both types can achieve a no-load temperature below 10 K using only one compressor. For the gas-coupled HPTR, the second stage achieves a cooling power of 16 mW at 10 K when the first stage carries a 400 mW heat load at 60 K, with a total input power of 400 W. For the thermally coupled HPTR, the designed cooling power of the first stage is 10 W at 80 K, and the second stage then reaches a temperature below 10 K with a total input power of 300 W. In the current preliminary experiment, liquid nitrogen is used in place of the first coaxial configuration as the precooling stage, and a no-load temperature of 9.6 K is achieved with a stainless-steel mesh regenerator. Using Er3Ni spheres with diameters of about 50-60 microns, simulation results show it is possible to achieve a temperature below 8 K. The configurations, phase shifters and regenerative materials of the two types of two-stage high-frequency pulse tube refrigerator are discussed, and some typical experimental results and considerations for achieving better performance are also presented in this paper.

  16. Two-stage damage diagnosis based on the distance between ARMA models and pre-whitening filters

    NASA Astrophysics Data System (ADS)

    Zheng, H.; Mita, A.

    2007-10-01

    This paper presents a two-stage damage diagnosis strategy for damage detection and localization. Auto-regressive moving-average (ARMA) models are fitted to time series of vibration signals recorded by sensors. In the first stage, a novel damage indicator, which is defined as the distance between ARMA models, is applied to damage detection. This stage can determine the existence of damage in the structure. Such an algorithm uses output only and does not require operator intervention. Therefore it can be embedded in the sensor board of a monitoring network. In the second stage, a pre-whitening filter is used to minimize the cross-correlation of multiple excitations. With this technique, the damage indicator can further identify the damage location and severity when the damage has been detected in the first stage. The proposed methodology is tested using simulation and experimental data. The analysis results clearly illustrate the feasibility of the proposed two-stage damage diagnosis methodology.
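
    A stripped-down Python sketch of the first stage, assuming `statsmodels`; for simplicity it fits AR models and uses a plain Euclidean distance between coefficient vectors as a stand-in for the paper's distance between ARMA models:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

def ar_coefficients(signal, order=8):
    # AR fit as a simple stand-in for the full ARMA fit.
    return AutoReg(signal, lags=order).fit().params[1:]

def damage_indicator(reference_signal, test_signal, order=8):
    """Distance between models fitted to baseline and current vibration records."""
    a = ar_coefficients(reference_signal, order)
    b = ar_coefficients(test_signal, order)
    return np.linalg.norm(a - b)     # a large distance suggests damage
```

A threshold on this indicator, learned from healthy-state data, would complete the detection stage; localization then uses the pre-whitening step described above.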

  17. Theoretical and experimental investigations on the cooling capacity distributions at the stages in the thermally-coupled two-stage Stirling-type pulse tube cryocooler without external precooling

    NASA Astrophysics Data System (ADS)

    Tan, Jun; Dang, Haizheng

    2017-03-01

The two-stage Stirling-type pulse tube cryocooler (SPTC) has the advantage of simultaneously providing cooling power at two different temperatures, and the ability to distribute this cooling capacity between the stages is significant for its practical applications. In this paper, a theoretical model of the thermally-coupled two-stage SPTC without external precooling is established based on the electric circuit analogy, taking real-gas effects into account, and simulations of both the cooling performance and the PV power distribution between stages are conducted. The results indicate that the PV power is inversely proportional to the acoustic impedance of each stage, and that the cooling capacity distribution is determined jointly by the cold-finger cooling efficiency and the PV power into each stage. Design methods for the cold fingers to achieve both the desired PV power and the desired cooling capacity distribution between the stages are summarized. The two-stage SPTC is developed and tested based on the above theoretical investigations, and the experimental results show that it can simultaneously achieve 0.69 W at 30 K and 3.1 W at 85 K with an electric input power of 330 W and a reject temperature of 300 K. The simulated and experimental results are consistent, and the theoretical investigations are thus experimentally verified.
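
    The stated proportionality lends itself to a small illustrative calculation in Python; the impedances and efficiencies below are invented numbers, not the paper's values:

```python
# PV power splits inversely with acoustic impedance; cooling capacity is
# efficiency times PV power. All numbers are hypothetical.
Z = {"stage1": 2.0e7, "stage2": 5.0e7}        # acoustic impedances
eta = {"stage1": 0.010, "stage2": 0.004}      # cold-finger cooling efficiencies
pv_total = 300.0                              # W of PV power available

inv_z = {k: 1.0 / z for k, z in Z.items()}
pv = {k: pv_total * v / sum(inv_z.values()) for k, v in inv_z.items()}
cooling = {k: eta[k] * pv[k] for k in pv}
print(pv)        # PV power per stage
print(cooling)   # cooling capacity per stage
```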

  18. SLS Core Stage Simulator

    NASA Image and Video Library

    2015-02-02

Christopher Crumbly, manager of the Spacecraft Payload Integration and Evolution Office, gave visitors an insider's perspective on the core stage simulator at Marshall and its importance to development of the Space Launch System.

  19. A two-stage flow-based intrusion detection model for next-generation networks.

    PubMed

    Umer, Muhammad Fahad; Sher, Muhammad; Bi, Yaxin

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results.

  20. A two-stage flow-based intrusion detection model for next-generation networks

    PubMed Central

    2018-01-01

    The next-generation network provides state-of-the-art access-independent services over converged mobile and fixed networks. Security in the converged network environment is a major challenge. Traditional packet and protocol-based intrusion detection techniques cannot be used in next-generation networks due to slow throughput, low accuracy and their inability to inspect encrypted payload. An alternative solution for protection of next-generation networks is to use network flow records for detection of malicious activity in the network traffic. The network flow records are independent of access networks and user applications. In this paper, we propose a two-stage flow-based intrusion detection system for next-generation networks. The first stage uses an enhanced unsupervised one-class support vector machine which separates malicious flows from normal network traffic. The second stage uses a self-organizing map which automatically groups malicious flows into different alert clusters. We validated the proposed approach on two flow-based datasets and obtained promising results. PMID:29329294

  1. Meta-analysis of Gaussian individual patient data: Two-stage or not two-stage?

    PubMed

    Morris, Tim P; Fisher, David J; Kenward, Michael G; Carpenter, James R

    2018-04-30

    Quantitative evidence synthesis through meta-analysis is central to evidence-based medicine. For well-documented reasons, the meta-analysis of individual patient data is held in higher regard than aggregate data. With access to individual patient data, the analysis is not restricted to a "two-stage" approach (combining estimates and standard errors) but can estimate parameters of interest by fitting a single model to all of the data, a so-called "one-stage" analysis. There has been debate about the merits of one- and two-stage analysis. Arguments for one-stage analysis have typically noted that a wider range of models can be fitted and overall estimates may be more precise. The two-stage side has emphasised that the models that can be fitted in two stages are sufficient to answer the relevant questions, with less scope for mistakes because there are fewer modelling choices to be made in the two-stage approach. For Gaussian data, we consider the statistical arguments for flexibility and precision in small-sample settings. Regarding flexibility, several of the models that can be fitted only in one stage may not be of serious interest to most meta-analysis practitioners. Regarding precision, we consider fixed- and random-effects meta-analysis and see that, for a model making certain assumptions, the number of stages used to fit this model is irrelevant; the precision will be approximately equal. Meta-analysts should choose modelling assumptions carefully. Sometimes relevant models can only be fitted in one stage. Otherwise, meta-analysts are free to use whichever procedure is most convenient to fit the identified model. © 2018 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
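
    The precision claim is easy to see in the fixed-effect case: with Gaussian summaries, the two-stage inverse-variance average and a one-stage weighted fit of a common mean coincide. A small numpy check with made-up study estimates:

```python
import numpy as np

est = np.array([0.30, 0.12, 0.25, 0.40])   # hypothetical study estimates
se = np.array([0.10, 0.08, 0.15, 0.12])    # and their standard errors
w = 1.0 / se**2

# Two-stage: combine the per-study estimates by inverse-variance weighting.
two_stage = (w * est).sum() / w.sum()

# One-stage analogue: weighted least squares fit of a single common mean.
X = np.sqrt(w)[:, None]                    # design column, pre-whitened
one_stage = np.linalg.lstsq(X, est * np.sqrt(w), rcond=None)[0][0]

print(two_stage, one_stage)                # identical up to rounding
```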

  2. Communication interval selection in distributed heterogeneous simulation of large-scale dynamical systems

    NASA Astrophysics Data System (ADS)

    Lucas, Charles E.; Walters, Eric A.; Jatskevich, Juri; Wasynczuk, Oleg; Lamm, Peter T.

    2003-09-01

    In this paper, a new technique useful for the numerical simulation of large-scale systems is presented. This approach enables the overall system simulation to be formed by the dynamic interconnection of the various interdependent simulations, each representing a specific component or subsystem such as control, electrical, mechanical, hydraulic, or thermal. Each simulation may be developed separately using possibly different commercial-off-the-shelf simulation programs thereby allowing the most suitable language or tool to be used based on the design/analysis needs. These subsystems communicate the required interface variables at specific time intervals. A discussion concerning the selection of appropriate communication intervals is presented herein. For the purpose of demonstration, this technique is applied to a detailed simulation of a representative aircraft power system, such as that found on the Joint Strike Fighter (JSF). This system is comprised of ten component models each developed using MATLAB/Simulink, EASY5, or ACSL. When the ten component simulations were distributed across just four personal computers (PCs), a greater than 15-fold improvement in simulation speed (compared to the single-computer implementation) was achieved.
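
    The communication-interval trade-off can be reproduced with a toy co-simulation in Python: two linear subsystems are integrated separately and exchange their interface variables only every H seconds, so each sees a stale copy of the other between exchanges. All coefficients are arbitrary:

```python
import numpy as np

def cosimulate(H, dt=1e-4, T=5.0):
    x1, x2 = 1.0, 0.0
    u12, u21 = x1, x2                        # last communicated values
    t, next_sync = 0.0, H
    while t < T:
        x1 += dt * (-1.0 * x1 + 0.5 * u21)   # subsystem 1 sees stale x2
        x2 += dt * (-0.8 * x2 + 0.5 * u12)   # subsystem 2 sees stale x1
        t += dt
        if t >= next_sync:                   # exchange interface variables
            u12, u21 = x1, x2
            next_sync += H
    return np.array([x1, x2])

ref = cosimulate(H=1e-4)                     # effectively continuous coupling
for H in (0.01, 0.1, 0.5):
    print(H, np.abs(cosimulate(H) - ref).max())   # error grows with H
```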

  3. A 45 ps time digitizer with a two-phase clock and dual-edge two-stage interpolation in a field programmable gate array device

    NASA Astrophysics Data System (ADS)

    Szplet, R.; Kalisz, J.; Jachna, Z.

    2009-02-01

We present a time digitizer with 45 ps resolution, integrated in a field programmable gate array (FPGA) device. The time interval measurement is based on the two-stage interpolation method. A dual-edge two-phase interpolator is driven by the on-chip synthesized 250 MHz clock with precise phase adjustment. An improved dual-edge double synchronizer was developed to control the main counter. The nonlinearity of the digitizer's transfer characteristic is identified and utilized by a dedicated hardware code processor for on-the-fly correction of the output data. Applying the presented ideas has resulted in a digitizer measurement uncertainty below 70 ps RMS over time intervals ranging from 0 to 1 s. The use of two-stage interpolation and a fast FIFO memory has allowed us to obtain a maximum measurement rate of five million measurements per second.
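
    The arithmetic behind counter-plus-interpolator (Nutt-style) measurement is simple, whatever the interpolator's internals; a Python sketch with invented numbers:

```python
# Coarse count of whole clock periods plus fine interpolator corrections.
T_CLK = 4e-9        # 250 MHz clock period
N = 12_345          # clock cycles counted between start and stop events
t_start = 1.35e-9   # start edge position within its clock period (interpolated)
t_stop = 0.40e-9    # stop edge position within its clock period (interpolated)

interval = N * T_CLK + t_start - t_stop
print(f"measured interval: {interval * 1e9:.3f} ns")
```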

  4. Numerical simulation analysis of four-stage mutation of solid-liquid two-phase grinding

    NASA Astrophysics Data System (ADS)

    Li, Junye; Liu, Yang; Hou, Jikun; Hu, Jinglei; Zhang, Hengfu; Wu, Guiling

    2018-03-01

In order to explore the numerical simulation of solid-liquid two-phase abrasive flow polishing in tubes with abrupt cross-section changes, a fourth-order abrupt-change tube was selected as the research object in this paper. Using fluid-mechanics software and the theory of solid-liquid two-phase flow dynamics, the mechanism by which abrasive flow machining (AFM) polishes a workpiece is studied. The dynamic pressure distribution and turbulence intensity of the abrasive flow field in the fourth-order abrupt-change tube are analyzed at different inlet pressures, and the influence of the inlet pressure on the polishing effect of the abrasive flow is discussed.

  5. Empirical likelihood-based confidence intervals for mean medical cost with censored data.

    PubMed

    Jeyarajah, Jenny; Qin, Gengsheng

    2017-11-10

    In this paper, we propose empirical likelihood methods based on influence function and jackknife techniques for constructing confidence intervals for mean medical cost with censored data. We conduct a simulation study to compare the coverage probabilities and interval lengths of our proposed confidence intervals with that of the existing normal approximation-based confidence intervals and bootstrap confidence intervals. The proposed methods have better finite-sample performances than existing methods. Finally, we illustrate our proposed methods with a relevant example. Copyright © 2017 John Wiley & Sons, Ltd.
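
    As a reminder of the jackknife ingredient (ignoring censoring, which is the hard part the paper actually addresses), a minimal Python sketch of a jackknife t-interval for a mean:

```python
import numpy as np
from scipy.stats import t

def jackknife_ci_mean(x, alpha=0.05):
    """Jackknife standard error and t interval for a sample mean (no censoring)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    loo = (x.sum() - x) / (n - 1)                 # leave-one-out means
    se = np.sqrt((n - 1) / n * ((loo - loo.mean()) ** 2).sum())
    q = t.ppf(1.0 - alpha / 2.0, df=n - 1)
    return x.mean() - q * se, x.mean() + q * se
```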

  6. Two-Stage Bayesian Model Averaging in Endogenous Variable Models*

    PubMed Central

    Lenkoski, Alex; Eicher, Theo S.; Raftery, Adrian E.

    2013-01-01

    Economic modeling in the presence of endogeneity is subject to model uncertainty at both the instrument and covariate level. We propose a Two-Stage Bayesian Model Averaging (2SBMA) methodology that extends the Two-Stage Least Squares (2SLS) estimator. By constructing a Two-Stage Unit Information Prior in the endogenous variable model, we are able to efficiently combine established methods for addressing model uncertainty in regression models with the classic technique of 2SLS. To assess the validity of instruments in the 2SBMA context, we develop Bayesian tests of the identification restriction that are based on model averaged posterior predictive p-values. A simulation study showed that 2SBMA has the ability to recover structure in both the instrument and covariate set, and substantially improves the sharpness of resulting coefficient estimates in comparison to 2SLS using the full specification in an automatic fashion. Due to the increased parsimony of the 2SBMA estimate, the Bayesian Sargan test had a power of 50 percent in detecting a violation of the exogeneity assumption, while the method based on 2SLS using the full specification had negligible power. We apply our approach to the problem of development accounting, and find support not only for institutions, but also for geography and integration as development determinants, once both model uncertainty and endogeneity have been jointly addressed. PMID:24223471
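
    The 2SLS backbone that 2SBMA extends is worth seeing concretely; a self-contained numpy sketch with synthetic data, where an unobserved confounder biases OLS but the instruments recover the true effect:

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """Classic 2SLS: project X onto the instruments Z, then regress y on the fit."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]    # second stage

rng = np.random.default_rng(3)
n = 1000
z = rng.normal(size=(n, 2))                            # instruments
e = rng.normal(size=n)                                 # unobserved confounder
x = z @ np.array([1.0, -0.5]) + e + rng.normal(size=n) # endogenous regressor
y = 2.0 * x + e + rng.normal(size=n)                   # true effect is 2.0

Z = np.column_stack([np.ones(n), z])
X = np.column_stack([np.ones(n), x])
print(two_stage_least_squares(y, X, Z))                # slope estimate near 2.0
```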

  7. Event- and interval-based measurement of stuttering: a review.

    PubMed

    Valente, Ana Rita S; Jesus, Luis M T; Hall, Andreia; Leahy, Margaret

    2015-01-01

Event- and interval-based measurements are two different ways of computing the frequency of stuttering. Interval-based methodology emerged as an alternative measure to overcome problems associated with reproducibility in the event-based methodology. No review has been made to study the effect of methodological factors on interval-based absolute reliability data or to compute the agreement between the two methodologies in terms of inter-judge, intra-judge and accuracy (i.e., correspondence between raters' scores and an established criterion). The aims were to provide a review related to the reproducibility of event-based and time-interval measurement, to verify the effect of methodological factors (training, experience, interval duration, sample presentation order and judgment conditions) on the agreement of time-interval measurement, and to determine whether it is possible to quantify the agreement between the two methodologies. The first two authors searched for articles on ERIC, MEDLINE, PubMed, B-on, CENTRAL and Dissertation Abstracts during January-February 2013 and retrieved 495 articles. Forty-eight articles were selected for review. Content tables were constructed with the main findings. Articles related to event-based measurements revealed inter- and intra-judge values greater than 0.70 and agreement percentages beyond 80%. The articles related to time-interval measures revealed that, in general, judges with more experience with stuttering presented significantly higher levels of intra- and inter-judge agreement. Inter- and intra-judge values exceeded the reference values for high reproducibility for both methodologies. Accuracy (regarding the closeness of raters' judgements to an established criterion) and intra- and inter-judge agreement were higher for trained groups when compared with non-trained groups. Sample presentation order and audio/video conditions did not result in differences in inter- or intra-judge results. A duration of 5 s for an interval appears to be

  8. Feasibility Study of Laboratory Simulation of Single-Stage-to-Orbit Vehicle Base Heating

    NASA Technical Reports Server (NTRS)

    Park, Chung Sik; Sharma, Surendra; Edwards, Thomas A. (Technical Monitor)

    1995-01-01

    The feasibility of simulating in a laboratory the heating environment of the base region of the proposed reusable single-stage-to-orbit vehicle during its ascent is examined. The propellant is assumed to consist of hydrocarbon (RP1), liquid hydrogen (LH2), and liquid oxygen (LO2), which produces CO and H2 as the main combustible components of the exhaust effluent. Since afterburning in the recirculating region can dictate the temperature of the base flowfield and ensuing heating phenomena, laboratory simulation focuses on the thermochemistry of the afterburning. By extrapolating the Saturn V flight data, the Damkohler number, in the base region with afterburning for SSTO vehicle, is estimated to be between 30 and 140. It is shown that a flow with a Damkohler number of 1.8 to 25 can be produced in an impulse ground test facility. Even with such a reduced Damkohler number, the experiment can adequately reproduce the main features of the flight environment.

  9. Selection of the initial design for the two-stage continual reassessment method.

    PubMed

    Jia, Xiaoyu; Ivanova, Anastasia; Lee, Shing M

    2017-01-01

In the two-stage continual reassessment method (CRM), model-based dose escalation is preceded by a pre-specified escalating sequence starting from the lowest dose level. This is appealing to clinicians because it allows a sufficient number of patients to be assigned to each of the lower dose levels before escalating to higher dose levels. While a theoretical framework to build the two-stage CRM has been proposed, the selection of the initial dose-escalation sequence, generally referred to as the initial design, remains arbitrary, either by specifying cohorts of three patients or by trial and error through extensive simulations. Motivated by a currently ongoing oncology dose-finding study for which clinicians explicitly stated their desire to assign at least one patient to each of the lower dose levels, we propose a systematic approach for selecting the initial design for the two-stage CRM. The initial design obtained using the proposed algorithm yields better operating characteristics than a cohort-of-three initial design with a calibrated CRM. The proposed algorithm simplifies the selection of the initial design for the two-stage CRM. Moreover, initial designs to be used as a reference for planning a two-stage CRM are provided.
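
    A bare-bones Python sketch of the second (model-based) stage, using the common one-parameter power model with a grid posterior; the skeleton, prior variance, and data are illustrative, and the stage-one escalating sequence is simply the history fed into the model:

```python
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.33, 0.50])  # prior DLT probability guesses
target = 0.25
a_grid = np.linspace(-3.0, 3.0, 601)
prior = np.exp(-0.5 * a_grid**2 / 1.34)              # N(0, 1.34), a common CRM prior

def crm_next_dose(doses, tox):
    """Power model p_i(a) = skeleton_i ** exp(a); posterior by grid integration."""
    like = np.ones_like(a_grid)
    for d, y in zip(doses, tox):
        p = skeleton[d] ** np.exp(a_grid)
        like *= p if y else 1.0 - p
    post = like * prior
    post /= post.sum()
    p_hat = [(skeleton[i] ** np.exp(a_grid) * post).sum() for i in range(len(skeleton))]
    return int(np.argmin(np.abs(np.array(p_hat) - target)))

# Initial escalating sequence assigned one patient per level until the first
# toxicity (doses are 0-indexed), after which the model takes over:
doses, tox = [0, 1, 2], [0, 0, 1]
print("next dose:", crm_next_dose(doses, tox))
```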

  10. Follow-up of early stage melanoma: specialist clinician perspectives on the functions of follow-up and implications for extending follow-up intervals.

    PubMed

    Rychetnik, Lucie; McCaffery, Kirsten; Morton, Rachael L; Thompson, John F; Menzies, Scott W; Irwig, Les

    2013-04-01

    There is limited evidence on the relative effectiveness of different follow-up schedules for patients with AJCC stage I or II melanoma, but less frequent follow-up than is currently recommended has been proposed. To describe melanoma clinicians' perspectives on the functions of follow-up, factors that influence follow-up intervals, and important considerations for extending intervals. Qualitative interviews with 16 clinicians (surgical oncologists, dermatologists, melanoma unit physicians) who conduct follow-up at two of Australia's largest specialist centers. Follow-up is conducted for early detection of recurrences or new primary melanomas, to manage patient anxiety, support patient self-care, and as part of shared care. Recommended intervals are based on guidelines but account for each patient's clinical risk profile, level of anxiety, patient education requirements, capacity to engage in skin self-examination, and how the clinician prefers to manage any suspicious lesions. To revise guidelines and implement change it is important to understand the rationale underpinning existing practice. Extended follow-up intervals for early stage melanoma are more likely to be adopted after the first year when patients are less anxious and sufficiently prepared to conduct self-examination. Clinicians may retain existing schedules for highly anxious patients or those unable to examine themselves. Copyright © 2012 Wiley Periodicals, Inc.

  11. A Two-Stage Composition Method for Danger-Aware Services Based on Context Similarity

    NASA Astrophysics Data System (ADS)

    Wang, Junbo; Cheng, Zixue; Jing, Lei; Ota, Kaoru; Kansen, Mizuo

Context-aware systems detect a user's physical and social contexts based on sensor networks and provide services that adapt to the user accordingly. Representing, detecting, and managing contexts are important issues in context-aware systems. Composition of contexts is a useful method for these tasks, since it can detect a context by automatically composing small pieces of information to discover a service. Danger-aware services are a kind of context-aware service which needs descriptions of the relations between a user and his/her surrounding objects and between users. However, existing composition methods show the following shortcomings when applied to danger-aware services: (1) they provide no explicit method for representing the composition of multiple users' contexts, and (2) they have no flexible reasoning mechanism based on the similarity of contexts, so they can only provide services that exactly follow predefined context-reasoning rules. Therefore, in this paper, we propose a two-stage composition method based on context similarity to solve the above problems. The first stage is composition of the useful information to represent the context of a single user. The second stage is composition of multiple users' contexts to provide services by considering the relations between users. Finally, the danger degree of the detected context is computed by using the context similarity between the detected context and the predefined context. Context is dynamically represented based on two-stage composition rules and a Situation-theory-based Ontology, which combines the advantages of Ontology and Situation theory. We implement the system in an indoor ubiquitous environment and evaluate it through two experiments with human subjects. The experimental results show that the method is effective and that the accuracy of danger detection is acceptable for a danger-aware system.

  12. Pulsations Induced by Vibrations in Aircraft Engine Two-Stage Pump

    NASA Astrophysics Data System (ADS)

    Gafurov, S. A.; Salmina, V. A.; Handroos, H.

    2018-01-01

This paper describes the phenomenon of induced pressure pulsations inside a two-stage aircraft engine pump. The considered pump consists of a screw-centrifugal stage and a gear stage. The paper describes the causes of loading on the two-stage pump's elements. A number of hypotheses about the generation of pressure pulsations inside the pump were considered, with the main focus on phenomena that are not related to the pump's mode of operation. The analysis showed that pump vibrations, as well as self-oscillations of pump elements, are the main causes of trailing-vortex generation. The analysis was conducted by means of FEM and CFD simulations as well as experimental investigations to obtain the natural frequencies and the flow structure inside the screw-centrifugal stage. Adequate boundary conditions were chosen to perform accurate simulations, and cavitation and turbulence phenomena were also taken into account. The results show that the generated trailing vortices lead to high-frequency loading of the screw-centrifugal stage impeller and can be a cause of bearing damage.

  13. Observer-Pattern Modeling and Slow-Scale Bifurcation Analysis of Two-Stage Boost Inverters

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Wan, Xiaojin; Li, Weijie; Ding, Honghui; Yi, Chuanzhi

    2017-06-01

This paper deals with the modeling and bifurcation analysis of two-stage Boost inverters. Since nonlinear interaction between the source-stage converter and the load-stage inverter causes a "hidden" second-harmonic current at the input of the downstream H-bridge inverter, an observer-pattern modeling method is proposed by removing the time variance originating from both the fundamental frequency and the hidden second harmonics in the derived averaged equations. Based on the proposed observer-pattern model, the underlying mechanism of the slow-scale instability behavior is uncovered with the help of the eigenvalue analysis method. Eigenvalue sensitivity analysis is then used to select key system parameters of the two-stage Boost inverter, and behavior boundaries are given to provide design-oriented information for optimizing the circuit. Finally, these theoretical results are verified by numerical simulations and a circuit experiment.
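
    The eigenvalue test at the heart of such slow-scale stability analysis is generic; a small Python sketch with a made-up Jacobian of an averaged model:

```python
import numpy as np

def slow_scale_stable(jacobian):
    """Averaged-model stability: all eigenvalues strictly in the left half-plane."""
    eig = np.linalg.eigvals(jacobian)
    return bool(np.all(eig.real < 0.0)), eig

# Hypothetical 3x3 Jacobian evaluated at an operating point.
J = np.array([[-120.0,  50.0,   0.0],
              [ -80.0, -40.0,  30.0],
              [   0.0, -25.0, -10.0]])
stable, eig = slow_scale_stable(J)
print(stable, eig)   # sweeping a circuit parameter and re-testing traces
                     # out the behavior boundaries mentioned above
```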

  14. Mosquito population dynamics from cellular automata-based simulation

    NASA Astrophysics Data System (ADS)

    Syafarina, Inna; Sadikin, Rifki; Nuraini, Nuning

    2016-02-01

In this paper we present an innovative model for simulating mosquito-vector population dynamics. The simulation consists of two stages: demography and dispersal dynamics. For the demography simulation, we follow an existing model of the mosquito life cycle. For dispersal of the vector, we use a cellular-automata-based model. In the simulation, each individual vector can move to other grid cells via a random walk. Our model can also represent an immunity factor for each grid cell. We ran simulations to evaluate the model's correctness. Based on the simulations, we conclude that the model behaves correctly; however, it still needs to be improved with realistic parameter values to match real data.
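
    The dispersal stage reduces to a random walk on a grid and fits in a few lines of Python (demography and the immunity factor are omitted; grid size, population, and horizon are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
GRID = 50
mosquitoes = rng.integers(0, GRID, size=(200, 2))   # (row, col) of each adult

def disperse(pos):
    """One CA step: every vector random-walks to a neighbouring cell."""
    step = rng.integers(-1, 2, size=pos.shape)      # -1, 0 or +1 on each axis
    return np.clip(pos + step, 0, GRID - 1)

for day in range(30):
    mosquitoes = disperse(mosquitoes)

density = np.zeros((GRID, GRID), dtype=int)
np.add.at(density, (mosquitoes[:, 0], mosquitoes[:, 1]), 1)
print("max cell density after 30 days:", density.max())
```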

  15. Two-stage light-gas magnetoplasma accelerator for hypervelocity impact simulation

    NASA Astrophysics Data System (ADS)

    Khramtsov, P. P.; Vasetskij, V. A.; Makhnach, A. I.; Grishenko, V. M.; Chernik, M. Yu; Shikh, I. A.; Doroshko, M. V.

    2016-11-01

The development of macroparticle acceleration methods for high-speed impact simulation in the laboratory is a pressing problem, given the increasing duration of space flights and the need to provide adequate spacecraft protection against micrometeoroid and space debris impacts. This paper presents the results of an experimental study of a two-stage light-gas magnetoplasma launcher for accelerating a macroparticle, in which a coaxial plasma accelerator creates a shock wave in a high-pressure channel filled with light gas. Graphite and steel spheres with diameters of 2.5-4 mm were used as projectiles and were accelerated to speeds of 0.8-4.8 km/s. Particles were launched in vacuum. A speed-measurement method was developed for projectile velocity control; its error does not exceed 5%. The projectile's flight from the barrel and its collision with a target were recorded with a high-speed camera. Results of projectile collisions with elements of meteoroid shielding are presented. To increase the projectile velocity, the high-pressure channel should be filled with hydrogen; however, we used helium in our experiments for safety reasons. We therefore expect that the range of mass and velocity of the accelerated particles can be extended by using hydrogen as the accelerating gas.

  16. Hybrid neuro-heuristic methodology for simulation and control of dynamic systems over time interval.

    PubMed

    Woźniak, Marcin; Połap, Dawid

    2017-09-01

Simulation and positioning are very important aspects of computer-aided engineering, and both can be handled with traditional methods or with intelligent techniques. The difference between them lies in the way they process information. In the first case, to simulate an object in a particular state of action, we need to run the entire process to read the parameter values, which is inconvenient for objects whose simulation takes a long time, i.e. when the mathematical calculations are complicated. In the second case, an intelligent solution can support a dedicated mode of simulation, which enables us to simulate the object only in the situations that are necessary for the development process. We present research results on an intelligent simulation and control model of an electric-drive vehicle. For a dedicated simulation method based on intelligent computation, in which an evolutionary strategy simulates the states of the dynamic model, an intelligent system based on a devoted neural network is introduced to control the co-working modules while the motion evolves over a time interval. The experimental results show the implemented solution in a situation where a vehicle transports goods over an area with many obstacles, which provokes sudden changes in stability that may destroy the load. The applied neural-network controller prevents destruction of the load by adjusting characteristics such as pressure, acceleration, and stiffness voltage to absorb adverse changes of the ground. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. An inexact mixed risk-aversion two-stage stochastic programming model for water resources management under uncertainty.

    PubMed

    Li, W; Wang, B; Xie, Y L; Huang, G H; Liu, L

    2015-02-01

    Uncertainties exist in water resources systems, and traditional two-stage stochastic programming is risk-neutral: it compares random outcomes (e.g., total benefit) only through their expectations when identifying the best decisions. To deal with risk, a risk-aversion inexact two-stage stochastic programming model is developed for water resources management under uncertainty. The model is a hybrid of interval-parameter programming, the conditional value-at-risk (CVaR) measure, and a general two-stage stochastic programming framework. The method extends traditional two-stage stochastic programming by enabling uncertainties presented as probability density functions and discrete intervals to be incorporated directly within the optimization framework. It can not only inform decision makers about the benefits of an allocation plan but also measure the extreme expected loss in the second-stage penalty cost. The developed model was applied to a hypothetical case of water resources management. Results showed that it could help managers generate feasible and balanced risk-aversion allocation plans and analyze the trade-offs between system stability and economy.
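
    The risk-averse objective can be written as a single linear program using the Rockafellar-Uryasev formulation of CVaR over discrete scenarios. The sketch below is a generic toy version of such a model (one allocation target, three availability scenarios; all coefficients are made up, and the paper's interval parameters are omitted), not the authors' formulation:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Scenarios: available water and probabilities (illustrative numbers).
    w = np.array([3.0, 5.0, 7.0])   # available water per scenario
    p = np.array([0.3, 0.5, 0.2])   # scenario probabilities
    S = len(w)

    b, c = 10.0, 18.0               # unit benefit of target, unit shortage penalty
    lam, alpha = 0.5, 0.9           # risk weight and CVaR level

    # Decision vector: [x, eta, z_1..z_S, u_1..u_S]
    n = 2 + 2 * S
    cost = np.zeros(n)
    cost[0] = -b                             # maximize b*x
    cost[1] = lam                            # lam * eta
    cost[2:2 + S] = p * c                    # expected shortage penalty
    cost[2 + S:] = lam * p / (1.0 - alpha)   # lam/(1-alpha) * E[u]

    # Shortage definition: z_s >= x - w_s  <=>  x - z_s <= w_s
    A1 = np.zeros((S, n)); A1[:, 0] = 1.0
    A1[np.arange(S), 2 + np.arange(S)] = -1.0
    # CVaR excess: u_s >= c*z_s - eta  <=>  c*z_s - eta - u_s <= 0
    A2 = np.zeros((S, n)); A2[:, 1] = -1.0
    A2[np.arange(S), 2 + np.arange(S)] = c
    A2[np.arange(S), 2 + S + np.arange(S)] = -1.0

    A_ub = np.vstack([A1, A2])
    b_ub = np.concatenate([w, np.zeros(S)])
    bounds = [(0, 8.0), (None, None)] + [(0, None)] * (2 * S)

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    x, eta = res.x[0], res.x[1]
    print(f"allocation target x = {x:.2f}, CVaR threshold eta = {eta:.2f}")
    ```

    With these illustrative numbers the risk-neutral model (lam = 0) would target x = 5; the CVaR term pulls the target down to the safest availability level, x = 3.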

  18. Long-term outcome of cochlear implant in patients with chronic otitis media: one-stage surgery is equivalent to two-stage surgery.

    PubMed

    Jang, Jeong Hun; Park, Min-Hyun; Song, Jae-Jin; Lee, Jun Ho; Oh, Seung Ha; Kim, Chong-Sun; Chang, Sun O

    2015-01-01

    This study compared long-term speech performance after cochlear implantation (CI) between surgical strategies in patients with chronic otitis media (COM). Thirty patients with available open-set sentence scores measured more than 2 yr postoperatively were included: 17 received one-stage surgery (One-stage group) and the other 13 underwent two-stage surgery (Two-stage group). Preoperative inflammatory status, intraoperative procedures, and postoperative outcomes were compared. Among the 17 patients in the One-stage group, 12 underwent CI together with eradication of inflammation, 3 underwent CI without eradication of inflammation, and 2 underwent CI via the transcanal approach. The 13 patients in the Two-stage group received complete eradication of inflammation as the first-stage surgery, and CI was performed as the second-stage surgery after a mean interval of 8.2 months. Additional control of inflammation was performed in 2 patients at the second-stage surgery, for a cavity problem and cholesteatoma, respectively. There were 2 cases of electrode exposure as a postoperative complication in the Two-stage group; new electrode arrays were inserted and covered by local flaps. The open-set sentence scores of the Two-stage group were not significantly higher than those of the One-stage group at 1, 2, 3, and 5 yr postoperatively. Postoperative long-term speech performance is equivalent when either of the two surgical strategies is used to treat appropriately selected candidates.

  19. Two-stage vs single-stage management for concomitant gallstones and common bile duct stones

    PubMed Central

    Lu, Jiong; Cheng, Yao; Xiong, Xian-Ze; Lin, Yi-Xin; Wu, Si-Jia; Cheng, Nan-Sheng

    2012-01-01

    AIM: To evaluate the safety and effectiveness of two-stage vs single-stage management for concomitant gallstones and common bile duct stones. METHODS: Four databases, including PubMed, Embase, the Cochrane Central Register of Controlled Trials and the Science Citation Index up to September 2011, were searched to identify all randomized controlled trials (RCTs). Data were extracted from the studies by two independent reviewers. The primary outcomes were stone clearance from the common bile duct, postoperative morbidity and mortality. The secondary outcomes were conversion to other procedures, number of procedures per patient, length of hospital stay, total operative time, hospitalization charges, patient acceptance and quality of life scores. RESULTS: Seven eligible RCTs [five trials (n = 621) comparing preoperative endoscopic retrograde cholangiopancreatography (ERCP)/endoscopic sphincterotomy (EST) + laparoscopic cholecystectomy (LC) with LC + laparoscopic common bile duct exploration (LCBDE); two trials (n = 166) comparing postoperative ERCP/EST + LC with LC + LCBDE], comprising 787 patients in total, were included in the final analysis. The meta-analysis detected no statistically significant difference between the two groups in stone clearance from the common bile duct [risk ratios (RR) = -0.10, 95% confidence intervals (CI): -0.24 to 0.04, P = 0.17], postoperative morbidity (RR = 0.79, 95% CI: 0.58 to 1.10, P = 0.16), mortality (RR = 2.19, 95% CI: 0.33 to 14.67, P = 0.42), conversion to other procedures (RR = 1.21, 95% CI: 0.54 to 2.70, P = 0.39), length of hospital stay (MD = 0.99, 95% CI: -1.59 to 3.57, P = 0.45), or total operative time (MD = 12.14, 95% CI: -1.83 to 26.10, P = 0.09). Two-stage (LC + ERCP/EST) management clearly required more procedures per patient than single-stage (LC + LCBDE) management. CONCLUSION: Single-stage management is equivalent to two-stage management but requires fewer procedures; however, the patient's condition and the operator's expertise should be taken into account when choosing between them.

  20. Interval MULTIMOORA method with target values of attributes based on interval distance and preference degree: biomaterials selection

    NASA Astrophysics Data System (ADS)

    Hafezalkotob, Arian; Hafezalkotob, Ashkan

    2017-06-01

    A target-based MADM method covers beneficial and non-beneficial attributes as well as target values for some attributes. Such techniques can be considered comprehensive forms of MADM approaches, and they can also be applied to traditional decision-making problems in which only beneficial and non-beneficial attributes exist. In many practical selection problems some attributes have given target values, and both the decision matrix and the target-based attributes may be provided as intervals. Some target-based decision-making methods have recently been developed; however, a research gap remains for MADM techniques with target-based attributes under uncertainty of information. We extend the MULTIMOORA method to practical material selection problems in which material properties and their target values are given as interval numbers. We employ various concepts of interval computations to reduce degeneration of uncertain data. In this regard, we use interval arithmetic and introduce an innovative formula for the distance between interval numbers to create an interval target-based normalization technique. Furthermore, we use a pairwise preference matrix, based on the concept of the degree of preference of interval numbers, to calculate the maximum, minimum, and ranking of these numbers. Two decision-making problems regarding biomaterials selection for hip and knee prostheses are discussed. Preference-degree-based ranking lists for the subordinate parts of the extended MULTIMOORA method are generated by calculating the relative degrees of preference for the arranged assessment values of the biomaterials. The resulting rankings are compared with the outcomes of other target-based models in the literature.
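
    Two of the ingredients named above have compact closed forms. The sketch below shows one common possibility-degree formula for comparing interval numbers and a simple midpoint/half-width interval distance; these are standard textbook forms and may differ in detail from the formulas introduced in the paper:

    ```python
    def preference_degree(a, b):
        """P(A >= B) for intervals A=[a1,a2], B=[b1,b2]: one common
        overlap-based possibility degree (degenerate intervals handled)."""
        a1, a2 = a
        b1, b2 = b
        width = (a2 - a1) + (b2 - b1)
        if width == 0:                  # both degenerate (crisp numbers)
            return 1.0 if a1 >= b1 else 0.0
        num = max(0.0, a2 - b1) - max(0.0, a1 - b2)
        return min(1.0, max(0.0, num / width))

    def interval_distance(a, b):
        """A simple interval distance: Euclidean distance between the
        (midpoint, half-width) pairs (an assumption, not the paper's formula)."""
        ma, ra = (a[0] + a[1]) / 2, (a[1] - a[0]) / 2
        mb, rb = (b[0] + b[1]) / 2, (b[1] - b[0]) / 2
        return ((ma - mb) ** 2 + (ra - rb) ** 2) ** 0.5

    A, B = (2.0, 5.0), (3.0, 4.0)
    print(preference_degree(A, B))   # 0.5: heavily overlapping intervals
    print(interval_distance(A, B))   # distance driven by differing widths
    ```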

  1. Survival outcomes after radiation therapy for stage III non-small-cell lung cancer after adoption of computed tomography-based simulation.

    PubMed

    Chen, Aileen B; Neville, Bridget A; Sher, David J; Chen, Kun; Schrag, Deborah

    2011-06-10

    Technical studies suggest that computed tomography (CT)-based simulation improves the therapeutic ratio for thoracic radiation therapy (TRT), although few studies have evaluated its use or impact on outcomes. We used the Surveillance, Epidemiology and End Results (SEER)-Medicare linked data to identify CT-based simulation for TRT among Medicare beneficiaries diagnosed with stage III non-small-cell lung cancer (NSCLC) between 2000 and 2005. Demographic and clinical factors associated with use of CT simulation were identified, and the impact of CT simulation on survival was analyzed by using Cox models and propensity score analysis. The proportion of patients treated with TRT who had CT simulation increased from 2.4% in 1994 to 34.0% in 2000 to 77.6% in 2005. Of the 5,540 patients treated with TRT from 2000 to 2005, 60.1% had CT simulation. Geographic variation was seen in rates of CT simulation, with lower rates in rural areas and in the South and West compared with those in the Northeast and Midwest. Patients treated with chemotherapy were more likely to have CT simulation (65.2% v 51.2%; adjusted odds ratio, 1.67; 95% CI, 1.48 to 1.88; P < .01), although there was no significant association between use of surgery and CT simulation. Controlling for demographic and clinical characteristics, CT simulation was associated with lower risk of death (adjusted hazard ratio, 0.77; 95% CI, 0.73 to 0.82; P < .01) compared with conventional simulation. CT-based simulation has been widely, although not uniformly, adopted for the treatment of stage III NSCLC and is associated with higher survival among patients receiving TRT.

  2. Estimating accuracy of land-cover composition from two-stage cluster sampling

    USGS Publications Warehouse

    Stehman, S.V.; Wickham, J.D.; Fattorini, L.; Wade, T.D.; Baffetta, F.; Smith, J.H.

    2009-01-01

    Land-cover maps are often used to compute land-cover composition (i.e., the proportion or percent of area covered by each class), for each unit in a spatial partition of the region mapped. We derive design-based estimators of mean deviation (MD), mean absolute deviation (MAD), root mean square error (RMSE), and correlation (CORR) to quantify accuracy of land-cover composition for a general two-stage cluster sampling design, and for the special case of simple random sampling without replacement (SRSWOR) at each stage. The bias of the estimators for the two-stage SRSWOR design is evaluated via a simulation study. The estimators of RMSE and CORR have small bias except when sample size is small and the land-cover class is rare. The estimator of MAD is biased for both rare and common land-cover classes except when sample size is large. A general recommendation is that rare land-cover classes require large sample sizes to ensure that the accuracy estimators have small bias. © 2009 Elsevier Inc.
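
    Ignoring the design-based weighting and finite-population corrections developed in the paper, the four accuracy measures are simple plug-in statistics over the sampled units. A sketch with synthetic map and reference compositions (all data made up):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic land-cover composition for sampled units (proportion of one
    # class per unit): 'reference' is truth from the field sample, 'mapped'
    # comes from the land-cover map.
    reference = rng.beta(2, 8, size=100)
    mapped = np.clip(reference + rng.normal(0, 0.05, size=100), 0, 1)

    d = mapped - reference
    md = d.mean()                                # mean deviation (bias)
    mad = np.abs(d).mean()                       # mean absolute deviation
    rmse = np.sqrt((d ** 2).mean())              # root mean square error
    corr = np.corrcoef(mapped, reference)[0, 1]  # correlation

    print(f"MD={md:.4f} MAD={mad:.4f} RMSE={rmse:.4f} CORR={corr:.4f}")
    ```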

  3. Interval cancers in a population-based screening program for colorectal cancer in catalonia, Spain.

    PubMed

    Garcia, M; Domènech, X; Vidal, C; Torné, E; Milà, N; Binefa, G; Benito, L; Moreno, V

    2015-01-01

    Objective. To analyze interval cancers among participants in a screening program for colorectal cancer (CRC) over four screening rounds. Methods. The study population consisted of participants in a fecal occult blood test-based screening program from February 2000 to September 2010, with a 30-month follow-up (n = 30,480). We used hospital administration data to identify CRC. An interval cancer was defined as an invasive cancer diagnosed within 30 months of a negative screening result and before the next recommended examination. Gender, age, stage, and site distribution of interval cancers were compared with those in the screen-detected group. Results. Within the study period, 97 tumors were screen-detected and 74 tumors were diagnosed after a negative screening. In addition, 17 CRC (18.3%) were found after an inconclusive result and 2 cases (2.1%) were diagnosed within the surveillance interval. There was an increase in interval cancers over the four rounds (from 32.4% to 46.0%). When compared with screen-detected cancers, interval cancers were found predominantly in the rectum (OR: 3.66; 95% CI: 1.51-8.88) and at more advanced stages (P = 0.025). Conclusion. A substantial number of cancers are not detected through fecal occult blood test-based screening. Its limited sensitivity should be emphasized to ensure that individuals with symptoms are not falsely reassured.

  4. End-To-End Simulation of Launch Vehicle Trajectories Including Stage Separation Dynamics

    NASA Technical Reports Server (NTRS)

    Albertson, Cindy W.; Tartabini, Paul V.; Pamadi, Bandu N.

    2012-01-01

    The development of methodologies, techniques, and tools for analysis and simulation of stage separation dynamics is critically needed for successful design and operation of multistage reusable launch vehicles. As a part of this activity, the Constraint Force Equation (CFE) methodology was developed and implemented in the Program to Optimize Simulated Trajectories II (POST2). The objective of this paper is to demonstrate the capability of POST2/CFE to simulate a complete end-to-end mission. The vehicle configuration selected was the Two-Stage-To-Orbit (TSTO) Langley Glide Back Booster (LGBB) bimese configuration, an in-house concept consisting of a reusable booster and an orbiter having identical outer mold lines. The proximity and isolated aerodynamic databases used for the simulation were assembled using wind-tunnel test data for this vehicle. POST2/CFE simulation results are presented for the entire mission, from lift-off, through stage separation, orbiter ascent to orbit, and booster glide back to the launch site. Additionally, POST2/CFE stage separation simulation results are compared with results from industry standard commercial software used for solving dynamics problems involving multiple bodies connected by joints.

  5. A two-stage stochastic rule-based model to determine pre-assembly buffer content

    NASA Astrophysics Data System (ADS)

    Gunay, Elif Elcin; Kula, Ufuk

    2018-01-01

    This study considers the instant decision-making needs of automobile manufacturers for resequencing vehicles before final assembly (FA). We propose a rule-based two-stage stochastic model to determine the number of spare vehicles that should be kept in the pre-assembly buffer to restore a sequence altered by paint defects and upstream department constraints. The first stage of the model decides the spare vehicle quantities, while the second stage recovers the scrambled sequence with respect to pre-defined rules. The problem is solved by the sample average approximation (SAA) algorithm. We conduct a numerical study comparing the solutions of the heuristic model with optimal ones and provide the following insights: (i) as the mismatch between the paint entrance and scheduled sequences decreases, the rule-based heuristic model recovers the scrambled sequence as well as the optimal resequencing model; (ii) the rule-based model is more sensitive to the mismatch between the paint entrance and scheduled sequences when recovering the scrambled sequence; (iii) as the defect rate increases, the difference in recovery effectiveness between the rule-based heuristic and optimal solutions increases; (iv) as buffer capacity increases, the recovery effectiveness of the optimization model outperforms the heuristic model; (v) as expected, the rule-based model holds more inventory than the optimization model.
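
    SAA replaces the expectation in the first-stage objective with an average over sampled scenarios and optimizes the first-stage decision against that average. A stripped-down sketch of that loop (buffer sizing against binomial paint-defect scenarios; the cost structure and rates are invented, and the rule-based second stage is collapsed into a simple shortage count):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    SEQ_LEN = 30     # vehicles in the scheduled sequence (illustrative)
    DEFECT_P = 0.1   # paint-defect probability per vehicle (assumed)
    HOLD_COST = 1.0  # cost per spare vehicle held in the buffer
    MISS_COST = 5.0  # cost per defect that no spare could cover
    N_SCEN = 2000    # Monte Carlo sample size for the SAA

    def expected_cost(n_spares, scenarios):
        """First-stage holding cost plus sampled second-stage shortage cost."""
        shortage = np.maximum(scenarios - n_spares, 0)
        return HOLD_COST * n_spares + MISS_COST * shortage.mean()

    # Sample average approximation: one scenario = number of defective vehicles.
    scenarios = rng.binomial(SEQ_LEN, DEFECT_P, size=N_SCEN)
    best = min(range(SEQ_LEN + 1), key=lambda n: expected_cost(n, scenarios))
    print(f"SAA-optimal buffer content: {best} spare vehicles "
          f"(expected cost {expected_cost(best, scenarios):.2f})")
    ```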

  6. Combining evidence from multiple electronic health care databases: performances of one-stage and two-stage meta-analysis in matched case-control studies.

    PubMed

    La Gamba, Fabiola; Corrao, Giovanni; Romio, Silvana; Sturkenboom, Miriam; Trifirò, Gianluca; Schink, Tania; de Ridder, Maria

    2017-10-01

    Clustering of patients in databases is usually ignored in one-stage meta-analysis of multi-database studies using matched case-control data. The aim of this study was to compare bias and efficiency of such a one-stage meta-analysis with a two-stage meta-analysis. First, we compared the approaches by generating matched case-control data under 5 simulated scenarios, built by varying: (1) the exposure-outcome association; (2) its variability among databases; (3) the confounding strength of one covariate on this association; (4) its variability; and (5) the (heterogeneous) confounding strength of two covariates. Second, we made the same comparison using empirical data from the ARITMO project, a multiple database study investigating the risk of ventricular arrhythmia following the use of medications with arrhythmogenic potential. In our study, we specifically investigated the effect of current use of promethazine. Bias increased for one-stage meta-analysis with increasing (1) between-database variance of exposure effect and (2) heterogeneous confounding generated by two covariates. The efficiency of one-stage meta-analysis was slightly lower than that of two-stage meta-analysis for the majority of investigated scenarios. Based on ARITMO data, there were no evident differences between one-stage (OR = 1.50, CI = [1.08; 2.08]) and two-stage (OR = 1.55, CI = [1.12; 2.16]) approaches. When the effect of interest is heterogeneous, a one-stage meta-analysis ignoring clustering gives biased estimates. Two-stage meta-analysis generates estimates at least as accurate and precise as one-stage meta-analysis. However, in a study using small databases and rare exposures and/or outcomes, a correct one-stage meta-analysis becomes essential. Copyright © 2017 John Wiley & Sons, Ltd.
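
    For intuition, the two-stage approach amounts to fitting one model per database and then pooling the per-database estimates; below is a minimal fixed-effect inverse-variance pooling sketch on the log-OR scale. The odds ratios and CIs are invented, and a random-effects step would be added when heterogeneity matters:

    ```python
    import numpy as np

    # Stage 1 (assumed already done): per-database odds ratios and 95% CIs,
    # e.g. from conditional logistic regressions on the matched sets.
    ors = np.array([1.4, 1.7, 1.2])
    lo = np.array([0.9, 1.1, 0.7])
    hi = np.array([2.2, 2.6, 2.1])

    # Stage 2: inverse-variance pooling on the log-OR scale.
    log_or = np.log(ors)
    se = (np.log(hi) - np.log(lo)) / (2 * 1.96)  # SE recovered from the CI
    w = 1.0 / se ** 2
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))

    ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)
    print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI = [{ci[0]:.2f}; {ci[1]:.2f}]")
    ```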

  7. Dynamic detection-rate-based bit allocation with genuine interval concealment for binary biometric representation.

    PubMed

    Lim, Meng-Hui; Teoh, Andrew Beng Jin; Toh, Kar-Ann

    2013-06-01

    Biometric discretization is a key component in biometric cryptographic key generation. It converts an extracted biometric feature vector into a binary string via typical steps such as segmentation of each feature element into a number of labeled intervals, mapping of each interval-captured feature element onto a binary space, and concatenation of the resulting binary output of all feature elements into a binary string. Currently, the detection rate optimized bit allocation (DROBA) scheme is one of the most effective biometric discretization schemes in terms of its capability to assign binary bits dynamically to user-specific features with respect to their discriminability. However, we learn that DROBA suffers from potential discriminative feature misdetection and underdiscretization in its bit allocation process. This paper highlights such drawbacks and improves upon DROBA based on a novel two-stage algorithm: 1) a dynamic search method to efficiently recapture such misdetected features and to optimize the bit allocation of underdiscretized features and 2) a genuine interval concealment technique to alleviate crucial information leakage resulting from the dynamic search. Improvements in classification accuracy on two popular face data sets vindicate the feasibility of our approach compared with DROBA.

  8. Structure design of and experimental research on a two-stage laval foam breaker for foam fluid recycling.

    PubMed

    Wang, Jin-song; Cao, Pin-lu; Yin, Kun

    2015-07-01

    Environmentally friendly, economical and efficient antifoaming technology is the basis for recycling foam drilling fluid. The present study designed a novel two-stage Laval mechanical foam breaker that primarily uses the vacuum generated by the Coanda effect and the Laval principle to break foam. Numerical simulation results showed that the value and distribution of the negative pressure in the two-stage Laval foam breaker were larger than those of a normal foam breaker. Experimental results showed that the foam-breaking efficiency of the two-stage Laval foam breaker was higher than that of the normal foam breaker as the gas-to-liquid ratio and liquid flow rate changed. The foam-breaking efficiency of the normal foam breaker decreased rapidly with increasing foam stability, whereas that of the two-stage Laval foam breaker remained unchanged. Foam base fluid can be recycled using the two-stage Laval foam breaker, which would sharply reduce the cost of foam drilling and the waste disposals that adversely affect the environment.

  9. The Importance of Detailed Component Simulations in the Feedsystem Development for a Two-Stage-to Orbit Reusable Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Mazurkivich, Pete; Chandler, Frank; Grayson, Gary

    2005-01-01

    To meet the requirements for the 2nd Generation Reusable Launch Vehicle (RLV), a unique propulsion feed system concept was identified using crossfeed between the booster and orbiter stages that could reduce the Two-Stage-to-Orbit (TSTO) vehicle weight and development cost by approximately 25%. A Main Propulsion System (MPS) crossfeed water demonstration test program was configured to address all the activities required to reduce the risks for the MPS crossfeed system. A transient, one-dimensional system simulation was developed for the subscale crossfeed water flow tests. To ensure accurate representation of the crossfeed valve's dynamics in the system model, a high-fidelity, three-dimensional, computational fluid-dynamics (CFD) model was employed. The results from the CFD model were used to specify the valve's flow characteristics in the system simulation. This yielded a crossfeed system model that was anchored to the specific valve hardware and achieved good agreement with the measured test data. These results allowed the transient models to be correlated and validated and used for full scale mission predictions. The full scale model simulations indicate crossfeed is viable, with the system pressure disturbances at the crossfeed transition being less than experienced by the propulsion system during engine start and shutdown transients.

  10. An approach for sample size determination of average bioequivalence based on interval estimation.

    PubMed

    Chiang, Chieh; Hsiao, Chin-Fu

    2017-03-30

    In 1992, the US Food and Drug Administration declared that two drugs demonstrate average bioequivalence (ABE) if the log-transformed mean difference of pharmacokinetic responses lies in (-0.223, 0.223). The most widely used approach for assessing ABE is the two one-sided tests procedure; more specifically, ABE is concluded when a 100(1 - 2α)% confidence interval for the mean difference falls within (-0.223, 0.223). Bioequivalence studies are usually conducted with a crossover design; however, when the half-life of a drug is long, a parallel design may be preferred. In this study, a two-sided interval estimation - such as Satterthwaite's, Cochran-Cox's, or Howe's approximation - is used for assessing parallel ABE. We show that the asymptotic joint distribution of the lower and upper confidence limits is bivariate normal, and thus the sample size can be calculated from the asymptotic power so that the confidence interval falls within (-0.223, 0.223). Simulation studies show that the proposed method achieves sufficient empirical power, and a real example illustrates the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
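
    The power being targeted here is the probability that the whole CI lands inside (-0.223, 0.223). The paper computes it from the asymptotic bivariate normal law of the CI limits; the brute-force Monte Carlo sketch below targets the same quantity with a Welch-type CI (the σ, the true difference of zero, and the 80% power goal are illustrative assumptions, not the paper's settings):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    THETA = 0.223   # ABE limits on the log scale: (-0.223, 0.223)

    def abe_power(n, sigma=0.3, delta=0.0, alpha=0.05, n_sim=2000):
        """Empirical probability that the 90% Welch CI for the mean
        difference of log-responses falls inside (-THETA, THETA)."""
        wins = 0
        for _ in range(n_sim):
            x = rng.normal(delta, sigma, n)  # test formulation (log scale)
            y = rng.normal(0.0, sigma, n)    # reference formulation
            d = x.mean() - y.mean()
            vx, vy = x.var(ddof=1) / n, y.var(ddof=1) / n
            se = np.sqrt(vx + vy)
            # Welch-Satterthwaite degrees of freedom
            df = se ** 4 / (vx ** 2 / (n - 1) + vy ** 2 / (n - 1))
            t = stats.t.ppf(1 - alpha, df)   # 100(1 - 2*alpha)% CI
            wins += (d - t * se > -THETA) and (d + t * se < THETA)
        return wins / n_sim

    n = 10
    while abe_power(n) < 0.80:  # smallest n per arm with >= 80% power
        n += 2
    print(f"required sample size per arm: {n}")
    ```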

  11. A measure of uncertainty regarding the interval constraint of normal mean elicited by two stages of a prior hierarchy.

    PubMed

    Kim, Hea-Jung

    2014-01-01

    This paper considers a hierarchical screened Gaussian model (HSGM) for Bayesian inference in normal models when an interval constraint on the mean parameter space needs to be incorporated in the modeling but the restriction itself is uncertain. An objective measure of the uncertainty regarding the interval constraint, as accounted for by the HSGM, is proposed for Bayesian inference. For this purpose, we derive a maximum entropy prior of the normal mean that elicits the uncertainty regarding the interval constraint, and then obtain the uncertainty measure by considering the relationship between the maximum entropy prior and the marginal prior of the normal mean in the HSGM. A Bayesian estimation procedure for the HSGM is developed, and two numerical illustrations of the properties of the uncertainty measure are provided.

  12. DEVELOPMENT OF COLD CLIMATE HEAT PUMP USING TWO-STAGE COMPRESSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shen, Bo; Rice, C Keith; Abdelaziz, Omar

    2015-01-01

    This paper uses a well-regarded, hardware-based heat pump system model to investigate a two-stage economizing cycle for cold climate heat pump applications. The two-stage compression cycle has two variable-speed compressors. The high-stage compressor was modelled using a compressor map, and the low-stage compressor was studied experimentally using calorimeter testing. A single-stage heat pump system was modelled as the baseline. System performance predictions are compared between the two-stage and single-stage systems, and special considerations for designing a cold climate heat pump are addressed at both the system and component levels.

  13. Two-stage neural-network-based technique for Urdu character two-dimensional shape representation, classification, and recognition

    NASA Astrophysics Data System (ADS)

    Megherbi, Dalila B.; Lodhi, S. M.; Boulenouar, A. J.

    2001-03-01

    This work is in the field of automated document processing and addresses the representation and recognition of Urdu characters using Fourier descriptors and a neural network architecture. In particular, a two-stage neural network scheme is used to classify 36 Urdu characters into seven sub-classes, each characterized by one of seven proposed fuzzy features specific to Urdu characters. We show that Fourier descriptors combined with a neural network provide a remarkably simple way to draw definite conclusions from vague, ambiguous, noisy or imprecise information. We illustrate the concept of regions of interest and describe a framing method that makes the proposed Urdu character recognition technique robust and invariant to scaling and translation. Character rotation is handled with the Hotelling transform, which is based on the eigenvalue decomposition of the covariance matrix of an image and provides a method for determining the orientation of the major axis of an object within an image. Finally, experimental results are presented to show the power and robustness of the proposed two-stage neural-network-based technique for Urdu character recognition, its fault tolerance, and its high recognition accuracy.
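
    The Hotelling (principal-component) step reduces to an eigen-decomposition of the covariance matrix of the foreground pixel coordinates. A minimal sketch of that orientation estimate (synthetic diagonal stroke, not Urdu glyph data):

    ```python
    import numpy as np

    def major_axis_angle(image):
        """Orientation of an object's major axis via the Hotelling transform:
        eigen-decomposition of the covariance matrix of foreground pixel
        coordinates (angle in degrees, measured from the x-axis)."""
        ys, xs = np.nonzero(image)              # foreground pixel coordinates
        coords = np.stack([xs, ys]).astype(float)
        cov = np.cov(coords)                    # 2x2 coordinate covariance
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
        major = eigvecs[:, np.argmax(eigvals)]  # direction of largest variance
        return np.degrees(np.arctan2(major[1], major[0]))

    # A thin diagonal stroke: its major axis should come out near 45 degrees.
    img = np.zeros((64, 64), dtype=np.uint8)
    for i in range(10, 50):
        img[i, i] = 1
    print(f"estimated orientation: {major_axis_angle(img):.1f} deg")
    ```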

  14. SU-E-J-128: Two-Stage Atlas Selection in Multi-Atlas-Based Image Segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, T; Ruan, D

    2015-06-15

    Purpose: In the new era of big data, multi-atlas-based image segmentation is challenged by heterogeneous atlas quality and the high computation burden of extensive atlas collections, demanding efficient identification of the most relevant atlases. This study aims to develop a two-stage atlas selection scheme to achieve computational economy with a performance guarantee. Methods: We develop a low-cost fusion set selection scheme by introducing a preliminary selection to trim the full atlas collection into an augmented subset, alleviating the need for extensive full-fledged registrations. More specifically, fusion set selection is performed in two successive steps: preliminary selection and refinement. An augmented subset is first roughly selected from the whole atlas collection with a simple registration scheme and the corresponding preliminary relevance metric; the augmented subset is further refined into the desired fusion set size, using full-fledged registration and the associated relevance metric. The main novelty of this work is the introduction of an inference model relating the preliminary and refined relevance metrics, based on which the augmented subset size is rigorously derived to ensure the desired atlases survive the preliminary selection with high probability. Results: The performance and complexity of the proposed two-stage atlas selection method were assessed using a collection of 30 prostate MR images. It achieved comparable segmentation accuracy to the conventional one-stage method with full-fledged registration, but significantly reduced computation time to 1/3 (from 30.82 to 11.04 min per segmentation). Compared with an alternative one-stage cost-saving approach, the proposed scheme yielded superior performance, with mean and median DSC of (0.83, 0.85) versus (0.74, 0.78). Conclusion: This work has developed a model-guided two-stage atlas selection scheme that achieves significant cost reduction while guaranteeing high segmentation accuracy.

  15. Identifying Issues and Concerns with the Use of Interval-Based Systems in Single Case Research Using a Pilot Simulation Study

    ERIC Educational Resources Information Center

    Ledford, Jennifer R.; Ayres, Kevin M.; Lane, Justin D.; Lam, Man Fung

    2015-01-01

    Momentary time sampling (MTS), whole interval recording (WIR), and partial interval recording (PIR) are commonly used in applied research. We discuss potential difficulties with analyzing data when these systems are used and present results from a pilot simulation study designed to determine the extent to which these issues are likely to be…
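
    The difficulties the study targets are easy to reproduce in simulation: PIR systematically overestimates and WIR underestimates prevalence, while MTS is roughly unbiased but noisy. A small sketch of such a simulation (two-state Markov behavior stream; the session length, mean bout length, and interval size are arbitrary choices, not the study's settings):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def simulate_session(duration_s=600, p_on=0.3, mean_bout_s=8.0):
        """Second-by-second binary behavior stream with the target
        prevalence, generated as a simple two-state Markov chain."""
        p_stop = 1.0 / mean_bout_s
        p_start = p_stop * p_on / (1 - p_on)
        state, stream = 0, np.empty(duration_s, dtype=int)
        for t in range(duration_s):
            state = (rng.random() < p_start) if state == 0 else (rng.random() >= p_stop)
            stream[t] = int(state)
        return stream

    def score(stream, interval_s=10):
        """MTS, PIR and WIR estimates of prevalence from one stream."""
        intervals = stream.reshape(-1, interval_s)
        mts = intervals[:, -1].mean()             # occurrence at interval end
        pir = (intervals.max(axis=1) > 0).mean()  # any occurrence in interval
        wir = (intervals.min(axis=1) > 0).mean()  # occurrence for whole interval
        return mts, pir, wir

    est = np.array([score(simulate_session()) for _ in range(500)])
    for name, col in zip(("MTS", "PIR", "WIR"), est.T):
        print(f"{name}: mean estimate {col.mean():.3f} (true prevalence 0.30)")
    ```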

  16. Factors affecting duration of the expulsive stage of parturition and piglet birth intervals in sows with uncomplicated, spontaneous farrowings.

    PubMed

    van Dijk, A J; van Rens, B T T M; van der Lende, T; Taverne, M A M

    2005-10-15

    Modern pig farming is still confronted with high perinatal piglet losses, which are mainly attributed to factors associated with the progress of piglet expulsion. The aim of this study was therefore to identify sow and piglet factors affecting the duration of the expulsive stage of farrowing and piglet birth intervals in spontaneously farrowing sows of five different breeds. In total 211 litters were investigated. Breed significantly affected the duration of the expulsive stage: the shortest duration was found in Large White x Meishan F2 crossbred litters and the longest in Dutch Landrace litters. No effect of parity on the duration of the expulsive stage was found. An increase in litter size (P<0.01), an increase in the number of stillborn piglets per litter (P<0.05) and a decrease in gestation length (P<0.05, independently of litter size) all resulted in an increased duration of the expulsive stage of farrowing. A curvilinear relationship between birth interval and rank (relative position in the birth order) of the piglets was found. In addition, piglet birth intervals increased with increasing birth weight (P<0.001). Stillborn (P<0.01) and posteriorly presented (P<0.05) piglets were delivered after significantly longer birth intervals than liveborn and anteriorly presented piglets. These results on sow and piglet factors affecting the duration of the expulsive stage and piglet birth intervals contribute to an increased insight into the (patho)physiological aspects of perinatal mortality in pigs.

  17. One- and two-stage Arrhenius models for pharmaceutical shelf life prediction.

    PubMed

    Fan, Zhewen; Zhang, Lanju

    2015-01-01

    One of the most challenging aspects of pharmaceutical development is the demonstration and estimation of chemical stability. It is imperative that pharmaceutical products be stable for two or more years, and long-term stability studies are required to support such a shelf-life claim at registration. During drug development, however, an accelerated stability study under stressed storage conditions is preferred for facilitating formulation and dosage form selection, since it quickly yields a prediction of shelf life under ambient storage conditions. Such a prediction typically uses the Arrhenius equation, which describes the relationship between degradation rate and temperature (and humidity). Existing methods usually rely on the assumption of normality of the errors, and shelf-life projection is usually based on the confidence band of a regression line; however, the coverage probability of a method is often overlooked or under-reported. In this paper, we introduce two nonparametric bootstrap procedures for shelf-life estimation based on accelerated stability testing and compare them with a one-stage nonlinear Arrhenius prediction model. Our simulation results demonstrate that the one-stage nonlinear Arrhenius method has significantly lower coverage than nominal levels. Our bootstrap method gave better coverage and led to shelf-life predictions closer to those based on long-term stability data.
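
    A minimal version of the two-stage Arrhenius pipeline with a residual bootstrap is sketched below. Everything here is an assumption for illustration: zero-order loss kinetics, a 95%-of-label specification limit, synthetic data at three stressed temperatures, and a deliberately simple resampling scheme (the paper's bootstrap procedures differ in detail):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    R = 8.314   # gas constant, J/(mol K)

    # Synthetic accelerated-stability data: assay (% label) vs time (months)
    # at three stressed temperatures, with zero-order loss (assumed kinetics).
    temps_C = np.array([40.0, 50.0, 60.0])
    times = np.array([0.0, 1.0, 2.0, 3.0, 6.0])
    true_k = 0.8 * np.exp(-80e3 / R * (1 / (temps_C + 273.15) - 1 / 323.15))
    assay = 100.0 - np.outer(true_k, times) + rng.normal(0, 0.3, (3, len(times)))

    def shelf_life(assay_matrix, target_C=25.0, spec=95.0):
        """Stage 1: degradation rate per temperature. Stage 2: Arrhenius fit,
        then extrapolate the rate to target_C; shelf life = time to hit spec."""
        ks = np.array([-np.polyfit(times, row, 1)[0] for row in assay_matrix])
        slope, intercept = np.polyfit(1.0 / (temps_C + 273.15), np.log(ks), 1)
        k_target = np.exp(intercept + slope / (target_C + 273.15))
        return (100.0 - spec) / k_target

    # Nonparametric bootstrap on the stage-1 residuals (a simple scheme):
    fit = np.array([np.polyval(np.polyfit(times, row, 1), times) for row in assay])
    resid = assay - fit
    boot = []
    for _ in range(2000):
        pseudo = fit + resid[:, rng.integers(0, len(times), len(times))]
        boot.append(shelf_life(pseudo))
    lo = np.percentile(boot, 5)
    print(f"point estimate {shelf_life(assay):.1f} months; "
          f"bootstrap 5th percentile {lo:.1f} months (conservative shelf life)")
    ```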

  18. Determination of prospective displacement-based gate threshold for respiratory-gated radiation delivery from retrospective phase-based gate threshold selected at 4D CT simulation.

    PubMed

    Vedam, S; Archambault, L; Starkschall, G; Mohan, R; Beddar, S

    2007-11-01

    Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of simulation
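
    The last step described above, matching the delivery gate threshold to the duty cycle observed at simulation, has a compact fixed point: the duty-cycle quantile of the external displacement signal. A sketch on a synthetic trace (the 40%-60% phase gate and option (a)'s average-displacement threshold are taken from the text; the waveform and all numbers are invented):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Synthetic external respiratory trace: displacement (a.u.) vs time,
    # with cycle-to-cycle variability; phase 50% is near end-exhale here.
    t = np.arange(0.0, 120.0, 0.04)
    disp = 1.0 + np.cos(2 * np.pi * t / 4.0 + 0.3 * np.sin(0.2 * t))
    disp += rng.normal(0, 0.05, t.size)
    phase = (t % 4.0) / 4.0 * 100.0   # respiratory phase, 0-100%

    # Retrospective phase gate chosen at 4D CT simulation (40%-60%).
    in_phase = (phase >= 40.0) & (phase <= 60.0)

    # Simulation gate threshold: option (a), the average displacement
    # within the gating phase interval.
    sim_threshold = disp[in_phase].mean()

    # Duty cycle: fraction of monitor samples inside BOTH the phase gate
    # and the displacement threshold.
    duty = np.mean(in_phase & (disp <= sim_threshold))

    # Delivery gate threshold: the displacement level whose beam-on
    # fraction matches that duty cycle (the fixed point of the iterative
    # matching described in the text).
    delivery_threshold = np.quantile(disp, duty)
    print(f"duty cycle {duty:.1%}; simulation threshold {sim_threshold:.3f} a.u.; "
          f"delivery threshold {delivery_threshold:.3f} a.u.")
    ```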

  19. A two-stage spectrum sensing scheme based on energy detection and a novel multitaper method

    NASA Astrophysics Data System (ADS)

    Qi, Pei-Han; Li, Zan; Si, Jiang-Bo; Xiong, Tian-Yi

    2015-04-01

    Wideband spectrum sensing has drawn much attention in recent years since it provides more opportunities to secondary users; however, it requires a long sensing time and a complex mechanism at the sensing terminal. A two-stage wideband spectrum sensing scheme is considered to tackle this predicament, performing spectrum sensing with low time consumption and high performance. In this scheme, a novel multitaper spectrum sensing (MSS) method is proposed to mitigate the poor performance of energy detection (ED) in the low signal-to-noise ratio (SNR) region. The closed-form expression of the decision threshold is derived based on the Neyman-Pearson criterion, and the probability of detection in the Rayleigh fading channel is analyzed. An optimization problem is formulated to maximize the probability of detection of the proposed two-stage scheme, and the average sensing time of the two-stage scheme is analyzed. Numerical results validate the efficiency of MSS and show that the two-stage spectrum sensing scheme achieves higher performance in the low SNR region and lower time cost in the high SNR region than the single-stage scheme. Project supported by the National Natural Science Foundation of China (Grant No. 61301179), the China Postdoctoral Science Foundation (Grant No. 2014M550479), and the Doctorial Programs Foundation of the Ministry of Education, China (Grant No. 20110203110011).
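
    For the ED stage, the Neyman-Pearson threshold has a convenient closed form under the usual Gaussian approximation of the chi-square test statistic. The sketch below derives that threshold and checks it by Monte Carlo, also showing the weak detection at -10 dB SNR that motivates a more sensitive second stage (all signal parameters are illustrative, and the multitaper stage itself is not reproduced):

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(9)

    N = 1000        # samples per sensing slot
    P_FA = 0.01     # target false-alarm probability (Neyman-Pearson)
    sigma2 = 1.0    # noise variance, assumed known
    snr_db = -10.0  # received SNR of the primary-user signal

    # Energy detector threshold from the Gaussian approximation of the
    # chi-square statistic: T = sum(x^2) ~ N(N*s2, 2N*s2^2) under H0.
    thr = sigma2 * (N + np.sqrt(2 * N) * norm.isf(P_FA))

    # Monte Carlo check of P_fa and P_d for a Gaussian primary signal.
    snr = 10 ** (snr_db / 10)
    noise = rng.normal(0, np.sqrt(sigma2), (2000, N))
    signal = rng.normal(0, np.sqrt(snr * sigma2), (2000, N))
    t_h0 = (noise ** 2).sum(axis=1)
    t_h1 = ((noise + signal) ** 2).sum(axis=1)
    print(f"empirical P_fa = {(t_h0 > thr).mean():.4f} (target {P_FA})")
    print(f"empirical P_d  = {(t_h1 > thr).mean():.4f} at {snr_db} dB SNR")
    ```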

  20. A Two-Stage Method to Determine Optimal Product Sampling considering Dynamic Potential Market

    PubMed Central

    Hu, Zhineng; Lu, Wei; Han, Bing

    2015-01-01

    This paper develops an optimization model for the diffusion effects of free samples under dynamic changes in the potential market, based on the characteristics of an independent product, and presents a two-stage method to determine the sampling level. The impact analysis of the key factors on the sampling level shows that an increase in the external or internal coefficient has a negative influence on the sampling level; the changing rate of the potential market has no significant influence, whereas repeat purchasing has a positive one. Using logistic analysis and regression analysis, the global sensitivity analysis examines the interaction of all parameters, which yields a two-stage method to estimate the impact of the relevant parameters when the parameters are known only inaccurately and to construct a 95% confidence interval for the predicted sampling level. Finally, the paper provides the operational steps to improve the accuracy of the parameter estimation and an innovative way to estimate the sampling level. PMID:25821847

  1. Ultrasound-based follow-up does not increase survival in early-stage melanoma patients: A comparative cohort study.

    PubMed

    Ribero, S; Podlipnik, S; Osella-Abate, S; Sportoletti-Baduel, E; Manubens, E; Barreiro, A; Caliendo, V; Chavez-Bourgeois, M; Carrera, C; Cassoni, P; Malvehy, J; Fierro, M T; Puig, S

    2017-11-01

    Different protocols have been used to follow up melanoma patients in stage I-II. However, there is no consensus on the complementary tests that should be requested or the appropriate intervals between visits. Our aim is to compare an ultrasound-based follow-up with a clinical follow-up. Analysis of two prospectively collected cohorts of melanoma patients in stage IB-IIA from two tertiary referral centres in Barcelona (clinical-based follow-up [C-FU]) and Turin (ultrasound-based follow-up [US-FU]). Kaplan-Meier curves were used to evaluate distant metastases-free survival (DMFS), disease-free interval (DFI), nodal metastases-free survival (NMFS) and melanoma-specific survival (MSS). A total of 1149 patients in the American Joint Committee on Cancer stage IB and IIA were included in this study, of which 554 subjects (48%) were enrolled for a C-FU, and 595 patients (52%) received a protocolised US-FU. The median age was 53.8 years (interquartile range [IQR] 41.5-65.2) with a median follow-up time of 4.14 years (IQR 1.2-7.6). During follow-up, 69 patients (12.5%) in C-FU and 72 patients (12.1%) in US-FU developed disease progression. Median time to relapse for the first metastatic site was 2.11 years (IQR 1.14-4.04) for skin metastases, 1.32 (IQR 0.57-3.29) for lymph node metastases and 2.84 (IQR 1.32-4.60) for distant metastases. The pattern of progression and the total proportion of metastases were not significantly different (P = .44) in the two centres. No difference in DFI, DMFS, NMFS and MSS was found between the two cohorts. Ultrasound-based follow-up does not increase the survival of melanoma patients in stage IB-IIA. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Early stages of the recovery stroke in myosin II studied by molecular dynamics simulations

    PubMed Central

    Baumketner, Andrij; Nesmelov, Yuri

    2011-01-01

    The recovery stroke is a key step in the functional cycle of the muscle motor protein myosin, during which the pre-recovery conformation of the protein is changed into the active post-recovery conformation, ready to exercise force. We study the microscopic details of this transition using molecular dynamics simulations of atomistic models in implicit and explicit solvent. In more than 2 μs of aggregate simulation time, we uncover evidence that the recovery stroke is a two-step process, with two stages separated by a time delay. In our simulations we directly observe the first stage, in which the switch II loop closes in the presence of adenosine triphosphate at the nucleotide binding site. The resulting configuration of the nucleotide binding site is identical to that detected experimentally, and the distribution of inter-residue distances measured in the force-generating region of myosin is in good agreement with the experimental data. The second stage of the recovery stroke structural transition, rotation of the converter domain, was not observed in our simulations; apparently it occurs on a longer time scale. We suggest that the two parts of the recovery stroke need to be studied using separate computational models. PMID:21922589

  3. CFD modeling of two-stage ignition in a rapid compression machine: Assessment of zero-dimensional approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mittal, Gaurav; Raju, Mandhapati P.; Sung, Chih-Jen

    2010-07-15

    In modeling rapid compression machine (RCM) experiments, a zero-dimensional approach is commonly used along with an associated heat loss model. The adequacy of this approach has not been validated for hydrocarbon fuels. Multi-dimensional effects inside an RCM due to the boundary layer, roll-up vortex, non-uniform heat release, and piston crevice could cause deviations from the zero-dimensional assumption, particularly for hydrocarbons exhibiting two-stage ignition and strong thermokinetic interactions. The objective of this investigation is to assess the adequacy of the zero-dimensional approach in modeling RCM experiments under conditions of two-stage ignition and negative temperature coefficient (NTC) response. Computational fluid dynamics simulations are conducted for n-heptane ignition in an RCM, and the validity of the zero-dimensional approach is assessed through comparisons over the entire NTC region. Results show that the zero-dimensional model based on the approach of 'adiabatic volume expansion' performs very well in predicting the first-stage ignition delays, although quantitative discrepancy in the prediction of the total ignition delays and the pressure rise of the first-stage ignition is noted even when the roll-up vortex is suppressed and a well-defined homogeneous core is retained within the RCM. Furthermore, the discrepancy is pressure dependent and decreases as the compressed pressure is increased. Also, as the ignition response becomes single-stage at higher compressed temperatures, the discrepancy from the zero-dimensional simulations is reduced. Despite some quantitative discrepancy, the zero-dimensional modeling approach is deemed satisfactory from the viewpoint of ignition delay simulation. (author)

  4. Impact of variable river water stage on the simulation of groundwater-river interactions over the Upper Rhine Graben hydrosystem

    NASA Astrophysics Data System (ADS)

    Habets, F.; Vergnes, J.

    2013-12-01

    The Upper Rhine alluvial aquifer is an important transboundary water resource that is particularly vulnerable to pollution from the rivers due to anthropogenic activities. A realistic simulation of groundwater-river exchanges is therefore of crucial importance for effective management of water resources, and hence is the main topic of the NAPROM project financed by the French Ministry of Ecology. Characterization of these fluxes in terms of quantity and spatio-temporal variability depends on the choice made to represent the river water stage in the model. Recently, a coupled surface-subsurface model has been applied to the whole aquifer basin, with the river stage initially held constant over the major part of the basin for the computation of groundwater-river interactions. The present study introduces a variable river water stage to better simulate these interactions and quantifies the impact of this process on the simulated hydrological variables. The general modeling strategy is based on the Eau-Dyssée modeling platform, which couples existing specialized models to address water resources and quality in regional-scale river basins. In this study, Eau-Dyssée includes the RAPID river routing model and the SAM hydrogeological model. The input data consist of runoff and infiltration from a simulation of the ISBA land surface scheme covering the 1986-2003 period. The QtoZ module calculates the river stage from simulated river discharges, which is then used to calculate the exchanges between aquifer units and the river. Two approaches are compared: the first uses rating curves derived from observed river discharges and river stages; the second is based on Manning's formula, with Manning's parameters defined from geomorphological parametrizations and topographic data based on a digital elevation model (DEM). First results show relatively good agreement between observed and simulated river water height.
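
    The Manning approach computes stage from discharge by inverting Q = (1/n) A R^(2/3) sqrt(S) for the water depth. A sketch of that inversion for a rectangular channel, solved by bisection (the channel width, roughness n, and slope are placeholder values, not Rhine basin parameters):

    ```python
    import numpy as np

    def stage_from_discharge(Q, width=50.0, n_manning=0.035, slope=1e-4):
        """Solve Manning's equation Q = (1/n) A R^(2/3) S^(1/2) for the
        water depth h in a rectangular channel, by bisection."""
        def manning_Q(h):
            area = width * h                 # flow cross-section
            radius = area / (width + 2 * h)  # hydraulic radius
            return area * radius ** (2.0 / 3.0) * np.sqrt(slope) / n_manning

        lo, hi = 1e-6, 50.0
        for _ in range(60):                  # bisection on depth
            mid = 0.5 * (lo + hi)
            if manning_Q(mid) < Q:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    for q in (10.0, 100.0, 1000.0):
        print(f"Q = {q:7.1f} m^3/s  ->  stage ~ {stage_from_discharge(q):.2f} m")
    ```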

  5. An adaptive two-stage dose-response design method for establishing proof of concept.

    PubMed

    Franchetti, Yoko; Anderson, Stewart J; Sampson, Allan R

    2013-01-01

    We propose an adaptive two-stage dose-response design where a prespecified adaptation rule is used to add and/or drop treatment arms between the stages. We extend the multiple comparison procedures-modeling (MCP-Mod) approach into a two-stage design. In each stage, we use the same set of candidate dose-response models and test for a dose-response relationship or proof of concept (PoC) via model-associated statistics. The stage-wise test results are then combined to establish "global" PoC using a conditional error function. Our simulation studies showed good and more robust power in our design method compared to conventional and fixed designs.
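
    Stage-wise evidence in such designs is usually merged with a prespecified combination rule. The sketch below uses the weighted inverse-normal combination, a standard adaptive two-stage device that is closely related to, but not identical with, the conditional error function approach named in the abstract; the weights and p-values are illustrative:

    ```python
    import numpy as np
    from scipy.stats import norm

    def combined_poc_test(p1, p2, w1=0.5, alpha=0.025):
        """Combine stage-wise PoC p-values with the weighted inverse-normal
        rule; w1 + w2 = 1 so the combined statistic is N(0,1) under H0."""
        w2 = 1.0 - w1
        z = np.sqrt(w1) * norm.isf(p1) + np.sqrt(w2) * norm.isf(p2)
        p_combined = norm.sf(z)
        return p_combined, p_combined <= alpha

    # Stage 1 is suggestive but not conclusive on its own; stage 2 (run with
    # arms added/dropped in between) adds independent evidence.
    p, reject = combined_poc_test(p1=0.04, p2=0.03)
    print(f"combined p = {p:.4f}, global PoC established: {reject}")
    ```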

  6. Neutron coincidence counting based on time interval analysis with one- and two-dimensional Rossi-alpha distributions: an application for passive neutron waste assay

    NASA Astrophysics Data System (ADS)

    Bruggeman, M.; Baeten, P.; De Boeck, W.; Carchon, R.

    1996-02-01

    Neutron coincidence counting is commonly used for the non-destructive assay of plutonium-bearing waste or for safeguards verification measurements. A major drawback of conventional coincidence counting is that a valid calibration is needed to convert a neutron coincidence count rate to a 240Pu equivalent mass (240Pu eq). In waste assay, calibrations are made for representative waste matrices and source distributions; the actual waste, however, may have quite different matrices and source distributions, which often biases the assay result. This paper presents a new neutron-multiplicity-sensitive coincidence counting technique that includes an auto-calibration of the neutron detection efficiency. The coincidence counting principle is based on the recording of one- and two-dimensional Rossi-alpha distributions triggered by pulse pairs and by pulse triplets, respectively. Rossi-alpha distributions allow an easy discrimination between real and accidental coincidences and are intended to be measured by a PC-based fast time interval analyser. The Rossi-alpha distributions can be easily expressed in terms of a limited number of factorial moments of the neutron multiplicity distributions. The presented technique allows an unbiased measurement of the 240Pu eq mass. The presented theory, referred to as Time Interval Analysis (TIA), is complementary to the Time Correlation Analysis (TCA) theories developed in the past, but is much simpler from the theoretical point of view and allows a straightforward calculation of deadtime corrections and error propagation. Analytical expressions are derived for the Rossi-alpha distributions as a function of the factorial moments of the efficiency-dependent multiplicity distributions. The validity of the proposed theory is demonstrated and verified via Monte Carlo simulations of pulse trains and the subsequent analysis of the simulated data.
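
    The one-dimensional Rossi-alpha distribution is a histogram of delays from each trigger pulse to the pulses that follow it within an inspection window: correlated (real) pairs produce an exponential die-away on top of the flat accidental floor. A self-contained sketch on a synthetic pulse train (the rates, die-away time, and window are arbitrary, and real systems use a fast time-interval analyser rather than Python):

    ```python
    import numpy as np

    rng = np.random.default_rng(17)

    # Synthetic pulse train: Poisson background plus correlated pairs
    # (a crude stand-in for fission chains; illustrative only).
    T_TOTAL = 10.0   # seconds of acquisition
    bg = rng.uniform(0, T_TOTAL, 20000)                  # accidental pulses
    parents = rng.uniform(0, T_TOTAL, 3000)
    children = parents + rng.exponential(50e-6, 3000)    # correlated partners
    pulses = np.sort(np.concatenate([bg, parents, children]))

    # One-dimensional Rossi-alpha: histogram of delays from every trigger
    # pulse to all subsequent pulses inside a fixed inspection window.
    WINDOW = 512e-6  # seconds
    EDGES = np.linspace(0, WINDOW, 65)
    hist = np.zeros(len(EDGES) - 1)
    for i, t0 in enumerate(pulses):
        j = np.searchsorted(pulses, t0 + WINDOW)
        hist += np.histogram(pulses[i + 1:j] - t0, bins=EDGES)[0]

    # Real coincidences decay away with the die-away time; accidentals give
    # the flat floor, which can be read off the tail bins.
    floor = hist[-8:].mean()
    print(f"accidental floor ~ {floor:.0f} counts/bin; "
          f"excess in first bin ~ {hist[0] - floor:.0f} (real coincidences)")
    ```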

  7. Assessing the Accuracy of Classwide Direct Observation Methods: Two Analyses Using Simulated and Naturalistic Data

    ERIC Educational Resources Information Center

    Dart, Evan H.; Radley, Keith C.; Briesch, Amy M.; Furlow, Christopher M.; Cavell, Hannah J.; Briesch, Amy M.

    2016-01-01

    Two studies investigated the accuracy of eight different interval-based group observation methods that are commonly used to assess the effects of classwide interventions. In Study 1, a Microsoft Visual Basic program was created to simulate a large set of observational data. Binary data were randomly generated at the student level to represent…

  8. Investigation on a thermal-coupled two-stage Stirling-type pulse tube cryocooler

    NASA Astrophysics Data System (ADS)

    Yang, Luwei

    2008-11-01

    Multi-stage Stirling-type pulse tube cryocoolers operating at high frequency (30-60 Hz) have been an important research direction in recent years. A two-stage Stirling-type pulse tube cryocooler with thermally coupled stages was designed and built two years ago, and some results have been published. To study the effect of the first-stage precooling temperature, the related performance characteristics are investigated experimentally. At high input power, when the precooling temperature is lower than 110 K, its effect on the second-stage temperature is quite small. The precooling temperature also has an evident effect on the pulse tube temperature distribution; to the authors' knowledge, this is the first time this phenomenon has been noticed. The mean working pressure is also investigated, and a lowest temperature of 12.8 K has been attained with 500 W input power and 1.22 MPa average pressure, which is the lowest reported temperature for high-frequency two-stage pulse tube cryocoolers. Simulation reproduces the typical features observed in the experiments.

  9. Determination of prospective displacement-based gate threshold for respiratory-gated radiation delivery from retrospective phase-based gate threshold selected at 4D CT simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vedam, S.; Archambault, L.; Starkschall, G.

    2007-11-15

    Four-dimensional (4D) computed tomography (CT) imaging has found increasing importance in the localization of tumor and surrounding normal structures throughout the respiratory cycle. Based on such tumor motion information, it is possible to identify the appropriate phase interval for respiratory gated treatment planning and delivery. Such a gating phase interval is determined retrospectively based on tumor motion from internal tumor displacement. However, respiratory-gated treatment is delivered prospectively based on motion determined predominantly from an external monitor. Therefore, the simulation gate threshold determined from the retrospective phase interval selected for gating at 4D CT simulation may not correspond to the delivery gate threshold that is determined from the prospective external monitor displacement at treatment delivery. The purpose of the present work is to establish a relationship between the thresholds for respiratory gating determined at CT simulation and treatment delivery, respectively. One hundred fifty external respiratory motion traces, from 90 patients, with and without audio-visual biofeedback, are analyzed. Two respiratory phase intervals, 40%-60% and 30%-70%, are chosen for respiratory gating from the 4D CT-derived tumor motion trajectory. From residual tumor displacements within each such gating phase interval, a simulation gate threshold is defined based on (a) the average and (b) the maximum respiratory displacement within the phase interval. The duty cycle for prospective gated delivery is estimated from the proportion of external monitor displacement data points within both the selected phase interval and the simulation gate threshold. The delivery gate threshold is then determined iteratively to match the above determined duty cycle. The magnitude of the difference between such gate thresholds determined at simulation and treatment delivery is quantified in each case. Phantom motion tests yielded coincidence of

  10. Two-Stage Fracturing Wastewater Management in Shale Gas Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaodong; Sun, Alexander Y.; Duncan, Ian J.

    Here, management of shale gas wastewater treatment, disposal, and reuse has become a significant environmental challenge, driven by an ongoing boom in development of U.S. shale gas reservoirs. Systems-analysis based decision support is helpful for effective management of wastewater, and provision of cost-effective decision alternatives from a whole-system perspective. Uncertainties are inherent in many modeling parameters, affecting the generated decisions. In order to effectively deal with the recourse issue in decision making, in this work a two-stage stochastic fracturing wastewater management model, named TSWM, is developed to provide decision support for wastewater management planning in shale plays. Using the TSWM model, probabilistic and nonprobabilistic uncertainties are effectively handled. The TSWM model provides flexibility in generating shale gas wastewater management strategies, in which the first-stage decision predefined by decision makers before uncertainties are unfolded is corrected in the second stage to achieve the whole-system's optimality. Application of the TSWM model to a comprehensive synthetic example demonstrates its practical applicability and feasibility. Optimal results are generated for allowable wastewater quantities, excess wastewater, and capacity expansions of hazardous wastewater treatment plants to achieve the minimized total system cost. The obtained interval solutions encompass both optimistic and conservative decisions. Trade-offs between economic and environmental objectives are made depending on decision makers' knowledge and judgment, as well as site-specific information. In conclusion, the proposed model is helpful in forming informed decisions for wastewater management associated with shale gas development.

  11. Two-Stage Fracturing Wastewater Management in Shale Gas Development

    DOE PAGES

    Zhang, Xiaodong; Sun, Alexander Y.; Duncan, Ian J.; ...

    2017-01-19

    Here, management of shale gas wastewater treatment, disposal, and reuse has become a significant environmental challenge, driven by an ongoing boom in development of U.S. shale gas reservoirs. Systems-analysis based decision support is helpful for effective management of wastewater, and provision of cost-effective decision alternatives from a whole-system perspective. Uncertainties are inherent in many modeling parameters, affecting the generated decisions. In order to effectively deal with the recourse issue in decision making, in this work a two-stage stochastic fracturing wastewater management model, named TSWM, is developed to provide decision support for wastewater management planning in shale plays. Using the TSWM model, probabilistic and nonprobabilistic uncertainties are effectively handled. The TSWM model provides flexibility in generating shale gas wastewater management strategies, in which the first-stage decision predefined by decision makers before uncertainties are unfolded is corrected in the second stage to achieve the whole-system's optimality. Application of the TSWM model to a comprehensive synthetic example demonstrates its practical applicability and feasibility. Optimal results are generated for allowable wastewater quantities, excess wastewater, and capacity expansions of hazardous wastewater treatment plants to achieve the minimized total system cost. The obtained interval solutions encompass both optimistic and conservative decisions. Trade-offs between economic and environmental objectives are made depending on decision makers' knowledge and judgment, as well as site-specific information. In conclusion, the proposed model is helpful in forming informed decisions for wastewater management associated with shale gas development.

  12. Optimization of Boiling Water Reactor Loading Pattern Using Two-Stage Genetic Algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Yoko; Aiyoshi, Eitaro

    2002-10-15

    A new two-stage optimization method based on genetic algorithms (GAs) using an if-then heuristic rule was developed to generate optimized boiling water reactor (BWR) loading patterns (LPs). In the first stage, the LP is optimized using an improved GA operator. In the second stage, an exposure-dependent control rod pattern (CRP) is sought using GA with an if-then heuristic rule. The procedure of the improved GA is based on deterministic operators that consist of crossover, mutation, and selection. The handling of the encoding technique and constraint conditions by that GA reflects the peculiar characteristics of the BWR. In addition, strategies such as elitism and self-reproduction are effectively used in order to improve the search speed. The LP evaluations were performed with a three-dimensional diffusion code that coupled neutronic and thermal-hydraulic models. Strong axial heterogeneities and constraints dependent on three dimensions have always necessitated the use of three-dimensional core simulators for BWRs, so that optimization of computational efficiency is required. The proposed algorithm is demonstrated by successfully generating LPs for an actual BWR plant in two phases. One phase is only LP optimization applying the Haling technique. The other phase is an LP optimization that considers the CRP during reactor operation. In test calculations, candidates that shuffled fresh and burned fuel assemblies within a reasonable computation time were obtained.
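
    The generic crossover/mutation/selection loop with elitism that such two-stage GA optimizers build on can be sketched as follows; the fitness function here is a stand-in, not the three-dimensional core-simulator evaluation described above.

    ```python
    # Skeleton of a GA with one-point crossover, bit-flip mutation, and
    # elitist selection; the fitness function is purely illustrative.
    import random

    def fitness(pattern):          # stand-in: prefer alternating layouts
        return sum(1 for i in range(len(pattern) - 1) if pattern[i] != pattern[i + 1])

    def evolve(pop_size=20, genes=12, generations=50, p_mut=0.05):
        pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            elite = pop[: pop_size // 4]         # elitism: keep the best quarter
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = random.sample(elite, 2)
                cut = random.randrange(1, genes)  # one-point crossover
                child = a[:cut] + b[cut:]
                child = [1 - g if random.random() < p_mut else g for g in child]
                children.append(child)
            pop = elite + children
        return max(pop, key=fitness)

    best = evolve()
    print(best, fitness(best))
    ```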

  13. Interval-based reconstruction for uncertainty quantification in PET

    NASA Astrophysics Data System (ADS)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval-based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum likelihood expectation maximization (ML-EM) algorithm to intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
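
    For reference, the point-valued ML-EM update that NIBEM extends to interval projections looks like this; the system matrix and data are tiny random stand-ins, not a real PET geometry.

    ```python
    # Point-valued ML-EM iteration: x <- x * A^T(y / Ax) / A^T(1).
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((30, 10))            # forward projector (detector x voxel)
    x_true = rng.random(10) * 5.0
    y = rng.poisson(A @ x_true)         # noisy sinogram counts

    x = np.ones(10)                     # flat initial activity estimate
    sens = A.T @ np.ones(len(y))        # sensitivity image A^T 1
    for _ in range(100):
        ratio = y / np.maximum(A @ x, 1e-12)   # measured / estimated projections
        x *= (A.T @ ratio) / sens              # multiplicative EM update
    print(np.round(x, 2))
    ```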

  14. Optics of two-stage photovoltaic concentrators with dielectric second stages.

    PubMed

    Ning, X; O'Gallagher, J; Winston, R

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  15. Optics of two-stage photovoltaic concentrators with dielectric second stages

    NASA Astrophysics Data System (ADS)

    Ning, Xiaohui; O'Gallagher, Joseph; Winston, Roland

    1987-04-01

    Two-stage photovoltaic concentrators with Fresnel lenses as primaries and dielectric totally internally reflecting nonimaging concentrators as secondaries are discussed. The general design principles of such two-stage systems are given. Their optical properties are studied and analyzed in detail using computer ray trace procedures. It is found that the two-stage concentrator offers not only a higher concentration or increased acceptance angle, but also a more uniform flux distribution on the photovoltaic cell than the point focusing Fresnel lens alone. Experimental measurements with a two-stage prototype module are presented and compared to the analytical predictions.

  16. Acceptability of Flight Deck-Based Interval Management Crew Procedures

    NASA Technical Reports Server (NTRS)

    Murdock, Jennifer L.; Wilson, Sara R.; Hubbs, Clay E.; Smail, James W.

    2013-01-01

    The Interval Management for Near-term Operations Validation of Acceptability (IM-NOVA) experiment was conducted at the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC) in support of the NASA Next Generation Air Transportation System (NextGen) Airspace Systems Program's Air Traffic Management Technology Demonstration - 1 (ATD-1). ATD-1 is intended to showcase an integrated set of technologies that provide an efficient arrival solution for managing aircraft using NextGen surveillance, navigation, procedures, and automation for both airborne and ground-based systems. The goal of the IM-NOVA experiment was to assess whether procedures outlined by the ATD-1 Concept of Operations, when used with a minimum set of Flight deck-based Interval Management (FIM) equipment and a prototype crew interface, were acceptable to and feasible for use by flight crews in a voice communications environment. To investigate an integrated arrival solution using ground-based air traffic control tools and aircraft automatic dependent surveillance broadcast (ADS-B) tools, the LaRC FIM system and the Traffic Management Advisor with Terminal Metering and Controller Managed Spacing tools developed at the NASA Ames Research Center (ARC) were integrated in LaRC's Air Traffic Operations Laboratory. Data were collected from 10 crews of current, qualified 757/767 pilots asked to fly a high-fidelity, fixed-base simulator during scenarios conducted within an airspace environment modeled on the Dallas-Fort Worth (DFW) Terminal Radar Approach Control area. The aircraft simulator was equipped with the Airborne Spacing for Terminal Area Routes algorithm and a FIM crew interface consisting of electronic flight bags and ADS-B guidance displays. Researchers used "pseudo-pilot" stations to control 24 simulated aircraft that provided multiple air traffic flows into DFW, and recently retired DFW air traffic controllers served as confederate Center, Feeder, Final, and Tower

  17. Single-stage-to-orbit versus two-stage-to-orbit: A cost perspective

    NASA Astrophysics Data System (ADS)

    Hamaker, Joseph W.

    1996-03-01

    This paper considers the possible life-cycle costs of single-stage-to-orbit (SSTO) and two-stage-to-orbit (TSTO) reusable launch vehicles (RLV's). The analysis parametrically addresses the issue such that the preferred economic choice comes down to the relative complexity of the TSTO compared to the SSTO. The analysis defines the boundary complexity conditions at which the two configurations have equal life-cycle costs, and finally, makes a case for the economic preference of SSTO over TSTO.

  18. A two-stage path planning approach for multiple car-like robots based on PH curves and a modified harmony search algorithm

    NASA Astrophysics Data System (ADS)

    Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun

    2017-11-01

    In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage objective optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and a Modified Harmony Search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed, in which the first stage minimizes the total length of all paths and the second stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrates the effectiveness of the proposed approach.
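
    A bare-bones harmony search loop of the kind modified in the article can be sketched as follows; the objective is an illustrative stand-in for the PH-curve path-length and completion-time costs.

    ```python
    # Basic harmony search: memory consideration (hmcr), pitch adjustment
    # (par, bw), and replacement of the worst harmony. Objective is a toy.
    import random

    def objective(v):                   # stand-in for the path cost
        return sum(x * x for x in v)

    def harmony_search(dim=4, hms=10, hmcr=0.9, par=0.3, bw=0.2,
                       iters=2000, lo=-5.0, hi=5.0):
        mem = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
        for _ in range(iters):
            new = []
            for d in range(dim):
                if random.random() < hmcr:        # pick from harmony memory
                    x = random.choice(mem)[d]
                    if random.random() < par:     # pitch adjustment
                        x += random.uniform(-bw, bw)
                else:                             # random re-initialization
                    x = random.uniform(lo, hi)
                new.append(min(hi, max(lo, x)))
            worst = max(range(hms), key=lambda i: objective(mem[i]))
            if objective(new) < objective(mem[worst]):
                mem[worst] = new                  # replace the worst harmony
        return min(mem, key=objective)

    print([round(x, 3) for x in harmony_search()])
    ```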

  19. A Two-Stage Probabilistic Approach to Manage Personal Worklist in Workflow Management Systems

    NASA Astrophysics Data System (ADS)

    Han, Rui; Liu, Yingbo; Wen, Lijie; Wang, Jianmin

    The application of workflow scheduling to managing an individual actor's personal worklist is one area that can bring great improvement to business processes. However, current deterministic work cannot adapt to the dynamics and uncertainties in the management of personal worklists. To address this issue, this paper proposes a two-stage probabilistic approach that aims to assist actors in flexibly managing their personal worklists. Specifically, the approach analyzes each activity instance's continuous probability of satisfying its deadline at the first stage. Based on this stochastic analysis result, at the second stage an innovative scheduling strategy is proposed to minimize the overall deadline violation cost for an actor's personal worklist. Simultaneously, the strategy recommends to the actor a feasible worklist of activity instances which meet the required bottom line of successful execution. The effectiveness of our approach is evaluated in a real-world workflow management system and with large-scale simulation experiments.

  20. Time between the first and second operations for staged total knee arthroplasties when the interval is determined by the patient.

    PubMed

    Ishii, Yoshinori; Noguchi, Hideo; Takeda, Mitsuhiro; Sato, Junko; Toyabe, Shin-Ichi

    2014-01-01

    The purpose of this study was to evaluate the interval between the first and second operations for staged total knee arthroplasties (TKAs) in patients with bilateral knee osteoarthritis. Depending on satisfactory preoperative health status, the patients determined the timing of the second operation. We also analysed correlations between the interval and patient characteristics. Eighty-six patients with bilateral knee osteoarthritis were analysed. The mean follow-up time from the first TKA was 96 months. The side of the first TKA was chosen by the patients. The timing of the second TKA was determined by the patients, depending on their perceived ability to tolerate the additional pain and limitations to activities of daily living. The median interval between the first and second operations was 12.5 months, with a range of 2 to 113 months. In 43 (50%) patients, the interval was <12 months. There was no difference in the interval between females and males (p=0.861), and no correlation between the interval and body mass index or age. There was weak correlation between the year of the first TKA and the interval (R=-0.251, p=0.020), with the interval getting significantly shorter as the years progressed (p=0.032). The median interval between the first and second operations in patients who underwent staged TKAs for bilateral knee osteoarthritis was about 1 year. The results of the current study may help patients and physicians to plan effective treatment strategies for staged TKAs. Level II. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study.

    PubMed

    Kaplan, David; Chen, Jianshen

    2012-07-01

    A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to elaborate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.
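
    As a point of reference, the frequentist inverse-probability-weighting estimate that the two-step Bayesian approach generalizes can be computed as below; the data are simulated and the true treatment effect is set to 2 so the output can be checked.

    ```python
    # Propensity-score weighting baseline: fit e(x) = P(Z=1|X) by logistic
    # regression, then form the Hajek-weighted difference in outcomes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 2000
    x = rng.normal(size=(n, 2))                       # confounders
    p_treat = 1 / (1 + np.exp(-(0.8 * x[:, 0] - 0.5 * x[:, 1])))
    z = rng.binomial(1, p_treat)                      # treatment assignment
    y = 2.0 * z + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)  # true effect = 2

    e = LogisticRegression().fit(x, z).predict_proba(x)[:, 1]   # propensity scores
    w = z / e + (1 - z) / (1 - e)                     # inverse-probability weights
    ate = (np.sum(w * z * y) / np.sum(w * z)
           - np.sum(w * (1 - z) * y) / np.sum(w * (1 - z)))
    print(f"IPW estimate of treatment effect: {ate:.2f}")
    ```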

  2. Multi-stage FE simulation of hot ring rolling

    NASA Astrophysics Data System (ADS)

    Wang, C.; Geijselaers, H. J. M.; van den Boogaard, A. H.

    2013-05-01

    As a unique and important member of the metal forming family, ring rolling provides a cost effective process route to manufacture seamless rings. Applications of ring rolling cover a wide range of products in aerospace, automotive and civil engineering industries [1]. Above the recrystallization temperature of the material, hot ring rolling begins with the upsetting of the billet cut from raw stock. Next a punch pierces the hot upset billet to form a hole through the billet. This billet, referred to as preform, is then rolled by the ring rolling mill. For an accurate simulation of hot ring rolling, it is crucial to include the deformations, stresses and strains from the upsetting and piercing process as initial conditions for the rolling stage. In this work, multi-stage FE simulations of hot ring rolling process were performed by mapping the local deformation state of the workpiece from one step to the next one. The simulations of upsetting and piercing stages were carried out by 2D axisymmetric models using adaptive remeshing and element erosion. The workpiece for the ring rolling stage was subsequently obtained after performing a 2D to 3D mapping. The commercial FE package LS-DYNA was used for the study and user defined subroutines were implemented to complete the control algorithm. The simulation results were analyzed and also compared with those from the single-stage FE model of hot ring rolling.

  3. Comparative assessment of single-stage and two-stage anaerobic digestion for the treatment of thin stillage.

    PubMed

    Nasr, Noha; Elbeshbishy, Elsayed; Hafez, Hisham; Nakhla, George; El Naggar, M Hesham

    2012-05-01

    A comparative evaluation of single-stage and two-stage anaerobic digestion processes for biomethane and biohydrogen production using thin stillage was performed to assess the impact of separating the acidogenic and methanogenic stages on anaerobic digestion. Thin stillage, the main by-product from ethanol production, was characterized by high total chemical oxygen demand (TCOD) of 122 g/L and total volatile fatty acids (TVFAs) of 12 g/L. A maximum methane yield of 0.33 L CH(4)/gCOD(added) (STP) was achieved in the two-stage process while a single-stage process achieved a maximum yield of only 0.26 L CH(4)/gCOD(added) (STP). The separation of acidification stage increased the TVFAs to TCOD ratio from 10% in the raw thin stillage to 54% due to the conversion of carbohydrates into hydrogen and VFAs. Comparison of the two processes based on energy outcome revealed that an increase of 18.5% in the total energy yield was achieved using two-stage anaerobic digestion. Copyright © 2012 Elsevier Ltd. All rights reserved.

  4. Evaluating the utility of hexapod species for calculating a confidence interval about a succession based postmortem interval estimate.

    PubMed

    Perez, Anne E; Haskell, Neal H; Wells, Jeffrey D

    2014-08-01

    Carrion insect succession patterns have long been used to estimate the postmortem interval (PMI) during a death investigation. However, no published carrion succession study included sufficient replication to calculate a confidence interval about a PMI estimate based on occurrence data. We exposed 53 pig carcasses (16±2.5 kg), near the likely minimum needed for such statistical analysis, at a site in north-central Indiana, USA, over three consecutive summer seasons. Insects and Collembola were sampled daily from each carcass for a total of 14 days, by which time each was skeletonized. The criteria for judging a life stage of a given species to be potentially useful for succession-based PMI estimation were (1) nonreoccurrence (observed during a single period of presence on a corpse), and (2) found in a sufficiently large proportion of carcasses to support a PMI confidence interval. For this data set that proportion threshold is 45/53. Of the 266 species collected and identified, none was nonreoccurring, in that each showed at least a gap of one day on a single carcass. If the definition of nonreoccurrence is relaxed to include such a single one-day gap, the larval forms of Necrophila americana, Fannia scalaris, Cochliomyia macellaria, Phormia regina, and Lucilia illustris satisfied these two criteria. Adults of Creophilus maxillosus, Necrobia ruficollis, and Necrodes surinamensis were common and showed only a few, single-day gaps in occurrence. C. maxillosus, P. regina, and L. illustris displayed exceptional forensic utility in that they were observed on every carcass. Although these observations were made at a single site during one season of the year, the species we found to be useful have large geographic ranges. We suggest that future carrion insect succession research focus only on a limited set of species with high potential forensic utility so as to reduce sample effort per carcass and thereby enable increased experimental replication. Copyright © 2014 Elsevier Ireland

  5. A Two-stage Improvement Method for Robot Based 3D Surface Scanning

    NASA Astrophysics Data System (ADS)

    He, F. B.; Liang, Y. D.; Wang, R. F.; Lin, Y. S.

    2018-03-01

    Because the surface of an unknown object is difficult to measure or recognize precisely, 3D laser scanning technology is commonly used for surface reconstruction. In general, slower scanning yields better quality, while faster scanning yields worse quality. This paper therefore presents a new two-stage scanning method that pursues high-quality surface scanning at higher speed. The first stage is a rough scan that acquires general point-cloud data of the object's surface; the second stage is a targeted scan that repairs missing regions, which are identified by the chord-length discretization method. A system comprising a robotic manipulator and a handheld scanner was also developed to implement the two-stage scanning method, with the scan paths planned according to minimum enclosing ball and regional coverage theories.

  6. A two-stage broadcast message propagation model in social networks

    NASA Astrophysics Data System (ADS)

    Wang, Dan; Cheng, Shun-Jun

    2016-11-01

    Message propagation in social networks is becoming a popular topic in complex networks. One type of message in social networks is the broadcast message: a message with a unique destination that is unknown to the publisher, such as 'lost and found'. Its propagation always has two stages, a feature that rumor propagation models and epidemic propagation models have difficulty describing accurately. In this paper, an improved two-stage susceptible-infected-removed model is proposed, introducing the concepts of a first forwarding probability and a second forwarding probability. Another part of our work quantifies how the chance of successful message transmission at each level depends on several factors: the topology of the network, the receiving probability, the first-stage forwarding probability, the second-stage forwarding probability, and the length of the shortest path between the publisher and the relevant destination. The proposed model has been simulated on real networks and the results demonstrate the model's effectiveness.
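
    A Monte Carlo sketch of such a two-stage forwarding process is given below: the publisher's contacts forward with a first-stage probability and later holders with a second-stage probability. The network and parameter values are illustrative, not the paper's calibrated model.

    ```python
    # Estimate the chance a broadcast message reaches a given destination
    # when forwarding probability drops after the first hop.
    import random
    import networkx as nx

    def reach_probability(g, source, target, p1=0.8, p2=0.4, trials=2000):
        hits = 0
        for _ in range(trials):
            informed = {source}
            frontier = [(source, True)]        # (node, first-stage flag)
            while frontier:
                node, first = frontier.pop()
                p = p1 if first else p2
                for nb in g.neighbors(node):
                    if nb not in informed and random.random() < p:
                        informed.add(nb)
                        frontier.append((nb, False))
            hits += target in informed
        return hits / trials

    g = nx.erdos_renyi_graph(200, 0.03, seed=42)
    print(f"estimated delivery probability: {reach_probability(g, 0, 150):.2f}")
    ```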

  7. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste.

    PubMed

    Ariunbaatar, Javkhlan; Scotto Di Perta, Ester; Panico, Antonio; Frunzo, Luigi; Esposito, Giovanni; Lens, Piet N L; Pirozzi, Francesco

    2015-04-01

    This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation compared to one-stage due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also a small amount of biohydrogen was detected from the first stage of the two-stage reactor making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acids (VFA) accumulation. However, re-circulation also resulted in an ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used for this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Diagnosis Of Persistent Infection In Prosthetic Two-Stage Exchange: PCR analysis of Sonication fluid From Bone Cement Spacers.

    PubMed

    Mariaux, Sandrine; Tafin, Ulrika Furustrand; Borens, Olivier

    2017-01-01

    Introduction: When treating periprosthetic joint infections with a two-stage procedure, antibiotic-impregnated spacers are used in the interval between removal of the prosthesis and reimplantation. In our experience, cultures of sonicated spacers are most often negative. The objective of our study was to investigate whether PCR analysis would improve the detection of bacteria in the spacer sonication fluid. Methods: A prospective monocentric study was performed from September 2014 to January 2016. Inclusion criteria were a two-stage procedure for prosthetic infection and agreement of the patient to participate in the study. Besides tissue samples and sonication, broad-range bacterial PCRs, specific S. aureus PCRs, and Unyvero multiplex PCRs were performed on the sonicated spacer fluid. Results: 30 patients were identified (15 hip, 14 knee and 1 ankle replacements). At reimplantation, cultures of tissue samples and spacer sonication fluid were all negative. Broad-range PCRs were all negative. Specific S. aureus PCRs were positive in 5 cases. Two persistent infections and four cases of infection recurrence were observed, the recurrences involving bacteria different from the initial infection in three cases. Conclusion: The three different types of PCRs did not detect any bacteria in spacer sonication fluid that was culture-negative. In our study, PCR did not improve bacterial detection and did not help to predict whether the patient would present a persistent or recurrent infection. Prosthetic two-stage exchange with a short interval and an antibiotic-impregnated spacer is an efficient treatment to eradicate infection, as both culture- and molecular-based methods were unable to detect bacteria in spacer sonication fluid after reimplantation.

  9. Emergency department injury surveillance and aetiological research: bridging the gap with the two-stage case-control study design.

    PubMed

    Hagel, Brent E

    2011-04-01

    To provide an overview of the two-stage case-control study design and its potential application to ED injury surveillance data and to apply this approach to published ED data on the relation between brain injury and bicycle helmet use. Relevant background is presented on injury aetiology and case-control methodology with extension to the two-stage case-control design in the context of ED injury surveillance. The design is then applied to data from a published case-control study of the relation between brain injury and bicycle helmet use with motor vehicle involvement considered as a potential confounder. Taking into account the additional sampling at the second stage, the adjusted and corrected odds ratio and 95% confidence interval for the brain injury-helmet use relation is presented and compared with the estimate from the entire original dataset. Contexts where the two-stage case-control study design might be most appropriately applied to ED injury surveillance data are suggested. The adjusted odds ratio for the relation between brain injury and bicycle helmet use based on all data (n = 2833) from the original study was 0.34 (95% CI 0.25 to 0.46) compared with an estimate from a two-stage case-control design of 0.35 (95% CI 0.25 to 0.48) using only a fraction of the original subjects (n = 480). Application of the two-stage case-control study design to ED injury surveillance data has the potential to dramatically reduce study time and resource costs with acceptable losses in statistical efficiency.
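
    The basic quantity behind these estimates, an odds ratio with a Woolf-type 95% confidence interval from a 2x2 exposure table, can be computed as below; the counts are invented for illustration.

    ```python
    # Odds ratio and Woolf (log-scale normal approximation) 95% CI.
    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """a,b = exposed/unexposed cases; c,d = exposed/unexposed controls."""
        or_hat = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_hat) - z * se_log)
        hi = math.exp(math.log(or_hat) + z * se_log)
        return or_hat, lo, hi

    print("OR = %.2f (95%% CI %.2f to %.2f)" % odds_ratio_ci(60, 300, 150, 250))
    ```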

  10. One-stage vs. two-stage BAHA implantation in a pediatric population.

    PubMed

    Saliba, Issam; Froehlich, Patrick; Bouhabel, Sarah

    2012-12-01

    BAHA implantation surgery in a pediatric population is usually done in two-stage surgeries. This study aims to evaluate the safety and possible superiority of the one-stage over the two-stage BAHA implantation and which one would be the best standard of care for our pediatric patients. A retrospective chart review of 55 patients operated in our tertiary care institutions between 2005 and 2010 was conducted. The actual tendency in our institutions, applied at the time of the study, is to perform a one-stage surgery for all operated patients (pediatric and adult), except for patients undergoing translabyrinthine surgeries for cerebellopontine tumor excision. These patients indeed had a two-stage insertion. 26 patients underwent one-stage surgery (group I) while 29 patients had a two-stage (group II) BAHA insertion. A period of 4 months was allowed for osseointegration before BAHA processor fitting. As for the safety assessment of the one-stage surgery, we compared both groups regarding the incidence and severity (minor, moderate and major) of encountered complications, as well as the operating time and follow-up. The operating time of the two-stage surgery includes the time of the first and of the second stage. The mean age at surgery was 8.5 years old for the group I and 50 years old for the group II patients. There was no difference in the incidence of minor (p=0.12), moderate (p=0.41) nor severe (p=0.68) complications between groups I and II. Two cases of traumatic extrusion were noted in the group I. Furthermore, the one-stage BAHA implantation requests a significantly lower operating time (mean: 54 [32-100] min) than the two-stage surgery (mean: 79 [63-148] min) (p=0.012). All pediatric cases of BAHA insertion were performed in a one day surgery. The mean postoperative follow-up was 114 and 96 weeks for groups I and II respectively (p=0.058). One-stage BAHA insertion surgery in the pediatric population is a reliable, safe and efficient therapeutic option that

  11. Forecasting overhaul or replacement intervals based on estimated system failure intensity

    NASA Astrophysics Data System (ADS)

    Gannon, James M.

    1994-12-01

    System reliability can be expressed in terms of the pattern of failure events over time. Assuming a nonhomogeneous Poisson process and Weibull intensity function for complex repairable system failures, the degree of system deterioration can be approximated. Maximum likelihood estimators (MLE's) for the system Rate of Occurrence of Failure (ROCOF) function are presented. Evaluating the integral of the ROCOF over annual usage intervals yields the expected number of annual system failures. By associating a cost of failure with the expected number of failures, budget and program policy decisions can be made based on expected future maintenance costs. Monte Carlo simulation is used to estimate the range and the distribution of the net present value and internal rate of return of alternative cash flows based on the distributions of the cost inputs and confidence intervals of the MLE's.
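
    A minimal power-law (Crow-AMSAA) version of this procedure is sketched below: fit the Weibull intensity by maximum likelihood, then integrate the ROCOF over the next usage interval to forecast failures. The failure times are illustrative, not from the paper.

    ```python
    # Power-law NHPP: lambda(t) = lam * beta * t**(beta - 1).
    # Time-truncated MLEs: beta = n / sum(ln(T / t_i)), lam = n / T**beta.
    import math

    times = [120, 340, 510, 640, 720, 810, 880, 930]   # cumulative failure times
    T = 1000.0                                         # total observation time

    n = len(times)
    beta = n / sum(math.log(T / t) for t in times)     # shape MLE
    lam = n / T ** beta                                # scale MLE

    def expected_failures(a, b):
        return lam * (b ** beta - a ** beta)           # integral of the ROCOF

    print(f"beta = {beta:.2f} (>1 indicates deterioration)")
    print(f"expected failures next interval: {expected_failures(1000, 2000):.1f}")
    ```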

  12. Area Determination of Diabetic Foot Ulcer Images Using a Cascaded Two-Stage SVM-Based Classification.

    PubMed

    Wang, Lei; Pedersen, Peder C; Agu, Emmanuel; Strong, Diane M; Tulu, Bengisu

    2017-09-01

    The standard chronic wound assessment method based on visual examination is potentially inaccurate and also represents a significant clinical workload. Hence, computer-based systems providing quantitative wound assessment may be valuable for accurately monitoring wound healing status, with the wound area the best suited for automated analysis. Here, we present a novel approach, using support vector machines (SVM) to determine the wound boundaries on foot ulcer images captured with an image capture box, which provides controlled lighting and range. After superpixel segmentation, a cascaded two-stage classifier operates as follows: in the first stage, a set of k binary SVM classifiers are trained and applied to different subsets of the entire training images dataset, and incorrectly classified instances are collected. In the second stage, another binary SVM classifier is trained on the incorrectly classified set. We extracted various color and texture descriptors from superpixels that are used as input for each stage in the classifier training. Specifically, color and bag-of-word representations of local dense scale invariant feature transformation features are descriptors for ruling out irrelevant regions, and color and wavelet-based features are descriptors for distinguishing healthy tissue from wound regions. Finally, the detected wound boundary is refined by applying the conditional random field method. We have implemented the wound classification on a Nexus 5 smartphone platform, except for training which was done offline. Results are compared with other classifiers and show that our approach provides high global performance rates (average sensitivity = 73.3%, specificity = 94.6%) and is sufficiently efficient for a smartphone-based image analysis.
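
    A simplified rendering of the cascade idea, several first-stage SVMs whose collective mistakes train a second-stage SVM, is sketched below, with synthetic features standing in for the superpixel color and texture descriptors.

    ```python
    # Cascaded two-stage SVM sketch: stage-1 SVMs vote; the instances they
    # misclassify form the training set for a second-stage SVM.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1200, 8))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.7, size=1200) > 0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    # Stage 1: k SVMs trained on disjoint subsets of the training data.
    k = 3
    stage1 = [SVC(kernel="rbf").fit(X_tr[idx], y_tr[idx])
              for idx in np.array_split(rng.permutation(len(X_tr)), k)]
    votes = np.mean([m.predict(X_tr) for m in stage1], axis=0) >= 0.5
    wrong = votes.astype(int) != y_tr               # misclassified instances

    # Stage 2: another SVM trained only on the hard, misclassified set.
    stage2 = SVC(kernel="rbf").fit(X_tr[wrong], y_tr[wrong])

    pred = np.mean([m.predict(X_te) for m in stage1], axis=0) >= 0.5
    print(f"stage-1 ensemble accuracy: {np.mean(pred.astype(int) == y_te):.2f}")
    print(f"stage-2 fit on {wrong.sum()} hard instances")
    ```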

  13. Pulmonary 3 T MRI with ultrashort TEs: influence of ultrashort echo time interval on pulmonary functional and clinical stage assessments of smokers.

    PubMed

    Ohno, Yoshiharu; Nishio, Mizuho; Koyama, Hisanobu; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Seki, Shinichiro; Obara, Makoto; van Cauteren, Marc; Takahashi, Masaya; Sugimura, Kazuro

    2014-04-01

    To assess the influence of ultrashort TE (UTE) intervals on pulmonary magnetic resonance imaging (MRI) with UTEs (UTE-MRI) for pulmonary functional loss assessment and clinical stage classification of smokers. A total of 60 consecutive smokers (43 men and 17 women; mean age 70 years) with and without COPD underwent thin-section multidetector row computed tomography (MDCT), UTE-MRI, and pulmonary functional measurements. For each smoker, UTE-MRI was performed with three different UTE intervals (UTE-MRI A: 0.5 msec, UTE-MRI B: 1.0 msec, UTE-MRI C: 1.5 msec). Using the GOLD guidelines, the subjects were classified as "smokers without COPD," "mild COPD," "moderate COPD," and "severe or very severe COPD." The mean T2* value from each UTE-MRI acquisition and the CT-based functional lung volume (FLV) were then correlated with pulmonary function test results. Finally, Fisher's PLSD test was used to evaluate differences in each index among the four clinical stages. Each index correlated significantly with pulmonary function test results (P < 0.05). CT-based FLV and mean T2* values obtained from UTE-MRI A and B showed significant differences among all groups except between the "smokers without COPD" and "mild COPD" groups (P < 0.05). UTE-MRI has potential for the management of smokers, and the UTE interval is suggested to be an important parameter in this setting. Copyright © 2013 Wiley Periodicals, Inc.

  14. Two-phase simulation-based location-allocation optimization of biomass storage distribution

    USDA-ARS?s Scientific Manuscript database

    This study presents a two-phase simulation-based framework for finding the optimal locations of biomass storage facilities, a critical link in the biomass supply chain, which can help to address biorefinery concerns (e.g. steady supply, uniform feedstock properties, stable feedstock costs,...

  15. Target tracking system based on preliminary and precise two-stage compound cameras

    NASA Astrophysics Data System (ADS)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early detection of targets and high-precision target tracking are two important performance indicators that must be balanced in a practical target search and tracking system. This paper proposes a target tracking system that compounds a preliminary and a precise stage. The system uses a large field of view to search for the target; after the target is found and confirmed, it switches to a small field of view for tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking, and two sets of PID parameters are added to reduce tracking error. This preliminary-precise two-stage compound approach extends the search range while improving tracking accuracy, and the method has practical value.
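
    The control core of such a system is a discrete PID loop with gain sets switched between the wide (search) and narrow (track) fields of view; the sketch below uses an invented first-order plant and illustrative gains.

    ```python
    # Discrete PID with two gain sets: coarse gains while the pointing
    # error is large, fine gains once the target is near boresight.
    class PID:
        def __init__(self, kp, ki, kd, dt=0.01):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral, self.prev_err = 0.0, 0.0

        def step(self, err):
            self.integral += err * self.dt
            deriv = (err - self.prev_err) / self.dt
            self.prev_err = err
            return self.kp * err + self.ki * self.integral + self.kd * deriv

    coarse = PID(kp=2.0, ki=0.1, kd=0.05)   # wide field of view: fast slew
    fine = PID(kp=8.0, ki=0.5, kd=0.2)      # narrow field of view: precision

    angle, target = 0.0, 1.0                # simple first-order plant
    for _ in range(500):
        err = target - angle
        ctrl = (coarse if abs(err) > 0.1 else fine).step(err)
        angle += ctrl * 0.01                # integrate actuator command
    print(f"residual tracking error: {target - angle:.4f}")
    ```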

  16. Single microparticle launching method using two-stage light-gas gun for simulating hypervelocity impacts of micrometeoroids and space debris.

    PubMed

    Kawai, Nobuaki; Tsurui, Kenji; Hasegawa, Sunao; Sato, Eiichi

    2010-11-01

    A single microparticle launching method is described to simulate the hypervelocity impacts of micrometeoroids and microdebris on space structures at the Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency. A microparticle placed in a sabot with slits is accelerated using a rifled two-stage light-gas gun. The centrifugal force provided by the rifling in the launch tube separates the sabot. The sabot-separation distance and the impact-point deviation are strongly affected by the combination of the sabot diameter and the bore diameter, and by the projectile diameter. Using this method, spherical projectiles of 1.0-0.1 mm diameter were launched at up to 7 km/s.

  17. Single microparticle launching method using two-stage light-gas gun for simulating hypervelocity impacts of micrometeoroids and space debris

    NASA Astrophysics Data System (ADS)

    Kawai, Nobuaki; Tsurui, Kenji; Hasegawa, Sunao; Sato, Eiichi

    2010-11-01

    A single microparticle launching method is described to simulate the hypervelocity impacts of micrometeoroids and microdebris on space structures at the Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency. A microparticle placed in a sabot with slits is accelerated using a rifled two-stage light-gas gun. The centrifugal force provided by the rifling in the launch tube separates the sabot. The sabot-separation distance and the impact-point deviation are strongly affected by the combination of the sabot diameter and the bore diameter, and by the projectile diameter. Using this method, spherical projectiles of 1.0-0.1 mm diameter were launched at up to 7 km/s.

  18. High frequency two-stage pulse tube cryocooler with base temperature below 20 K

    NASA Astrophysics Data System (ADS)

    Yang, L. W.; Thummes, G.

    2005-02-01

    High frequency (30-50 Hz) multi-stage pulse tube coolers that are capable of reaching temperatures close to 20 K or even lower are a subject of recent research and development activities. This paper reports on the design and test of a two-stage pulse tube cooler which is driven by a linear compressor with nominal input power of 200 W at an operating frequency of 30-45 Hz. A parallel configuration of the two pulse tubes is used with the warm ends of the pulse tubes located at ambient temperature. For both stages, the regenerator matrix consists of a stack of stainless steel screen. At an operating frequency of 35 Hz and with the first stage at 73 K a lowest stationary temperature of 19.6 K has been achieved at the second stage. The effects of input power, frequency, average pressure, and cold head orientation on the cooling performance are also reported. An even lower no-load temperature can be expected from the use of lead or other regenerator materials of high heat capacity in the second stage.

  19. Sleep stage classification by body movement index and respiratory interval indices using multiple radar sensors.

    PubMed

    Kagawa, Masayuki; Sasaki, Noriyuki; Suzumura, Kazuki; Matsui, Takemi

    2015-01-01

    Disturbed sleep has become more common in recent years. To increase the quality of sleep, sleep observation has gained interest as an attempt to resolve possible problems. In this paper, we evaluate a non-restrictive and non-contact method for classifying real-time sleep stages and report on its potential applications. The proposed system measures body movements and respiratory signals of a sleeping person using multiple 24-GHz microwave radars placed beneath the mattress. We determined a body-movement index to identify wake and sleep periods, and fluctuation indices of respiratory intervals to identify sleep stages. For identifying wake and sleep periods, the rate of agreement between the body-movement index and the reference result using the R&K method was 83.5 ± 6.3%. One-minute standard deviations, one of the fluctuation indices of respiratory intervals, had a high degree of contribution and showed a significant difference across the three sleep stages (REM, LIGHT, and DEEP; p < 0.001). Although the contribution of the 5-min fractal dimension, another fluctuation index, was not as high as expected, its difference between REM and DEEP sleep was significant (p < 0.05). We applied a linear discriminant function to classify wake and sleep periods and to estimate the three sleep stages. The accuracy was 79.3% for classification and 71.9% for estimation. This is a novel system for measuring body movements and body-surface movements induced by respiration, and for measuring high-sensitivity pulse waves, using multiple radar signals. This method simplifies measurement of sleep stages and may be employed at nursing care facilities or by the general public to increase sleep quality.

  20. Two stage sorption of sulfur compounds

    DOEpatents

    Moore, William E.

    1992-01-01

    A two stage method for reducing the sulfur content of exhaust gases is disclosed. Alkali- or alkaline-earth-based sorbent is totally or partially vaporized and introduced into a sulfur-containing gas stream. The activated sorbent can be introduced in the reaction zone or the exhaust gases of a combustor or a gasifier. High efficiencies of sulfur removal can be achieved.

  1. Study on general design of dual-DMD based infrared two-band scene simulation system

    NASA Astrophysics Data System (ADS)

    Pan, Yue; Qiao, Yang; Xu, Xi-ping

    2017-02-01

    Mid-wave infrared (MWIR) and long-wave infrared (LWIR) two-band scene simulation systems are testing equipment for infrared two-band imaging seekers. Such a system must not only cover the working wavebands but also reproduce infrared radiation characteristics corresponding to the real scene. Previous single-digital-micromirror-device (DMD) based infrared scene simulation systems do not account for the large difference between target and background radiation and cannot modulate the two-band beams separately. Consequently, a single-DMD based infrared scene simulation system cannot accurately render the thermal scene model built by the host computer, which limits its practicality. To solve this problem, we design a dual-DMD based, dual-channel, common-aperture, compact infrared two-band scene simulation system. The operating principle of the system is introduced in detail, and the energy transfer process of the hardware-in-the-loop simulation experiment is analyzed. We also derive an equation for the signal-to-noise ratio of the infrared detector in the seeker, which guides the overall system design. The general design scheme of the system is given, including the creation of the infrared scene model, overall control, optical-mechanical structure design, and image registration. By analyzing and comparing previous designs, we discuss the arrangement of the optical engine framework in the system. Following the working principle and overall design, we summarize the key techniques in the system.

  2. Stage-based interventions for smoking cessation.

    PubMed

    Cahill, Kate; Lancaster, Tim; Green, Natasha

    2010-11-10

    definition of abstinence, and preferred biochemically validated rates where reported. Where appropriate we performed meta-analysis to estimate a pooled risk ratio, using the Mantel-Haenszel fixed-effect model. We found 41 trials (>33,000 participants) which met our inclusion criteria. Four trials, which directly compared the same intervention in stage-based and standard versions, found no clear advantage for the staging component. Stage-based versus standard self-help materials (two trials) gave a relative risk (RR) of 0.93 (95% CI 0.62 to 1.39). Stage-based versus standard counselling (two trials) gave a relative risk of 1.00 (95% CI 0.82 to 1.22). Six trials of stage-based self-help systems versus any standard self-help support demonstrated a benefit for the staged groups, with an RR of 1.27 (95% CI 1.01 to 1.59). Twelve trials comparing stage-based self help with 'usual care' or assessment-only gave an RR of 1.32 (95% CI 1.17 to 1.48). Thirteen trials of stage-based individual counselling versus any control condition gave an RR of 1.24 (95% CI 1.08 to 1.42). These findings are consistent with the proven effectiveness of these interventions in their non-stage-based versions. The evidence was unclear for telephone counselling, interactive computer programmes or training of doctors or lay supporters. This uncertainty may be due in part to smaller numbers of trials. Based on four trials using direct comparisons, stage-based self-help interventions (expert systems and/or tailored materials) and individual counselling were neither more nor less effective than their non-stage-based equivalents. Thirty-one trials of stage-based self help or counselling interventions versus any control condition demonstrated levels of effectiveness which were comparable with their non-stage-based counterparts. Providing these forms of practical support to those trying to quit appears to be more productive than not intervening. However, the additional value of adapting the intervention to the

  3. Assessing Interval Estimation Methods for Hill Model ...

    EPA Pesticide Factsheets

    The Hill model of concentration-response is ubiquitous in toxicology, perhaps because its parameters directly relate to biologically significant metrics of toxicity such as efficacy and potency. Point estimates of these parameters obtained through least squares regression or maximum likelihood are commonly used in high-throughput risk assessment, but such estimates typically fail to include reliable information concerning confidence in (or precision of) the estimates. To address this issue, we examined methods for assessing uncertainty in Hill model parameter estimates derived from concentration-response data. In particular, using a sample of ToxCast concentration-response data sets, we applied four methods for obtaining interval estimates that are based on asymptotic theory, bootstrapping (two varieties), and Bayesian parameter estimation, and then compared the results. These interval estimation methods generally did not agree, so we devised a simulation study to assess their relative performance. We generated simulated data by constructing four statistical error models capable of producing concentration-response data sets comparable to those observed in ToxCast. We then applied the four interval estimation methods to the simulated data and compared the actual coverage of the interval estimates to the nominal coverage (e.g., 95%) in order to quantify performance of each of the methods in a variety of cases (i.e., different values of the true Hill model paramet

  4. A two-stage design for multiple testing in large-scale association studies.

    PubMed

    Wen, Shu-Hui; Tzeng, Jung-Ying; Kao, Jau-Tsuen; Hsiao, Chuhsing Kate

    2006-01-01

    Modern association studies often involve a large number of markers and hence may encounter the problem of testing multiple hypotheses. Traditional procedures are usually over-conservative and with low power to detect mild genetic effects. From the design perspective, we propose a two-stage selection procedure to address this concern. Our main principle is to reduce the total number of tests by removing clearly unassociated markers in the first-stage test. Next, conditional on the findings of the first stage, which uses a less stringent nominal level, a more conservative test is conducted in the second stage using the augmented data and the data from the first stage. Previous studies have suggested using independent samples to avoid inflated errors. However, we found that, after accounting for the dependence between these two samples, the true discovery rate increases substantially. In addition, the cost of genotyping can be greatly reduced via this approach. Results from a study of hypertriglyceridemia and simulations suggest the two-stage method has a higher overall true positive rate (TPR) with a controlled overall false positive rate (FPR) when compared with single-stage approaches. We also report the analytical form of its overall FPR, which may be useful in guiding study design to achieve a high TPR while retaining the desired FPR.
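
    The screening logic can be sketched as below: a liberal first-stage test drops clearly unassociated markers, and the survivors are re-tested on the augmented data at a stricter level. Thresholds, sample sizes, and effect sizes are illustrative, and this sketch ignores the between-stage dependence correction the authors account for.

    ```python
    # Two-stage marker screening: liberal stage-1 filter, then a stricter
    # (Bonferroni over survivors) test on the combined stage-1 + stage-2 data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n_markers, n1, n2 = 500, 100, 200
    effects = np.zeros(n_markers)
    effects[:20] = 0.5                        # 20 truly associated markers

    def pvals(n):
        """Two-sided one-sample t-test p-values for each marker."""
        data = rng.normal(loc=effects, size=(n, n_markers))
        t = data.mean(0) / (data.std(0, ddof=1) / np.sqrt(n))
        return 2 * stats.t.sf(np.abs(t), df=n - 1), data

    p1, d1 = pvals(n1)
    keep = np.where(p1 < 0.1)[0]              # liberal first-stage screen

    _, d2 = pvals(n2)
    combined = np.concatenate([d1, d2])       # augment stage-1 data with stage-2
    t = combined.mean(0) / (combined.std(0, ddof=1) / np.sqrt(n1 + n2))
    p2 = 2 * stats.t.sf(np.abs(t), df=n1 + n2 - 1)
    hits = keep[p2[keep] < 0.05 / len(keep)]  # Bonferroni over survivors only
    print(f"{len(keep)} markers passed stage 1, {len(hits)} declared associated")
    ```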

  5. Numerical simulation model of hyperacute/acute stage white matter infarction.

    PubMed

    Sakai, Koji; Yamada, Kei; Oouchi, Hiroyuki; Nishimura, Tsunehiko

    2008-01-01

    Although previous studies have revealed the mechanisms of changes in diffusivity (apparent diffusion coefficient [ADC]) in acute brain infarction, changes in diffusion anisotropy (fractional anisotropy [FA]) in white matter have not been examined. We hypothesized that membrane permeability as well as axonal swelling play important roles, and we therefore constructed a simulation model using random walks to replicate the diffusion of water molecules. We implemented a numerical diffusion simulation model of normal and infarcted human brains in C++. We constructed this 2-pool model using simple tubes aligned in a single direction. Water diffusion was simulated by random walks. Axon diameters and membrane permeability were then altered in step-wise fashion. To estimate the effects of axonal swelling, axon diameters were changed from 6 to 10 µm. Membrane permeability was altered from 0% to 40%. Finally, both elements were combined to explain increasing FA in the hyperacute stage of white matter infarction. The simulation demonstrated that simple water shift into the intracellular space reduces ADC and increases FA, but not to the extent expected from actual human cases (ADC approximately 50%; FA approximately +20%). Similarly, membrane permeability alone was insufficient to explain this phenomenon. However, a combination of both factors successfully replicated changes in diffusivity indices. Both axonal swelling and reduced membrane permeability appear important in explaining changes in ADC and FA based on eigenvalues in hyperacute-stage white matter infarction.
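
    A one-dimensional analogue of such a random-walk model is sketched below: walkers cross evenly spaced membranes only with a given probability, and the apparent diffusivity is read off the mean squared displacement. The geometry and rates are illustrative, far simpler than the paper's 2-pool tube model.

    ```python
    # 1-D restricted diffusion: membranes at multiples of `spacing` are
    # crossed with probability p_cross; blocked walkers stay put.
    import numpy as np

    rng = np.random.default_rng(3)
    n_walkers, n_steps, step = 5000, 2000, 0.1   # step length in microns
    spacing, p_cross = 8.0, 0.2                  # membrane spacing, permeability

    x0 = spacing / 2                             # start mid-compartment
    x = np.full(n_walkers, x0)
    for _ in range(n_steps):
        dx = rng.choice([-step, step], size=n_walkers)
        crossing = np.floor((x + dx) / spacing) != np.floor(x / spacing)
        blocked = crossing & (rng.random(n_walkers) > p_cross)
        x += np.where(blocked, 0.0, dx)          # blocked walkers stay put

    adc = np.mean((x - x0) ** 2) / (2 * n_steps)  # <x^2> = 2 D t, unit time steps
    free = step ** 2 / 2                          # unrestricted diffusivity
    print(f"restricted/free diffusivity ratio: {adc / free:.2f}")
    ```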

  6. Adjusting for treatment switching in randomised controlled trials - A simulation study and a simplified two-stage method.

    PubMed

    Latimer, Nicholas R; Abrams, K R; Lambert, P C; Crowther, M J; Wailoo, A J; Morden, J P; Akehurst, R L; Campbell, M J

    2017-04-01

    Estimates of the overall survival benefit of new cancer treatments are often confounded by treatment switching in randomised controlled trials (RCTs) - whereby patients randomised to the control group are permitted to switch onto the experimental treatment upon disease progression. In health technology assessment, estimates of the unconfounded overall survival benefit associated with the new treatment are needed. Several switching adjustment methods have been advocated in the literature, some of which have been used in health technology assessment. However, it is unclear which methods are likely to produce least bias in realistic RCT-based scenarios. We simulated RCTs in which switching, associated with patient prognosis, was permitted. Treatment effect size and time dependency, switching proportions and disease severity were varied across scenarios. We assessed the performance of alternative adjustment methods based upon bias, coverage and mean squared error, related to the estimation of true restricted mean survival in the absence of switching in the control group. We found that when the treatment effect was not time-dependent, rank preserving structural failure time models (RPSFTM) and iterative parameter estimation methods produced low levels of bias. However, in the presence of a time-dependent treatment effect, these methods produced higher levels of bias, similar to those produced by an inverse probability of censoring weights method. The inverse probability of censoring weights and structural nested models produced high levels of bias when switching proportions exceeded 85%. A simplified two-stage Weibull method produced low bias across all scenarios and, provided the treatment switching mechanism is suitable, represents an appropriate adjustment method.
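
    The counterfactual construction used by RPSFTM-type methods splits each switcher's observed time into time off and on treatment and rescales the latter by a time ratio; a minimal sketch with invented numbers:

    ```python
    # RPSFTM-style counterfactual: U = T_off + exp(psi) * T_on, where
    # exp(psi) shrinks time on treatment back to "untreated" time;
    # psi < 0 corresponds to a beneficial treatment.
    import math

    def counterfactual_time(t_off, t_on, psi):
        return t_off + math.exp(psi) * t_on

    # A control patient who switched at 10 months and died at 25 months:
    t_off, t_on = 10.0, 15.0
    for psi in (-0.5, -0.25, 0.0):
        print(f"psi={psi:+.2f}: U = {counterfactual_time(t_off, t_on, psi):.1f} months")
    ```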

  7. Two-Stage Series-Resonant Inverter

    NASA Technical Reports Server (NTRS)

    Stuart, Thomas A.

    1994-01-01

    Two-stage inverter includes variable-frequency, voltage-regulating first stage and fixed-frequency second stage. Lightweight circuit provides regulated power and is invulnerable to output short circuits. Does not require large capacitor across ac bus, like parallel resonant designs. Particularly suitable for use in ac-power-distribution system of aircraft.

  8. Does cemented or cementless single-stage exchange arthroplasty of chronic periprosthetic hip infections provide similar infection rates to a two-stage? A systematic review.

    PubMed

    George, D A; Logoluso, N; Castellini, G; Gianola, S; Scarponi, S; Haddad, F S; Drago, L; Romano, C L

    2016-10-10

    The best surgical modality for treating chronic periprosthetic hip infections remains controversial, with a lack of randomised controlled studies. The aim of this systematic review is to compare the infection recurrence rate after a single-stage versus a two-stage exchange arthroplasty, and the rate of cemented versus cementless single-stage exchange arthroplasty for chronic periprosthetic hip infections. We searched for eligible studies published up to December 2015. Full texts or abstracts in English were reviewed. We included studies reporting the infection recurrence rate as the outcome of interest following single- or two-stage exchange arthroplasty, or both, with a minimum follow-up of 12 months. Two reviewers independently abstracted data and appraised quality. After study selection, 90 observational studies were included. The majority of studies focused on two-stage hip exchange arthroplasty (65%), 18% on single-stage exchange, and only 17% were comparative studies. There was no statistically significant difference between a single-stage versus a two-stage exchange in terms of recurrence of infection in controlled studies (pooled odds ratio of 1.37 [95% CI = 0.68-2.74, I² = 45.5%]). Similarly, the infection recurrence rate in cementless versus cemented single-stage hip exchanges failed to demonstrate a significant difference, due to the substantial heterogeneity among the studies. Despite the methodological limitations and the heterogeneity between single cohort studies, considering only the available controlled studies, no superiority was demonstrated between a single- and two-stage exchange at a minimum of 12 months follow-up. The overlapping confidence intervals for single-stage cementless and cemented hip exchanges showed no superiority of either technique.

  9. Finite element simulation of ultrasonic waves in corroded reinforced concrete for early-stage corrosion detection

    NASA Astrophysics Data System (ADS)

    Tang, Qixiang; Yu, Tzuyang

    2017-04-01

    In reinforced concrete (RC) structures, corrosion of steel rebar introduces internal stress at the interface between rebar and concrete, ultimately leading to debonding and separation between rebar and concrete. Effective early-stage detection of steel rebar corrosion can significantly reduce maintenance costs and enable early-stage repair. In this paper, ultrasonic detection of early-stage steel rebar corrosion inside concrete is numerically investigated using the finite element method (FEM). Commercial FEM software (ABAQUS) was used in all simulation cases. Steel rebar was simplified and modeled as a cylindrical structure. 1 MHz ultrasonic elastic waves were generated at the interface between rebar and concrete. Two-dimensional plane strain elements were adopted in all FE models. Formation of surface rust on rebar was modeled by changing material properties and expanding element size in order to simulate the rust interface between rebar and concrete and the presence of interfacial stress. Two types of surface rust (corroded regions) were considered. Time domain and frequency domain responses of displacement were studied. From our simulation results, two corrosion indicators, baseline (b) and center frequency (fc), were proposed for detecting and quantifying corrosion.

  10. Dominance-based ranking functions for interval-valued intuitionistic fuzzy sets.

    PubMed

    Chen, Liang-Hsuan; Tu, Chien-Cheng

    2014-08-01

    The ranking of interval-valued intuitionistic fuzzy sets (IvIFSs) is difficult since they include the interval values of membership and nonmembership. This paper proposes ranking functions for IvIFSs based on the dominance concept. The proposed ranking functions consider the degree to which an IvIFS dominates and is not dominated by other IvIFSs. Based on the bivariate framework and the dominance concept, the functions incorporate not only the boundary values of membership and nonmembership, but also the relative relations among IvIFSs in comparisons. The dominance-based ranking functions include bipolar evaluations with a parameter that allows the decision-maker to reflect his actual attitude in allocating the various kinds of dominance. The relationship for two IvIFSs that satisfy the dual couple is defined based on four proposed ranking functions. Importantly, the proposed ranking functions can achieve a full ranking for all IvIFSs. Two examples are used to demonstrate the applicability and distinctiveness of the proposed ranking functions.

  11. Effect of ammoniacal nitrogen on one-stage and two-stage anaerobic digestion of food waste

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ariunbaatar, Javkhlan, E-mail: jaka@unicas.it; UNESCO-IHE Institute for Water Education, Westvest 7, 2611 AX Delft; Scotto Di Perta, Ester

    Highlights: • Almost 100% of the biomethane potential of food waste was recovered during AD in a two-stage CSTR. • Recirculation of the liquid fraction of the digestate provided the necessary buffer in the AD reactors. • A higher OLR (0.9 gVS/L·d) led to higher accumulation of TAN, which caused more toxicity. • A two-stage reactor is more sensitive to elevated concentrations of ammonia. • The IC50 of TAN for the AD of food waste amounts to 3.8 g/L. - Abstract: This research compares the operation of one-stage and two-stage anaerobic continuously stirred tank reactor (CSTR) systems fed semi-continuously with food waste. The main purpose was to investigate the effects of ammoniacal nitrogen on the anaerobic digestion process. The two-stage system gave more reliable operation compared to one-stage due to: (i) a better pH self-adjusting capacity; (ii) a higher resistance to organic loading shocks; and (iii) a higher conversion rate of organic substrate to biomethane. Also a small amount of biohydrogen was detected from the first stage of the two-stage reactor making this system attractive for biohythane production. As the digestate contains ammoniacal nitrogen, re-circulating it provided the necessary alkalinity in the systems, thus preventing an eventual failure by volatile fatty acids (VFA) accumulation. However, re-circulation also resulted in an ammonium accumulation, yielding a lower biomethane production. Based on the batch experimental results the 50% inhibitory concentration of total ammoniacal nitrogen on the methanogenic activities was calculated as 3.8 g/L, corresponding to 146 mg/L free ammonia for the inoculum used for this research. The two-stage system was affected by the inhibition more than the one-stage system, as it requires less alkalinity and the physically separated methanogens are more sensitive to inhibitory factors, such as ammonium and propionic acid.

  12. On the appropriateness of applying chi-square distribution based confidence intervals to spectral estimates of helicopter flyover data

    NASA Technical Reports Server (NTRS)

    Rutledge, Charles K.

    1988-01-01

    The validity of applying chi-square based confidence intervals to far-field acoustic flyover spectral estimates was investigated. Simulated data, using a Kendall series, and experimental acoustic data from the NASA/McDonnell Douglas 500E acoustics test were analyzed. Statistical significance tests were performed to determine the equality of distributions of the simulated and experimental data relative to theoretical chi-square distributions. Bias and uncertainty errors associated with the spectral estimates were easily identified from the data sets. A model relating the uncertainty and bias errors to the estimates resulted, which aided in determining the appropriateness of chi-square distribution based confidence intervals. Such confidence intervals were appropriate for non-tonally associated frequencies of the experimental data but were inappropriate for tonally associated estimate distributions. The inappropriateness at the tonally associated frequencies was indicated by the presence of bias error and nonconformity of the distributions to the theoretical chi-square distribution. A technique for determining appropriate confidence intervals at the tonally associated frequencies was suggested.
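
    For reference, the textbook chi-square interval that such analyses rest on: if a spectral estimate \hat{G}(f) has \nu equivalent degrees of freedom, a 100(1-\alpha)% confidence interval for the true spectrum G(f) is

        \frac{\nu\,\hat{G}(f)}{\chi^2_{\nu,\,1-\alpha/2}} \;\le\; G(f) \;\le\; \frac{\nu\,\hat{G}(f)}{\chi^2_{\nu,\,\alpha/2}}

    where \chi^2_{\nu,p} denotes the p-quantile of the chi-square distribution with \nu degrees of freedom. This is standard background, not a formula quoted from the report; the report's point is that the underlying chi-square assumption fails at tonally associated frequencies.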

  13. European vegetation during Marine Oxygen Isotope Stage-3

    NASA Astrophysics Data System (ADS)

    Huntley, Brian; Alfano, Mary J. o.; Allen, Judy R. M.; Pollard, Dave; Tzedakis, Polychronis C.; de Beaulieu, Jacques-Louis; Grüger, Eberhard; Watts, Bill

    2003-03-01

    European vegetation during representative "warm" and "cold" intervals of stage-3 was inferred from pollen analytical data. The inferred vegetation differs in character and spatial pattern from that of both fully glacial and fully interglacial conditions and exhibits contrasts between warm and cold intervals, consistent with other evidence for stage-3 palaeoenvironmental fluctuations. European vegetation thus appears to have been an integral component of millennial environmental fluctuations during stage-3; vegetation responded to this scale of environmental change and through feedback mechanisms may have had effects upon the environment. The pollen-inferred vegetation was compared with vegetation simulated using the BIOME 3.5 vegetation model for climatic conditions simulated using a regional climate model (RegCM2) nested within a coupled global climate and vegetation model (GENESIS-BIOME). Despite some discrepancies in detail, both approaches capture the principal features of the present vegetation of Europe. The simulated vegetation for stage-3 differs markedly from that inferred from pollen analytical data, implying substantial discrepancy between the simulated climate and that actually prevailing. Sensitivity analyses indicate that the simulated climate is too warm and probably has too short a winter season. These discrepancies may reflect incorrect specification of sea surface temperature or sea-ice conditions and may be exacerbated by vegetation-climate feedback in the coupled global model.

  14. Optimal two-stage enrichment design correcting for biomarker misclassification.

    PubMed

    Zang, Yong; Guo, Beibei

    2018-01-01

    The enrichment design is an important clinical trial design to detect the treatment effect of the molecularly targeted agent (MTA) in personalized medicine. Under this design, patients are stratified into marker-positive and marker-negative subgroups based on their biomarker statuses and only the marker-positive patients are enrolled into the trial and randomized to receive either the MTA or a standard treatment. As the biomarker plays a key role in determining the enrollment of the trial, a misclassification of the biomarker can induce substantial bias, undermine the integrity of the trial, and seriously affect the treatment evaluation. In this paper, we propose a two-stage optimal enrichment design that utilizes the surrogate marker to correct for the biomarker misclassification. The proposed design is optimal in the sense that it maximizes the probability of correctly classifying each patient's biomarker status based on the surrogate marker information. In addition, after analytically deriving the bias caused by the biomarker misclassification, we develop a likelihood ratio test based on the EM algorithm to correct for such bias. We conduct comprehensive simulation studies to investigate the operating characteristics of the optimal design and the results confirm the desirable performance of the proposed design.
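
    The classification step can be illustrated with a plain Bayes computation: given the surrogate's sensitivity and specificity and the marker-positive prevalence, the posterior probability of true marker-positive status determines the most probable classification. The sketch below is a minimal Python illustration with assumed parameter names; the paper's optimal rule and its EM-based bias correction are more involved.

        def prob_marker_positive(surrogate_positive, prev, sens, spec):
            """Posterior P(true marker + | surrogate result) by Bayes' rule."""
            if surrogate_positive:
                num = sens * prev
                den = sens * prev + (1.0 - spec) * (1.0 - prev)
            else:
                num = (1.0 - sens) * prev
                den = (1.0 - sens) * prev + spec * (1.0 - prev)
            return num / den

        # Classify as marker-positive when the posterior exceeds 0.5.
        p = prob_marker_positive(True, prev=0.3, sens=0.9, spec=0.85)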

  15. Auditorium acoustics evaluation based on simulated impulse response

    NASA Astrophysics Data System (ADS)

    Wu, Shuoxian; Wang, Hongwei; Zhao, Yuezhe

    2004-05-01

    The impulse responses and other acoustical parameters of the Huangpu Teenager Palace in Guangzhou were measured. Acoustical simulation and auralization were also performed using the ODEON software. A comparison between the parameters obtained from computer simulation and from measurement is given. This case study shows that auralization based on computer simulation can be used to predict the acoustical quality of a hall at its design stage.

  16. TIME-INTERVAL MEASURING DEVICE

    DOEpatents

    Gross, J.E.

    1958-04-15

    An electronic device for measuring the time interval between two control pulses is presented. The device incorporates part of a previous approach to time measurement, in that pulses from a constant-frequency oscillator are counted during the interval between the control pulses. To reduce the possible counting error caused by the operation of the counter gating circuit at various points in the pulse cycle, the described device provides means for successively delaying the pulses by a fraction of the pulse period so that a final delay of one period is obtained, and means for counting the pulses before and after each stage of delay during the time interval, whereby a plurality of totals is obtained which may be averaged and multiplied by the pulse period to obtain an accurate time-interval measurement.

  17. Two stage launch vehicle

    NASA Technical Reports Server (NTRS)

    1987-01-01

    The Advanced Space Design project for 1986-87 was the design of a two-stage launch vehicle representing a second-generation space transportation system (STS) that will be needed to support the space station. The first stage is an unmanned winged booster which is fully reusable with a fly-back capability. It has jet engines so that it can fly back to the landing site, which adds safety as well as the flexibility to choose alternate landing sites. There are two different second stages. One is a manned advanced space shuttle called Space Shuttle II, which has a payload capability of delivering 40,000 pounds to the space station in low Earth orbit (LEO) and returning 40,000 pounds to Earth. Servicing the space station makes the ability to return a heavy payload to Earth as important as being able to launch one. The other second stage is an unmanned heavy-lift cargo vehicle able to deliver 150,000 pounds of payload to LEO. This vehicle will not return to Earth; however, its engines and electronics can be removed and returned to Earth in the Space Shuttle II. The rest of the vehicle can then be used on orbit for storage of raw materials, supplies, and space-manufactured items awaiting transport back to Earth.

  18. Use of real time gas production data for more accurate comparison of continuous single-stage and two-stage fermentation.

    PubMed

    Massanet-Nicolau, Jaime; Dinsdale, Richard; Guwy, Alan; Shipley, Gary

    2013-02-01

    Changes in fermenter gas composition within a given 24 h period can cause severe bias in calculations of biogas or energy yields based on just one or two measurements of gas composition per day, as is common in other studies of two-stage fermentation. To overcome this bias, real-time recording of gas composition and production was used to undertake a detailed and controlled comparison of single-stage and two-stage fermentation using a real-world substrate (wheat feed pellets). When a two-stage fermentation system was used, methane yields increased from 261 L kg(-1) VS using a 20 day HRT, single-stage fermentation, to 359 L kg(-1) VS using a two-stage fermentation with the same overall retention time--an increase of 37%. Additionally, a hydrogen yield of 7 L kg(-1) VS was obtained when two-stage fermentation was used. The two-stage system could also be operated at a shorter, 12 day HRT and still produce higher methane yields (306 L kg(-1) VS). Both two-stage fermentation systems evaluated exhibited methane yields in excess of that predicted by a biological methane potential (BMP) test performed using the same feedstock (260 L kg(-1) VS). Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Downstream processing of antibodies: single-stage versus multi-stage aqueous two-phase extraction.

    PubMed

    Rosa, P A J; Azevedo, A M; Ferreira, I F; Sommerfeld, S; Bäcker, W; Aires-Barros, M R

    2009-12-11

    Single-stage and multi-stage strategies have been evaluated and compared for the purification of human antibodies using liquid-liquid extraction in aqueous two-phase systems (ATPSs) composed of polyethylene glycol 3350 (PEG 3350), dextran, and triethylene glycol diglutaric acid (TEG-COOH). The performance of single-stage extraction systems was firstly investigated by studying the effect of pH, TEG-COOH concentration and volume ratio on the partitioning of the different components of a Chinese hamster ovary (CHO) cells supernatant. It was observed that lower pH values and high TEG-COOH concentrations favoured the selective extraction of human immunoglobulin G (IgG) to the PEG-rich phase. Higher recovery yields, purities and percentage of contaminants removal were always achieved in the presence of the ligand, TEG-COOH. The extraction of IgG could be enhanced using higher volume ratios, however with a significant decrease in both purity and percentage of contaminants removal. The best single-stage extraction conditions were achieved for an ATPS containing 1.3% (w/w) TEG-COOH with a volume ratio of 2.2, which allowed the recovery of 96% of IgG in the PEG-rich phase with a final IgG concentration of 0.21 mg/mL, a protein purity of 87% and a total purity of 43%. In order to enhance simultaneously both recovery yield and purity, a four stage cross-current operation was simulated and the corresponding liquid-liquid equilibrium (LLE) data determined. A predicted optimised scheme of a counter-current multi-stage aqueous two-phase extraction was hence described. IgG can be purified in the PEG-rich top phase with a final recovery yield of 95%, a final concentration of 1.04 mg/mL and a protein purity of 93%, if a PEG/dextran ATPS containing 1.3% (w/w) TEG-COOH, 5 stages and volume ratio of 0.4 are used. Moreover, according to the LLE data of all CHO cells supernatant components, it was possible to observe that most of the cells supernatant contaminants can be removed during this

  20. Two stage to orbit design

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A preliminary design of a two-stage to orbit vehicle was conducted with the requirements to carry a 10,000 pound payload into a 300 mile low-earth orbit using an airbreathing first stage, and to take off and land unassisted on a 15,000 foot runway. The goal of the design analysis was to produce the most efficient vehicle in size and weight which could accomplish the mission requirements. Initial parametric analysis indicated that the weight of the orbiter and the transonic performance of the system were the two parameters that had the largest impact on the design. The resulting system uses a turbofan ramjet powered first stage to propel a scramjet and rocket powered orbiter to the stage point of Mach 6 to 6.5 at an altitude of 90,000 ft.

  1. Two-stage solar concentrators based on parabolic troughs: asymmetric versus symmetric designs.

    PubMed

    Schmitz, Max; Cooper, Thomas; Ambrosetti, Gianluca; Steinfeld, Aldo

    2015-11-20

    While nonimaging concentrators can approach the thermodynamic limit of concentration, they generally suffer from poor compactness when designed for small acceptance angles, e.g., to capture direct solar irradiation. Symmetric two-stage systems utilizing an image-forming primary parabolic concentrator in tandem with a nonimaging secondary concentrator partially overcome this compactness problem, but their achievable concentration ratio is ultimately limited by the central obstruction caused by the secondary. Significant improvements can be realized by two-stage systems having asymmetric cross-sections, particularly for 2D line-focus trough designs. We therefore present a detailed analysis of two-stage line-focus asymmetric concentrators for flat receiver geometries and compare them to their symmetric counterparts. Exemplary designs are examined in terms of the key optical performance metrics, namely, geometric concentration ratio, acceptance angle, concentration-acceptance product, aspect ratio, active area fraction, and average number of reflections. Notably, we show that asymmetric designs can achieve significantly higher overall concentrations and are always more compact than symmetric systems designed for the same concentration ratio. Using this analysis as a basis, we develop novel asymmetric designs, including two-wing and nested configurations, which surpass the optical performance of two-mirror aplanats and are comparable with the best reported 2D simultaneous multiple surface designs for both hollow and dielectric-filled secondaries.
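
    For orientation, the two optical limits referred to above can be stated compactly (standard nonimaging-optics definitions, not reproduced from the paper): a 2D (line-focus) concentrator with half acceptance angle \theta_a obeys

        C_{2D} \;\le\; \frac{n}{\sin\theta_a}, \qquad \mathrm{CAP} \;=\; C \sin\theta_a \;\le\; n

    where n is the refractive index of the secondary (n = 1 for hollow designs), so the concentration-acceptance product (CAP) measures how close a given design comes to the thermodynamic limit.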

  2. A Compact Two-Stage 120 W GaN High Power Amplifier for SweepSAR Radar Systems

    NASA Technical Reports Server (NTRS)

    Thrivikraman, Tushar; Horst, Stephen; Price, Douglas; Hoffman, James; Veilleux, Louise

    2014-01-01

    This work presents the design and measured results of a fully integrated, switched-power, two-stage GaN HEMT high-power amplifier (HPA) achieving 60% power-added efficiency at over 120 W output power. This high-efficiency GaN HEMT HPA is an enabling technology for L-band SweepSAR interferometric instruments that require frequent repeat intervals and high-resolution imagery. The L-band HPA was designed using space-qualified, state-of-the-art GaN HEMT technology. The amplifier exhibits over 34 dB of power gain at 51 dBm of output power across an 80 MHz bandwidth. The HPA is divided into two stages, an 8 W driver stage and a 120 W output stage. The amplifier is designed for pulsed operation, with a high-speed DC drain switch that operates at the pulse-repetition interval and settles within 200 ns. In addition to the electrical design, a thermally optimized package was developed that allows for direct thermal radiation to maintain low junction temperatures in the GaN parts, maximizing long-term reliability. Lastly, real radar waveforms are characterized, and analysis of amplitude and phase stability demonstrates ultra-stable operation over temperature: integrated bias compensation circuitry holds variation to less than 0.2 dB in amplitude and 2 deg in phase over a 70 °C range.
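
    As a back-of-envelope consistency check on the reported figures (our arithmetic, not the paper's): 51 dBm corresponds to about 126 W, and 34 dB of gain puts the drive level near 17 dBm (50 mW). With power-added efficiency defined as

        \mathrm{PAE} \;=\; \frac{P_{out} - P_{in}}{P_{DC}}

    a PAE of 60% at P_out of roughly 126 W implies on the order of 210 W of DC drain power during the pulse, of which roughly 84 W must be handled as heat by the thermally optimized package.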

  3. Hyper-X Stage Separation: Simulation Development and Results

    NASA Technical Reports Server (NTRS)

    Reubush, David E.; Martin, John G.; Robinson, Jeffrey S.; Bose, David M.; Strovers, Brian K.

    2001-01-01

    This paper provides an overview of stage separation simulation development and results for NASA's Hyper-X program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an account of the development of the current 14 degree of freedom stage separation simulation tool (SepSim) and results from use of the tool in a Monte Carlo analysis to evaluate the risk of failure for the separation event. Results from use of the tool show that there is only a very small risk of failure in the separation event.

  4. Preharvest Interval Periods and their relation to fruit growth stages and pesticide formulations.

    PubMed

    Alister, Claudio; Araya, Manuel; Becerra, Kevin; Saavedra, Jorge; Kogan, Marcelo

    2017-04-15

    The aim of this study was to evaluate the effect of pesticide formulations and fruit growth stages on the Pre-harvest Interval Period (PHI). Results showed that pesticide formulations did not affect the initial deposit and dissipation rate. However, the fruit growth stage at the application time had a significant effect on the above-mentioned parameters. For each one-millimeter increase in fruit diameter, pesticide dissipation rates were reduced by 0.033 mg kg⁻¹ day⁻¹ (R² = 0.87; p < 0.001) for grapes and 0.014 mg kg⁻¹ day⁻¹ (R² = 0.85; p < 0.001) for apples. The relations between solar radiation, air humidity, and temperature and the pesticide dissipation rates depended on fruit type. PHI could change according to the application time, because of the initial amount of pesticide deposited on the fruit and the change in dissipation rates. Because Maximum Residue Levels are becoming more restrictive, it is increasingly important to consider fruit growth stage effects when performing dissipation studies to define the PHI. Copyright © 2016. Published by Elsevier Ltd.

  5. Empirical likelihood-based confidence intervals for the sensitivity of a continuous-scale diagnostic test at a fixed level of specificity.

    PubMed

    Gengsheng Qin; Davis, Angela E; Jing, Bing-Yi

    2011-06-01

    For a continuous-scale diagnostic test, it is often of interest to find the range of the sensitivity of the test at the cut-off that yields a desired specificity. In this article, we first define a profile empirical likelihood ratio for the sensitivity of a continuous-scale diagnostic test and show that its limiting distribution is a scaled chi-square distribution. We then propose two new empirical likelihood-based confidence intervals for the sensitivity of the test at a fixed level of specificity by using the scaled chi-square distribution. Simulation studies are conducted to compare the finite sample performance of the newly proposed intervals with the existing intervals for the sensitivity in terms of coverage probability. A real example is used to illustrate the application of the recommended methods.
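
    The distributional fact behind these intervals can be stated in one line. With R(p) the profile empirical likelihood ratio for the sensitivity at the fixed specificity level, the abstract's result takes the form

        -2 \log R(p_0) \;\xrightarrow{d}\; r\,\chi^2_1

    for a scaling constant r, so a 100(1-\alpha)% confidence interval is obtained by inverting the set \{p : -2\log R(p) \le \hat{r}\,\chi^2_{1,1-\alpha}\} with \hat{r} estimated from the data. The notation here is ours for orientation; the paper defines the exact scaling constant and its estimator.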

  6. Two-stage effects of awareness cascade on epidemic spreading in multiplex networks

    NASA Astrophysics Data System (ADS)

    Guo, Quantong; Jiang, Xin; Lei, Yanjun; Li, Meng; Ma, Yifang; Zheng, Zhiming

    2015-01-01

    Human awareness plays an important role in the spread of infectious diseases and the control of propagation patterns. The dynamic process with human awareness is called awareness cascade, during which individuals exhibit herd-like behavior because they are making decisions based on the actions of other individuals [Borge-Holthoefer et al., J. Complex Networks 1, 3 (2013), 10.1093/comnet/cnt006]. In this paper, to investigate epidemic spreading with awareness cascade, we propose a local-awareness-controlled contagion spreading model on multiplex networks. By theoretical analysis using a microscopic Markov chain approach and numerical simulations, we find the emergence of an abrupt transition of the epidemic threshold βc as the local awareness ratio α approaches 0.5, which induces two-stage effects on the epidemic threshold and the final epidemic size. These findings indicate that the increase of α can accelerate the outbreak of epidemics. Furthermore, a simple 1D lattice model is investigated to illustrate the two-stage-like sharp transition at αc ≈ 0.5. The results can give us a better understanding of why some epidemics cannot break out in reality and also provide a potential access to suppressing and controlling awareness cascading systems.

  7. Two-stage revision of septic knee prosthesis with articulating knee spacers yields better infection eradication rate than one-stage or two-stage revision with static spacers.

    PubMed

    Romanò, C L; Gala, L; Logoluso, N; Romanò, D; Drago, L

    2012-12-01

    The best method for treating chronic periprosthetic knee infection remains controversial, and randomized comparative studies on treatment modalities are lacking. This systematic review of the literature compares the infection eradication rate after two-stage versus one-stage revision, and after static versus articulating spacers in two-stage procedures. We reviewed full-text papers, and those with an abstract in English, published from 1966 through 2011 that reported the success rate of infection eradication after one-stage or two-stage revision with the two different types of spacers. In all, 6 original articles reporting the results after one-stage knee exchange arthroplasty (n = 204) and 38 papers reporting on two-stage revision (n = 1,421) were reviewed. The average success rate of infection eradication was 89.8% after two-stage revision and 81.9% after a one-stage procedure, at a mean follow-up of 44.7 and 40.7 months, respectively. The average infection eradication rate after a two-stage procedure was slightly, although significantly, higher when an articulating spacer rather than a static spacer was used (91.2% versus 87%). The methodological limitations of this study and the heterogeneous material in the studies reviewed notwithstanding, this systematic review shows that, on average, a two-stage procedure is associated with a higher rate of infection eradication than one-stage revision for septic knee prosthesis, and that articulating spacers are associated with a lower recurrence of infection than static spacers at a comparable mean duration of follow-up. Level of evidence: IV.

  8. Advanced two-stage compressor program design of inlet stage

    NASA Technical Reports Server (NTRS)

    Bryce, C. A.; Paine, C. J.; Mccutcheon, A. R. S.; Tu, R. K.; Perrone, G. L.

    1973-01-01

    The aerodynamic design of an inlet stage for a two-stage, 10/1 pressure ratio, 2 lb/sec flow rate compressor is discussed. Initially a performance comparison was conducted for an axial, mixed flow and centrifugal second stage. A modified mixed flow configuration with tandem rotors and tandem stators was selected for the inlet stage. The term conical flow compressor was coined to describe a particular type of mixed flow compressor configuration which utilizes axial flow type blading and an increase in radius to increase the work input potential. Design details of the conical flow compressor are described.

  9. Enhanced nonlinearity interval mapping scheme for high-performance simulation-optimization of watershed-scale BMP placement

    NASA Astrophysics Data System (ADS)

    Zou, Rui; Riverson, John; Liu, Yong; Murphy, Ryan; Sim, Youn

    2015-03-01

    Integrated continuous simulation-optimization models can be effective predictors of process-based responses for cost-benefit optimization of best management practice (BMP) selection and placement. However, practical application of simulation-optimization models is computationally prohibitive for large-scale systems. This study proposes an enhanced Nonlinearity Interval Mapping Scheme (NIMS) to solve large-scale watershed simulation-optimization problems several orders of magnitude faster than other commonly used algorithms. An efficient interval response coefficient (IRC) derivation method was incorporated into the NIMS framework to overcome a computational bottleneck. The proposed algorithm was evaluated using a case study watershed in the Los Angeles County Flood Control District. Using a continuous simulation watershed/stream-transport model, Loading Simulation Program in C++ (LSPC), three nested in-stream compliance points (CPs), each with multiple Total Maximum Daily Load (TMDL) targets, were selected to derive optimal treatment levels for each of the 28 subwatersheds, so that the TMDL targets at all CPs were met at the lowest possible BMP implementation cost. A Genetic Algorithm (GA) and NIMS were both applied and compared. The results showed that NIMS took 11 iterations (about 11 min) to complete, with the resulting optimal solution having a total cost of 67.2 million, while each of the multiple GA executions took 21-38 days to reach near-optimal solutions. The best solution obtained among all the GA executions had a minimized cost of 67.7 million, marginally higher but approximately equal to that of the NIMS solution. The results highlight the utility of NIMS for decision making in large-scale watershed simulation-optimization formulations.

  10. Interactive two-stage stochastic fuzzy programming for water resources management.

    PubMed

    Wang, S; Huang, G H

    2011-08-01

    In this study, an interactive two-stage stochastic fuzzy programming (ITSFP) approach has been developed through incorporating an interactive fuzzy resolution (IFR) method within an inexact two-stage stochastic programming (ITSP) framework. ITSFP can not only tackle dual uncertainties presented as fuzzy boundary intervals that exist in the objective function and the left- and right-hand sides of constraints, but also permit in-depth analyses of various policy scenarios that are associated with different levels of economic penalties when the promised policy targets are violated. A management problem in terms of water resources allocation has been studied to illustrate applicability of the proposed approach. The results indicate that a set of solutions under different feasibility degrees has been generated for planning the water resources allocation. They can help the decision makers (DMs) to conduct in-depth analyses of tradeoffs between economic efficiency and constraint-violation risk, as well as enable them to identify, in an interactive way, a desired compromise between satisfaction degree of the goal and feasibility of the constraints (i.e., risk of constraint violation). Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Evaluation of the effect of one stage versus two stage full mouth disinfection on C-reactive protein and leucocyte count in patients with chronic periodontitis.

    PubMed

    Pabolu, Chandra Mohan; Mutthineni, Ramesh Babu; Chintala, Srikanth; Naheeda; Mutthineni, Navya

    2013-07-01

    Conventional non-surgical periodontal therapy is carried out on a quadrant basis with a 1-2 week interval between sessions. This time lag may result in re-infection of instrumented pockets and may impair healing. Therefore, a new approach to full-mouth non-surgical therapy, completed within two consecutive days with full-mouth disinfection (FMD), has been suggested. In periodontitis, leukocyte counts and levels of C-reactive protein (CRP) are likely to be slightly elevated, indicating the presence of infection or inflammation. The aim of this study is to compare the efficacy of one-stage and two-stage non-surgical therapy on clinical parameters along with CRP levels and total white blood cell (TWBC) count. A total of 20 patients were selected and divided into two groups. Group 1 received one-stage FMD and Group 2 received two-stage FMD. Plaque index, sulcus bleeding index, probing depth, clinical attachment loss, serum CRP, and TWBC count were evaluated for both groups at baseline and at 1 month post-treatment. The results were analyzed using the Student t-test. Both treatment modalities led to a significant improvement of the clinical and hematological parameters; however, comparison between the two groups showed no significant difference after 1 month. The therapeutic intervention may have a systemic effect on blood count in periodontitis patients. Though one-stage FMD had limited benefits over two-stage FMD, the therapy can be accomplished in a shorter duration.

  12. Increasing accuracy in the interval analysis by the improved format of interval extension based on the first order Taylor series

    NASA Astrophysics Data System (ADS)

    Li, Yi; Xu, Yan Long

    2018-05-01

    When the dependence of the function on uncertain variables is non-monotonic over an interval, the interval of the function obtained by the classic interval extension based on the first-order Taylor series can exhibit significant errors. In order to reduce these errors, an improved format of the interval extension with the first-order Taylor series is developed here that accounts for the monotonicity of the function. Two typical mathematical examples are given to illustrate the methodology. The vibration of a beam with lumped masses is studied to demonstrate the usefulness of the method in practical applications; the only required input data are the function value at the midpoint of the interval and the sensitivity and deviation of the function. The results of these examples show that the function intervals obtained by the proposed method are more accurate than those obtained by the classic method.
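
    The contrast being exploited can be written out explicitly (notation assumed here for illustration). For an interval X = [x_c - \Delta, x_c + \Delta], the classic first-order extension is

        f(X) \;\subseteq\; f(x_c) + f'(x_c)\,[-\Delta, \Delta]

    which overestimates the true range whenever the linearization is loose, whereas if f is monotonic on X the exact range comes directly from the endpoints, f(X) = [\min, \max] of \{f(x_c - \Delta), f(x_c + \Delta)\}. Applying the endpoint form wherever monotonicity holds, and the Taylor form elsewhere, is the essence of the improved format.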

  13. Confidence intervals and sample size calculations for the standardized mean difference effect size between two normal populations under heteroscedasticity.

    PubMed

    Shieh, G

    2013-12-01

    The use of effect sizes and associated confidence intervals in all empirical research has been strongly emphasized by journal publication guidelines. To help advance theory and practice in the social sciences, this article describes an improved procedure for constructing confidence intervals of the standardized mean difference effect size between two independent normal populations with unknown and possibly unequal variances. The presented approach has advantages over the existing formula in both theoretical justification and computational simplicity. In addition, simulation results show that the suggested one- and two-sided confidence intervals are more accurate in achieving the nominal coverage probability. The proposed estimation method provides a feasible alternative to the most commonly used measure of Cohen's d and the corresponding interval procedure when the assumption of homogeneous variances is not tenable. To further improve the potential applicability of the suggested methodology, the sample size procedures for precise interval estimation of the standardized mean difference are also delineated. The desired precision of a confidence interval is assessed with respect to the control of expected width and to the assurance probability of interval width within a designated value. Supplementary computer programs are developed to aid in the usefulness and implementation of the introduced techniques.

  14. The String Stability of a Trajectory-Based Interval Management Algorithm in the Midterm Airspace

    NASA Technical Reports Server (NTRS)

    Swieringa, Kurt A.

    2015-01-01

    NASA's first Air Traffic Management (ATM) Technology Demonstration (ATD-1) was created to facilitate the transition of mature ATM technologies from the laboratory to operational use. The technologies selected for demonstration are the Traffic Management Advisor with Terminal Metering (TMA-TM), which provides precise time-based scheduling in the terminal airspace; Controller Managed Spacing (CMS), which provides terminal controllers with decision support tools enabling precise schedule conformance; and Interval Management (IM), which consists of flight deck automation that enables aircraft to achieve or maintain a precise spacing interval behind a target aircraft. As the percentage of IM-equipped aircraft increases, controllers may provide IM clearances to sequences, or strings, of IM-equipped aircraft, and it is important for these strings to maintain stable performance. This paper describes an analytical study of the string stability of the latest version of NASA's IM algorithm and a fast-time simulation designed to characterize the string performance of the algorithm. The analysis showed that the spacing algorithm has stable poles, indicating that a spacing-error perturbation will be reduced as a function of string position. The fast-time simulation investigated IM operations at two airports using constraints associated with the midterm airspace, including limited information about the target aircraft's intended speed profile and about the wind forecast on the target aircraft's route. The results demonstrated that the performance of the spacing algorithm is acceptable for strings of moderate length; however, there is some degradation in IM performance as a function of string position.

  15. Comparison of Alaskan Flood Stages: Annual Exceedance Probability vs. Impact Based Stages and Recommendations for the Future

    NASA Astrophysics Data System (ADS)

    Anderson, B. J.

    2016-12-01

    The Alaska-Pacific River Forecast Center (APRFC) issues water level forecasts that are used in conjunction with established flood stages to provide flood warning and advisory information to the public. The APRFC typically establishes flood stages based on observed impacts, but Alaska has sparse empirical data (e.g., few impact surveys). Thus, service hydrologists in Alaska use flood frequency analysis (LP3 distribution) to estimate flood stages from annual exceedance probabilities (AEPs) (Curran et al., 2016). Previously, the APRFC has maintained that bankfull stage corresponds to the 50% AEP, minor to 10-20% AEP, moderate to 2.5-7% AEP, and major to 1-2% AEP, but we now need to statistically verify this relationship. Our objective is therefore to validate the relationship between flood stages and the stages associated with the 50, 20, 10, 4, 2, 1, 0.5, and 0.2% AEPs to provide recommendations for improved flood forecasting. We studied the relationship between AEP and flood stage for all 56 gages used by the APRFC that had rating curves no older than 3 years, flood stages based on observed impacts, and at least 10 years of peak annual stage data. The analysis found relatively strong relationships for all flood stages except bankfull stage, but with some differences from the traditionally referenced relationship. Major flood stage appears most similar to the 1-0.2% AEP (100-500 year recurrence interval), while moderate flood stage best fits the 2-4% AEP (25-50 year recurrence interval). Gages showing a difference in stage of 2 ft or greater exhibited this difference across all flood stages, which we link to site-specific qualities such as susceptibility to ice-jam flooding. We present this method as a possible application to Alaskan rivers as a general flood stage guideline.

  16. Robotic fish tracking method based on suboptimal interval Kalman filter

    NASA Astrophysics Data System (ADS)

    Tong, Xiaohong; Tang, Chao

    2017-11-01

    Research on autonomous underwater vehicles (AUVs) has focused on tracking and positioning, precise guidance, return to dock, and other fields. Robotic fish, a class of AUV, have become popular in education and in civil and military applications. In nonlinear tracking analysis of robotic fish, it has been found that the interval Kalman filter algorithm contains all possible filter results, but the resulting range is wide and relatively conservative, and the interval data vector is uncertain before implementation. This paper proposes an optimized, suboptimal interval Kalman filter. The suboptimal scheme replaces the interval matrix inverse with its worst-case inverse, which approximates the nonlinear state and measurement equations more closely than the standard interval Kalman filter, increases the accuracy of the nominal dynamic system model, and improves the speed and precision of the tracking system. Monte Carlo simulation results show that the trajectory obtained by the suboptimal interval Kalman filter algorithm is better than those of the interval Kalman filter and the standard Kalman filter.
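
    A minimal way to see what an interval filter bounds is to run an ordinary scalar Kalman filter at the two endpoints of an uncertain parameter interval. The Python sketch below does this for an interval-valued measurement-noise variance; it is a toy illustration of interval bounding only, not the worst-case-inverse suboptimal scheme proposed in the paper.

        import numpy as np

        def scalar_kf(zs, q, r, x0=0.0, p0=1.0):
            """Scalar random-walk Kalman filter; returns state estimates."""
            x, p, out = x0, p0, []
            for z in zs:
                p += q                    # predict (random-walk model)
                k = p / (p + r)           # Kalman gain
                x += k * (z - x)          # measurement update
                p *= 1.0 - k
                out.append(x)
            return np.array(out)

        rng = np.random.default_rng(0)
        truth = np.cumsum(rng.normal(0.0, 0.1, 100))
        zs = truth + rng.normal(0.0, 0.5, 100)
        est_lo = scalar_kf(zs, q=0.01, r=0.2)   # optimistic noise bound
        est_hi = scalar_kf(zs, q=0.01, r=0.8)   # pessimistic noise bound
        # Endpoint runs give a rough envelope for intermediate noise levels.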

  17. A robust two-stage design identifying the optimal biological dose for phase I/II clinical trials.

    PubMed

    Zang, Yong; Lee, J Jack

    2017-01-15

    We propose a robust two-stage design to identify the optimal biological dose for phase I/II clinical trials evaluating both toxicity and efficacy outcomes. In the first stage of dose finding, we use the Bayesian model averaging continual reassessment method to monitor the toxicity outcomes and adopt an isotonic regression method based on the efficacy outcomes to guide dose escalation. When the first stage ends, we use the Dirichlet-multinomial distribution to jointly model the toxicity and efficacy outcomes and pick the candidate doses based on a three-dimensional volume ratio. The selected candidate doses are then seamlessly advanced to the second stage for dose validation. Both toxicity and efficacy outcomes are continuously monitored so that any overly toxic and/or less efficacious dose can be dropped from the study as the trial continues. When the phase I/II trial ends, we select the optimal biological dose as the dose obtaining the minimal value of the volume ratio within the candidate set. An advantage of the proposed design is that it does not impose a monotonically increasing assumption on the shape of the dose-efficacy curve. We conduct extensive simulation studies to examine the operating characteristics of the proposed design. The simulation results show that the proposed design has desirable operating characteristics across different shapes of the underlying true dose-toxicity and dose-efficacy curves. The software to implement the proposed design is available upon request. Copyright © 2016 John Wiley & Sons, Ltd.

  18. An optimal stratified Simon two-stage design.

    PubMed

    Parashar, Deepak; Bowden, Jack; Starr, Colin; Wernisch, Lorenz; Mander, Adrian

    2016-07-01

    In Phase II oncology trials, therapies are increasingly being evaluated for their effectiveness in specific populations of interest. Such targeted trials require designs that allow for stratification based on the participants' molecular characterisation. A targeted design proposed by Jones and Holmgren (JH) [Jones CL, Holmgren E: 'An adaptive Simon two-stage design for phase 2 studies of targeted therapies', Contemporary Clinical Trials 28 (2007) 654-661] determines whether a drug has activity only in a disease sub-population or in the wider disease population. Their adaptive design uses results from a single interim analysis to decide whether to enrich the study population with a subgroup or not; it is based on two parallel Simon two-stage designs. We study the JH design in detail and extend it by providing a few alternative ways to control the familywise error rate, in the weak sense as well as the strong sense. We also introduce a novel optimal design by minimising the expected sample size. Our extended design contributes to the much-needed framework for conducting Phase II trials in stratified medicine. © 2016 The Authors. Pharmaceutical Statistics Published by John Wiley & Sons Ltd.
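
    For readers unfamiliar with the underlying machinery, the operating characteristics of a single Simon two-stage design (r1/n1, r/n) are a short computation. The Python sketch below evaluates the probability of early termination, the expected sample size, and the overall rejection probability under a true response rate p; this is generic Simon-design arithmetic, not the JH stratified extension.

        from scipy.stats import binom

        def simon_oc(r1, n1, r, n, p):
            """Stage 1: stop for futility if <= r1 responses among n1.
            Overall: reject the drug if <= r responses among n."""
            pet = binom.cdf(r1, n1, p)            # P(early termination)
            en = n1 + (1.0 - pet) * (n - n1)      # expected sample size
            p_reject = pet + sum(
                binom.pmf(x, n1, p) * binom.cdf(r - x, n - n1, p)
                for x in range(r1 + 1, min(n1, r) + 1)
            )
            return pet, en, p_reject

        # Example design: stop if <= 1/10 responses; reject if <= 5/29 overall.
        pet, en, p_reject = simon_oc(r1=1, n1=10, r=5, n=29, p=0.10)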

  19. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis.

    PubMed

    Jamshidy, Ladan; Mozaffari, Hamid Reza; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, representing the first molar, was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically at mid-mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent t-test showed that the mean marginal gap obtained by the one-stage impression technique was higher than that of the two-stage technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was reported between the two techniques in the MDL regions and overall. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage technique.

  20. Availability analysis of mechanical systems with condition-based maintenance using semi-Markov and evaluation of optimal condition monitoring interval

    NASA Astrophysics Data System (ADS)

    Kumar, Girish; Jain, Vipul; Gandhi, O. P.

    2018-03-01

    Maintenance helps to extend equipment life by improving its condition and avoiding catastrophic failures. An appropriate model or mechanism is thus needed to quantify system availability vis-a-vis a given maintenance strategy, which will assist in decision making for optimal utilization of maintenance resources. This paper deals with semi-Markov process (SMP) modeling for steady-state availability analysis of mechanical systems that follow condition-based maintenance (CBM), and with evaluation of the optimal condition monitoring interval. The developed SMP model is solved using a two-stage analytical approach for steady-state availability analysis of the system. The CBM interval is then chosen to maximize system availability using a Genetic Algorithm approach. The main contribution of the paper is a predictive tool for system availability that will help in deciding the optimum CBM policy. The proposed methodology is demonstrated for a centrifugal pump.
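
    The steady-state result that such an SMP analysis produces is compact: with \pi_i the stationary probabilities of the embedded Markov chain and \tau_i the mean sojourn times in each state, the steady-state availability is

        A \;=\; \frac{\sum_{i \in U} \pi_i \tau_i}{\sum_{j \in S} \pi_j \tau_j}

    where S is the full state space and U \subset S the set of operational (up) states. This is the standard semi-Markov formula, stated here for orientation; the paper's two-stage analytical approach is a route to evaluating it for the CBM state model, with the condition monitoring interval entering through the model parameters tuned by the Genetic Algorithm.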

  1. Two-stage crossed beam cooling with ⁶Li and ¹³³Cs atoms in microgravity.

    PubMed

    Luan, Tian; Yao, Hepeng; Wang, Lu; Li, Chen; Yang, Shifeng; Chen, Xuzong; Ma, Zhaoyuan

    2015-05-04

    Applying the direct simulation Monte Carlo (DSMC) method developed for research on ultracold Bose-Fermi mixture gases, we study the sympathetic cooling process of 6Li and 133Cs atoms in a crossed optical dipole trap. The obstacles to producing a 6Li Fermi degenerate gas via direct sympathetic cooling with 133Cs are also analyzed, and we find that the side effect of gravity is one of the main obstacles. Based on the dynamic nature of 6Li and 133Cs atoms, we suggest a two-stage cooling process with two pairs of crossed beams in a microgravity environment. According to our simulations, the 6Li atoms can be cooled to T = 29.5 pK and T/TF = 0.59 with several thousand atoms, offering a novel way to obtain ultracold fermionic atoms with quantum degeneracy near the pico-Kelvin scale.

  2. Modeling Two-Stage Bunch Compression With Wakefields: Macroscopic Properties And Microbunching Instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bosch, R.A.; Kleman, K.J.; /Wisconsin U., SRC

    2011-09-08

    In a two-stage compression and acceleration system, where each stage compresses a chirped bunch in a magnetic chicane, wakefields affect high-current bunches. The longitudinal wakes affect the macroscopic energy and current profiles of the compressed bunch and cause microbunching at short wavelengths. For macroscopic wavelengths, impedance formulas and tracking simulations show that the wakefields can be dominated by the resistive impedance of coherent edge radiation. For this case, we calculate the minimum initial bunch length that can be compressed without producing an upright tail in phase space and associated current spike. Formulas are also obtained for the jitter in the bunch arrival time downstream of the compressors that results from the bunch-to-bunch variation of current, energy, and chirp. Microbunching may occur at short wavelengths where the longitudinal space-charge wakes dominate or at longer wavelengths dominated by edge radiation. We model this range of wavelengths with frequency-dependent impedance before and after each stage of compression. The growth of current and energy modulations is described by analytic gain formulas that agree with simulations.

  3. [Study on supply and demand relation based on two stages division of market of Chinese materia medica].

    PubMed

    Yang, Guang; Guo, Lan-Ping; Wang, Nuo; Zeng, Yan; Huang, Lu-Qi

    2014-01-01

    The complex production processes and long industrial chain of the traditional Chinese medicine (TCM) market make research on its market microstructure difficult. After defining the logical relationships among the different concepts, this paper divides the TCM market into two stages: the Chinese materia medica resource market and the traditional Chinese patent medicines market. On this foundation, we investigated the supply capacity, approach rules, and motivation system of suppliers in the TCM market; analyzed the demand situation from the perspective of the demand side; and evaluated purchasing power in terms of population profile, income, and insurance. Furthermore, we analyzed the price formation mechanism in the two stages of the TCM market. We hope this study can have a positive, promotional effect on research related to the TCM market.

  4. Evaluation of the effect of one stage versus two stage full mouth disinfection on C-reactive protein and leucocyte count in patients with chronic periodontitis

    PubMed Central

    Pabolu, Chandra Mohan; Mutthineni, Ramesh Babu; Chintala, Srikanth; Naheeda; Mutthineni, Navya

    2013-01-01

    Background: Conventional non-surgical periodontal therapy is carried out on a quadrant basis with a 1-2 week interval between sessions. This time lag may result in re-infection of instrumented pockets and may impair healing. Therefore, a new approach to full-mouth non-surgical therapy, completed within two consecutive days with full-mouth disinfection (FMD), has been suggested. In periodontitis, leukocyte counts and levels of C-reactive protein (CRP) are likely to be slightly elevated, indicating the presence of infection or inflammation. The aim of this study is to compare the efficacy of one-stage and two-stage non-surgical therapy on clinical parameters along with CRP levels and total white blood cell (TWBC) count. Materials and Methods: A total of 20 patients were selected and divided into two groups. Group 1 received one-stage FMD and Group 2 received two-stage FMD. Plaque index, sulcus bleeding index, probing depth, clinical attachment loss, serum CRP, and TWBC count were evaluated for both groups at baseline and at 1 month post-treatment. Results: The results were analyzed using the Student t-test. Both treatment modalities led to a significant improvement of the clinical and hematological parameters; however, comparison between the two groups showed no significant difference after 1 month. Conclusion: The therapeutic intervention may have a systemic effect on blood count in periodontitis patients. Though one-stage FMD had limited benefits over two-stage FMD, the therapy can be accomplished in a shorter duration. PMID:24174726

  5. Two stage indirect evaporative cooling system

    DOEpatents

    Bourne, Richard C.; Lee, Brian E.; Callaway, Duncan

    2005-08-23

    A two-stage indirect evaporative cooler that moves air from a blower mounted above the unit vertically downward into dry air passages in an indirect stage, then turns the air flow horizontally before it leaves the indirect stage. After leaving the dry passages, a major portion of the air travels into the direct stage, and the remainder is induced by a pressure drop in the direct stage to turn 180° and return horizontally through wet passages in the indirect stage and out of the unit as exhaust air.

  6. A two-stage model of fracture of rocks

    USGS Publications Warehouse

    Kuksenko, V.; Tomilin, N.; Damaskinskaya, E.; Lockner, D.

    1996-01-01

    In this paper we propose a two-stage model of rock fracture. In the first stage, cracks, or local regions of failure, occur randomly and without correlation throughout the rock in response to loading of pre-existing flaws. As damage accumulates in the rock, there is a gradual increase in the probability that large clusters of closely spaced cracks or local failure sites will develop. Based on statistical arguments, a critical density of damage will occur at which clusters of flaws become large enough to lead to larger-scale failure of the rock (stage two). While crack interaction and cooperative failure are expected to occur within clusters of closely spaced cracks, the initial development of clusters is predicted from the random variation in pre-existing flaw populations. Thus, the onset of the unstable second stage in the model can be computed from the generation of random, uncorrelated damage. The proposed model incorporates notions of the kinetic (and therefore time-dependent) nature of the strength of solids, as well as the discrete hierarchic structure of rocks and the flaw populations that lead to damage accumulation. The advantage offered by this model is that its salient features are valid for fracture processes occurring over a wide range of scales, including earthquake processes. A notion of the rank (size) of a fracture is introduced, and criteria are presented for both fracture nucleation and the transition of the failure process from one scale to another.

  7. Accuracy of the One-Stage and Two-Stage Impression Techniques: A Comparative Analysis

    PubMed Central

    Jamshidy, Ladan; Faraji, Payam; Sharifi, Roohollah

    2016-01-01

    Introduction. One of the main steps of impression is the selection and preparation of an appropriate tray. Hence, the present study aimed to analyze and compare the accuracy of one- and two-stage impression techniques. Materials and Methods. A resin laboratory-made model, representing the first molar, was prepared by a standard method for full crowns, with a processed preparation finish line of 1 mm depth and a convergence angle of 3-4°. Impressions were made 20 times with the one-stage technique and 20 times with the two-stage technique using an appropriate tray. To measure the marginal gap, the distance between the restoration margin and the preparation finish line of the plaster dies was determined vertically at mid-mesial, distal, buccal, and lingual (MDBL) regions by a stereomicroscope using a standard method. Results. The results of the independent t-test showed that the mean marginal gap obtained by the one-stage impression technique was higher than that of the two-stage technique. Further, there was no significant difference between the one- and two-stage impression techniques in the mid-buccal region, but a significant difference was reported between the two techniques in the MDL regions and overall. Conclusion. The findings of the present study indicated higher accuracy for the two-stage impression technique than for the one-stage technique. PMID:28003824

  8. Bayesian analyses of time-interval data for environmental radiation monitoring.

    PubMed

    Luo, Peng; Sharp, Julia L; DeVol, Timothy A

    2013-01-01

    Time-interval (time difference between two consecutive pulses) analysis based on the principles of Bayesian inference was investigated for online radiation monitoring. Using experimental and simulated data, Bayesian analysis of time-interval data [Bayesian (ti)] was compared with Bayesian and a conventional frequentist analysis of counts in a fixed count time [Bayesian (cnt) and single interval test (SIT), respectively]. The performances of the three methods were compared in terms of average run length (ARL) and detection probability for several simulated detection scenarios. Experimental data were acquired with a DGF-4C system in list mode. Simulated data were obtained using Monte Carlo techniques to obtain a random sampling of the Poisson distribution. All statistical algorithms were developed using the R Project for statistical computing. Bayesian analysis of time-interval information provided a similar detection probability as Bayesian analysis of count information, but the authors were able to make a decision with fewer pulses at relatively higher radiation levels. In addition, for the cases with very short presence of the source (< count time), time-interval information is more sensitive to detect a change than count information since the source data is averaged by the background data over the entire count time. The relationships of the source time, change points, and modifications to the Bayesian approach for increasing detection probability are presented.
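
    A minimal version of the time-interval idea: for Poisson counting, inter-arrival times are exponentially distributed, and a Gamma prior on the count rate is conjugate, so the posterior updates pulse by pulse. The Python sketch below flags a source when the posterior probability that the rate exceeds background passes a threshold; the prior settings and threshold are assumptions for illustration, and the paper's Bayesian (ti) procedure is more elaborate.

        import numpy as np
        from scipy.stats import gamma

        def flag_source(intervals, bkg_rate, a0=1.0, b0=1.0, threshold=0.95):
            """Sequential Gamma-Exponential update on inter-arrival times.
            Returns the pulse index where P(rate > bkg_rate) first exceeds
            the threshold, or None if it never does."""
            a, b = a0, b0
            for i, dt in enumerate(intervals, start=1):
                a += 1.0     # one more pulse observed
                b += dt      # total elapsed time accumulates
                if gamma.sf(bkg_rate, a, scale=1.0 / b) > threshold:
                    return i
            return None

        # Example: pulses at 10 cps against an assumed 2 cps background.
        rng = np.random.default_rng(1)
        alarm_at = flag_source(rng.exponential(1 / 10.0, 200), bkg_rate=2.0)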

  9. Two-Stage Centrifugal Fan

    NASA Technical Reports Server (NTRS)

    Converse, David

    2011-01-01

    Fan designs are often constrained by envelope, rotational speed, weight, and power. Aerodynamic performance and motor electrical performance are heavily influenced by rotational speed. The fan used in this work is at a practical limit for rotational speed due to motor performance characteristics, and there is no more space available in the packaging for a larger fan, yet the pressure rise (ΔP) requirements keep growing. Ordinarily, a higher ΔP is accommodated by spinning faster or growing the fan rotor diameter. The invention is to put two radially oriented stages on a single disk. Flow enters the first stage from the center; energy is imparted to the flow in the first-stage blades, the flow is redirected some amount opposite to the direction of rotation in the fixed stators, and more energy is imparted to the flow in the second-stage blades. Without increasing either rotational speed or disk diameter, it is believed that as much as 50 percent more ΔP can be achieved with this design than with an ordinary, single-stage centrifugal design. This invention is useful primarily for fans having relatively low flow rates with relatively high pressure rise requirements.

  10. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    NASA Technical Reports Server (NTRS)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two stage heuristic algorithm for solving the Runway Operations Planning (ROP) problem. In the first stage, sequences of departure class slots and runway crossings slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft, by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.

  11. How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models.

    PubMed

    Francq, Bernard G; Govaerts, Bernadette

    2016-06-30

    Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced, as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that (correlated-)errors-in-variables regressions should not be avoided in method-comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method-comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
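
    For orientation, the agreement-interval side of the comparison is a short computation: the Bland-Altman 95% limits of agreement are the mean difference plus or minus 1.96 standard deviations of the differences. A minimal Python sketch (the tolerance and predictive intervals proposed in the paper are refinements of this):

        import numpy as np

        def limits_of_agreement(x, y):
            """Bland-Altman bias and 95% limits of agreement for paired data."""
            d = np.asarray(x, float) - np.asarray(y, float)
            bias, sd = d.mean(), d.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        bias, (lo, hi) = limits_of_agreement(
            [10.1, 9.8, 10.5, 9.9], [10.0, 9.9, 10.2, 10.1])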

  12. A generalized weight-based particle-in-cell simulation scheme

    NASA Astrophysics Data System (ADS)

    Lee, W. W.; Jenkins, T. G.; Ethier, S.

    2011-03-01

    A generalized weight-based particle simulation scheme suitable for simulating magnetized plasmas, where the zeroth-order inhomogeneity is important, is presented. The scheme is an extension of the perturbative simulation schemes developed earlier for particle-in-cell (PIC) simulations. The new scheme is designed to simulate both the perturbed distribution (δf) and the full distribution (full-F) within the same code. The development is based on the concept of multiscale expansion, which separates the scale lengths of the background inhomogeneity from those associated with the perturbed distributions. The potential advantage of such an arrangement is to minimize the particle noise by using δf in the linear stage of the simulation, while retaining the flexibility of a full-F capability in the fully nonlinear stage of the development, when signals associated with plasma turbulence are at a much higher level than those from the intrinsic particle noise.
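
    For orientation, the textbook δf decomposition that such schemes generalize splits the distribution into a slowly varying background and a perturbation, with each marker particle carrying a weight (generic form, not necessarily the paper's exact generalized weight equations):

        f(\mathbf{z}, t) = f_0(\mathbf{z}) + \delta f(\mathbf{z}, t), \qquad w_j \equiv \left. \frac{\delta f}{f} \right|_{\mathbf{z} = \mathbf{z}_j(t)}

    so that the particle noise scales with |δf| rather than |f| in the linear stage, while letting δf approach f recovers a full-F description.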

  13. Online two-stage association method for robust multiple people tracking

    NASA Astrophysics Data System (ADS)

    Lv, Jingqin; Fang, Jiangxiong; Yang, Jie

    2011-07-01

    Robust multiple people tracking is very important for many applications. It is a challenging problem due to occlusion and interaction in crowded scenarios. This paper proposes an online two-stage association method for robust multiple people tracking. In the first stage, short tracklets generated by linking people detection responses grow longer through particle-filter-based tracking, with detection confidence embedded into the observation model. An examining scheme runs at each frame to check the reliability of tracking. In the second stage, multiple people tracking is achieved by linking tracklets to generate trajectories. An online tracklet association method is proposed to solve the linking problem, which allows applications in time-critical scenarios. The method is evaluated on the popular CAVIAR dataset. The experimental results show that our two-stage method is robust.

  14. Reduction in interval cancer rates following the introduction of two-view mammography in the UK breast screening programme

    PubMed Central

    Dibden, A; Offman, J; Parmar, D; Jenkins, J; Slater, J; Binysh, K; McSorley, J; Scorfield, S; Cumming, P; Liao, X-H; Ryan, M; Harker, D; Stevens, G; Rogers, N; Blanks, R; Sellars, S; Patnick, J; Duffy, S W

    2014-01-01

    Background: The introduction of two-view mammography at incident (subsequent) screens in the National Health Service Breast Screening Programme (NHSBSP) has led to an increased number of cancers detected at screen. However, the effect of two-view mammography on interval cancer rates has yet to be assessed. Methods: Routine screening and interval cancer data were collated from all screening programmes in the United Kingdom for women aged 50–64, screened between 1 April 2003 and 31 March 2005. Interval cancer rates were compared based on whether two-view mammography was in use at the last routine screen. Results: The reduction in interval cancers following screening using two-view mammography compared with one view was 0.68 per 1 000 women screened. Overall, this suggests the introduction of two-view mammography at incident screen was accompanied by a 15–20% reduction in interval cancer rates in the NHSBSP. Conclusion: The introduction of two-view mammography at incident screens is associated with a reduction in incidence of interval cancers. This is consistent with previous publications on a contemporaneous increase in screen-detected cancers. The results provide further evidence of the benefit of the use of two-view mammography at incident screens. PMID:24366303

  15. Design of a High-Energy, Two-Stage Pulsed Plasma Thruster

    NASA Technical Reports Server (NTRS)

    Markusic, T. E.; Thio, Y. C. F.; Cassibry, J. T.; Rodgers, Stephen L. (Technical Monitor)

    2002-01-01

    Design details of a proposed high-energy (approx. 50 kJ/pulse), two-stage pulsed plasma thruster are presented. The long-term goal of this project is to develop a high-power (approx. 500 kW), high specific impulse (approx. 7500 s), highly efficient (approx. 50%),and mechanically simple thruster for use as primary propulsion in a high-power nuclear electric propulsion system. The proposed thruster (PRC-PPT1) utilizes a valveless, liquid lithium-fed thermal plasma injector (first stage) followed by a high-energy pulsed electromagnetic accelerator (second stage). A numerical circuit model coupled with one-dimensional current sheet dynamics, as well as a numerical MHD simulation, are used to qualitatively predict the thermal plasma injection and current sheet dynamics, as well as to estimate the projected performance of the thruster. A set of further modelling efforts, and the experimental testing of a prototype thruster, is suggested to determine the feasibility of demonstrating a full scale high-power thruster.

  16. Stability analysis of a two-stage tapered gyrotron traveling-wave tube amplifier with distributed losses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hung, C. L.; Lian, Y. H.; Cheng, N. H.

    2012-11-15

    The two-stage tapered gyrotron traveling-wave tube (gyro-TWT) amplifier has achieved wide bandwidth in the millimeter wave range. However, possible oscillations in each stage limit this amplifier's operating beam current and thus its output power. To further enhance the amplifier's stability, distributed losses are applied to the interaction circuit of the two-stage tapered gyro-TWT. A self-consistent particle-tracing code is used for analyzing the beam-wave interactions. The stability analysis includes the effects of the wall losses and the length of each stage on the possible oscillations. Simulation results reveal that the distributed-loss method effectively stabilizes all the oscillations in the two stages. Under stable operating conditions, the device is predicted to produce a peak power of 60 kW with an efficiency of 29% and a saturated gain of 52 dB in the Ka-band. The 3-dB bandwidth is 5.7 GHz, which is approximately 16% of the center frequency.

  17. Self-Consistent Simulation of the Brownian Stage of Dust Growth

    NASA Technical Reports Server (NTRS)

    Kempf, S.; Pfalzner, S.; Henning, Th.

    1996-01-01

    It is a widely accepted view that in proto-planetary accretion disks the collision and following sticking of dust particles embedded in the gas eventually leads to the formation of planetesimals (coagulation). For the smallest dust grains, Brownian motion is assumed to be the dominant source of their relative velocities leading to collisions between these dust grains. As the dust grains grow they eventually couple to the turbulent motion of the gas which then drives the coagulation much more efficiently. Many numerical coagulation simulations have been carried out to calculate the fractal dimension of the aggregates, which determines the duration of the ineffective Brownian stage of growth. Predominantly on-lattice and off-lattice methods were used. However, both methods require simplification of the astrophysical conditions. The aggregates found by those methods had a fractal dimension of approximately 2 which is equivalent to a constant, mass-independent friction time. If this value were valid for the conditions in an accretion disk, this would mean that the coagulation process would finally 'freeze out' and the growth of a planetesimal would be impossible within the lifetime of an accretion disk. In order to investigate whether this fractal dimension is model independent, we simulate self-consistently the Brownian stage of the coagulation by an N-particle code. This method has the advantage that no further assumptions about homogeneity of the dust have to be made. In our model, the dust grains are considered as aggregates built up of spheres. The equation of motion of the dust grains is based on the probability density for the diffusive transport within the gas atmosphere. Because of the very low number density of the dust grains, only 2-body-collisions have to be considered. As the Brownian stage of growth is very inefficient, the system is to be simulated over long periods of time. In order to find close particle pairs of the system which are most likely to

  18. Broad-beam high-current dc ion source based on a two-stage glow discharge plasma.

    PubMed

    Vizir, A V; Oks, E M; Yushkov, G Yu

    2010-02-01

    We have designed, built, and demonstrated a broad-beam dc ion source based on a two-stage glow discharge plasma with a hollow cathode. The first-stage discharge (auxiliary discharge) produces electrons that are injected into the cathode cavity of a second-stage discharge (main discharge). The electron injection decreases the required operating pressure of the main discharge down to 0.05 mTorr and the required operating voltage down to about 50 V. The decrease in operating voltage of the main discharge reduces the fraction of impurity ions in the ion beam extracted from the main gas discharge plasma to less than 0.2%. Another feature of the source is a single-grid accelerating system in which the ion accelerating voltage is applied between the plasma itself and the grid electrode. The source has produced steady-state Ar, O, and N ion beams of about 14 cm diameter and current of more than 2 A at an accelerating voltage of up to 2 kV.

  19. A comparison of direct and two-stage transportation of patients to hospital in Poland.

    PubMed

    Rosiek, Anna; Rosiek-Kryszewska, Aleksandra; Leksowski, Łukasz; Leksowski, Krzysztof

    2015-04-24

    The rapid international expansion of telemedicine reflects the growth of technological innovations. This technological advancement is transforming the way in which patients can receive health care. The study was conducted in Poland, at the Department of Cardiology of the Regional Hospital of Louis Rydygier in Torun. The researchers analyzed the delay in the treatment of patients with acute coronary syndrome. The study was conducted as a survey and examined 67 consecutively admitted patients treated invasively in a two-stage transport system. Data were analyzed statistically. Two-stage transportation does not meet the timeframe guidelines for the treatment of patients with acute myocardial infarction. Intervals for the analyzed group of patients were statistically significant (p < 0.0001). Direct transportation of the patient to a reference center with an interventional cardiology laboratory has a significant impact on reducing in-hospital delay for patients with acute coronary syndrome. This article presents the results of two-stage transportation of patients with acute coronary syndrome. This measure could help clinicians who seek to assess the time needed for intervention. It also shows how important the time from the onset of chest pain is, and how delay may contribute to patient disability, death, or diminished well-being.

  20. QUANTIFYING AGGREGATE CHLORPYRIFOS EXPOSURE AND DOSE TO CHILDREN USING A PHYSICALLY-BASED TWO-STAGE MONTE CARLO PROBABILISTIC MODEL

    EPA Science Inventory

    To help address the Food Quality Protection Act of 1996, a physically-based, two-stage Monte Carlo probabilistic model has been developed to quantify and analyze aggregate exposure and dose to pesticides via multiple routes and pathways. To illustrate model capabilities and ide...
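
    Although the record above is truncated, the two-stage Monte Carlo idea it names is standard; a generic sketch (not the EPA model, and all distributions and units below are hypothetical): the outer stage samples uncertain model parameters, the inner stage samples person-to-person variability, yielding an uncertainty distribution for any exposure percentile.

        import numpy as np

        rng = np.random.default_rng(0)

        def inner_variability(n_people, mean_dose, gsd):
            """Stage 2: lognormal person-to-person variability in daily dose."""
            return rng.lognormal(np.log(mean_dose), np.log(gsd), n_people)

        p95_samples = []
        for _ in range(200):                            # stage 1: parameter uncertainty
            mean_dose = rng.triangular(0.5, 1.0, 2.0)   # hypothetical units, ug/kg/day
            gsd = rng.uniform(1.5, 2.5)
            doses = inner_variability(1000, mean_dose, gsd)
            p95_samples.append(np.percentile(doses, 95))

        print("95th-percentile dose: median %.2f, 90%% CI (%.2f, %.2f)"
              % tuple(np.percentile(p95_samples, [50, 5, 95])))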

  1. An innovative method for offshore wind farm site selection based on the interval number with probability distribution

    NASA Astrophysics Data System (ADS)

    Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng

    2017-12-01

    There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.

  2. A study of intensity, fatigue and precision in two specific interval trainings in young tennis players: high-intensity interval training versus intermittent interval training

    PubMed Central

    Suárez Rodríguez, David; del Valle Soto, Miguel

    2017-01-01

    Background The aim of this study is to find the differences between two specific interval exercises. We begin with the hypothesis that the use of microintervals of work and rest allow for greater intensity of play and a reduction in fatigue. Methods Thirteen competition-level male tennis players took part in two interval training exercises comprising nine 2 min series, which consisted of hitting the ball with cross-court forehand and backhand shots, behind the service box. One was a high-intensity interval training (HIIT), made up of periods of continuous work lasting 2 min, and the other was intermittent interval training (IIT), this time with intermittent 2 min intervals, alternating periods of work with rest periods. Average heart rate (HR) and lactate levels were registered in order to observe the physiological intensity of the two exercises, along with the Borg Scale results for perceived exertion and the number of shots and errors in order to determine the intensity achieved and the degree of fatigue throughout the exercise. Results There were no significant differences in the average heart rate, lactate or the Borg Scale. Significant differences were registered, on the other hand, with a greater number of shots in the first two HIIT series (series 1 p>0.009; series 2 p>0.056), but not in the third. The number of errors was significantly lower in all the IIT series (series 1 p<0.035; series 2 p<0.010; series 3 p<0.001). Conclusion Our study suggests that high-intensity intermittent training allows for greater intensity of play in relation to the real time spent on the exercise, reduced fatigue levels and the maintaining of greater precision in specific tennis-related exercises. PMID:29021912

  3. Drug discrimination under two concurrent fixed-interval fixed-interval schedules.

    PubMed

    McMillan, D E; Li, M

    2000-07-01

    Pigeons were trained to discriminate 5.0 mg/kg pentobarbital from saline under a two-key concurrent fixed-interval (FI) 100-s FI 200-s schedule of food presentation, and later under a concurrent FI 40-s FI 80-s schedule, in which the FI component with the shorter time requirement reinforced responding on one key after drug administration (pentobarbital-biased key) and on the other key after saline administration (saline-biased key). After responding stabilized under the concurrent FI 100-s FI 200-s schedule, pigeons earned an average of 66% (after pentobarbital) to 68% (after saline) of their reinforcers for responding under the FI 100-s component of the concurrent schedule. These birds made an average of 70% of their responses on both the pentobarbital-biased key after the training dose of pentobarbital and the saline-biased key after saline. After responding stabilized under the concurrent FI 40-s FI 80-s schedule, pigeons earned an average of 67% of their reinforcers for responding under the FI 40-s component after both saline and the training dose of pentobarbital. These birds made an average of 75% of their responses on the pentobarbital-biased key after the training dose of pentobarbital, but only 55% of their responses on the saline-biased key after saline. In test sessions preceded by doses of pentobarbital, chlordiazepoxide, ethanol, phencyclidine, or methamphetamine, the dose-response curves were similar under these two concurrent schedules. Pentobarbital, chlordiazepoxide, and ethanol produced dose-dependent increases in responding on the pentobarbital-biased key as the doses increased. For some birds, at the highest doses of these drugs, the dose-response curve turned over. Increasing doses of phencyclidine produced increased responding on the pentobarbital-biased key in some, but not all, birds. After methamphetamine, responding was largely confined to the saline-biased key. These data show that pigeons can perform drug discriminations under concurrent

  4. Two-axis tracking using translation stages for a lens-to-channel waveguide solar concentrator.

    PubMed

    Liu, Yuxiao; Huang, Ran; Madsen, Christi K

    2014-10-20

    A two-axis tracking scheme designed for <250x concentration, realized by a single-axis mechanical tracker and a translation stage, is discussed. The translation stage adjusts position for seasonal sun movement, providing two-dimensional x-y tracking instead of x-only horizontal movement. This tracking method is compatible with planar waveguide solar concentrators. A prototype system with 50x concentration shows >75% optical efficiency throughout the year in simulation and >65% efficiency experimentally. This efficiency can be further improved by the use of anti-reflection layers and a larger waveguide refractive index.

  5. Asymptotic confidence intervals for the Pearson correlation via skewness and kurtosis.

    PubMed

    Bishara, Anthony J; Li, Jiexiang; Nash, Thomas

    2018-02-01

    When bivariate normality is violated, the default confidence interval of the Pearson correlation can be inaccurate. Two new methods were developed based on the asymptotic sampling distribution of Fisher's z' under the general case where bivariate normality need not be assumed. In Monte Carlo simulations, the most successful of these methods relied on the (Vale & Maurelli, 1983, Psychometrika, 48, 465) family to approximate a distribution via the marginal skewness and kurtosis of the sample data. In Simulation 1, this method provided more accurate confidence intervals of the correlation in non-normal data, at least as compared to no adjustment of the Fisher z' interval, or to adjustment via the sample joint moments. In Simulation 2, this approximate distribution method performed favourably relative to common non-parametric bootstrap methods, but its performance was mixed relative to an observed imposed bootstrap and two other robust methods (PM1 and HC4). No method was completely satisfactory. An advantage of the approximate distribution method, though, is that it can be implemented even without access to raw data if sample skewness and kurtosis are reported, making the method particularly useful for meta-analysis. Supporting information includes R code. © 2017 The British Psychological Society.
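
    For reference, the default Fisher z' interval that the paper's adjusted methods are compared against (the skewness/kurtosis-based variance correction itself is not reproduced here; r and n below are hypothetical):

        import numpy as np
        from scipy import stats

        def fisher_ci(r, n, conf=0.95):
            """Default confidence interval for a Pearson r, assuming bivariate normality."""
            z = np.arctanh(r)                     # Fisher z' transform
            se = 1.0 / np.sqrt(n - 3)             # asymptotic standard error
            zcrit = stats.norm.ppf(0.5 + conf / 2)
            return np.tanh((z - zcrit * se, z + zcrit * se))

        print(fisher_ci(r=0.45, n=50))            # roughly (0.20, 0.65)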

  6. Accuracy of lung nodule density on HRCT: analysis by PSF-based image simulation.

    PubMed

    Ohno, Ken; Ohkubo, Masaki; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2012-11-08

    A computed tomography (CT) image simulation technique based on the point spread function (PSF) was applied to analyze the accuracy of CT-based clinical evaluations of lung nodule density. The PSF of the CT system was measured and used to perform the lung nodule image simulation. Then, the simulated image was resampled at intervals equal to the pixel size and the slice interval found in clinical high-resolution CT (HRCT) images. On those images, the nodule density was measured by placing a region of interest (ROI) commonly used for routine clinical practice, and comparing the measured value with the true value (a known density of object function used in the image simulation). It was quantitatively determined that the measured nodule density depended on the nodule diameter and the image reconstruction parameters (kernel and slice thickness). In addition, the measured density fluctuated, depending on the offset between the nodule center and the image voxel center. This fluctuation was reduced by decreasing the slice interval (i.e., with the use of overlapping reconstruction), leading to a stable density evaluation. Our proposed method of PSF-based image simulation accompanied with resampling enables a quantitative analysis of the accuracy of CT-based evaluations of lung nodule density. These results could potentially reveal clinical misreadings in diagnosis, and lead to more accurate and precise density evaluations. They would also be of value for determining the optimum scan and reconstruction parameters, such as image reconstruction kernels and slice thicknesses/intervals.
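
    A 1-D toy version of the idea, for orientation only (the paper uses the measured 3-D PSF of the scanner; here a Gaussian PSF and all dimensions are assumed): blur a known nodule profile, resample at a clinical slice interval, and compare the ROI value with the true density.

        import numpy as np

        dx = 0.1                                  # fine simulation grid spacing, mm
        step = 10                                 # resample every 1.0 mm (slice interval)
        x = np.arange(-20.0, 20.0, dx)
        true_density = 100.0                      # hypothetical nodule density value
        nodule = np.where(np.abs(x) < 1.5, true_density, 0.0)   # 3 mm nodule profile

        sigma = 2.0 / 2.355                       # assumed 2 mm FWHM Gaussian PSF
        psf = np.exp(-x**2 / (2.0 * sigma**2))
        psf /= psf.sum()

        image = np.convolve(nodule, psf, mode="same")   # blurred profile
        sampled, xs = image[::step], x[::step]          # clinical sampling
        roi = sampled[np.abs(xs) < 0.75]                # small central ROI
        print(f"true={true_density:.1f}, measured={roi.mean():.1f}")  # measured < true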

  7. Two-stage coal liquefaction process

    DOEpatents

    Skinner, Ronald W.; Tao, John C.; Znaimer, Samuel

    1985-01-01

    An improved SRC-I two-stage coal liquefaction process which improves the product slate is provided. Substantially all of the net yield of 650°-850° F heavy distillate from the LC-Finer is combined with the SRC process solvent, substantially all of the net 400°-650° F middle distillate from the SRC section is combined with the hydrocracker solvent in the LC-Finer, and the initial boiling point of the SRC process solvent is increased sufficiently high to produce a net yield of 650°-850° F heavy distillate of zero for the two-stage liquefaction process.

  8. Two-stage plasma gun based on a gas discharge with a self-heating hollow emitter.

    PubMed

    Vizir, A V; Tyunkov, A V; Shandrikov, M V; Oks, E M

    2010-02-01

    The paper presents the results of tests of a new compact two-stage bulk gas plasma gun. The plasma gun is based on a nonself-sustained gas discharge with an electron emitter based on a discharge with a self-heating hollow cathode. The operating characteristics of the plasma gun are investigated. The discharge system makes it possible to produce uniform and stable gas plasma in the dc mode with a plasma density up to 3×10^9 cm^-3 at an operating gas pressure in the vacuum chamber of less than 2×10^-2 Pa. The device features high power efficiency, design simplicity, and compactness.

  9. Simulation of the Two Stages Stretch-Blow Molding Process: Infrared Heating and Blowing Modeling

    NASA Astrophysics Data System (ADS)

    Bordival, M.; Schmidt, F. M.; Le Maoult, Y.; Velay, V.

    2007-05-01

    In the Stretch-Blow Molding (SBM) process, the temperature distribution of the reheated preform drastically affects the blowing kinematics, the bottle thickness distribution, and the orientation induced by stretching. Consequently, the mechanical and optical properties of the final bottle are closely related to the heating conditions. In order to predict the 3D temperature distribution of a rotating preform, numerical software using the control-volume method has been developed. Since PET behaves like a semi-transparent medium, the radiative flux absorption was computed using the Beer-Lambert law. In a second step, 2D axisymmetric simulations of the SBM process were developed using the finite element package ABAQUS®. Temperature profiles through the preform wall thickness and along its length were computed and applied as the initial condition. The air pressure inside the preform was not treated as an input variable but was computed automatically using a thermodynamic model. The heat transfer coefficient between the mold and the polymer was also measured. Finally, the G'sell law was used to model the PET behavior. For both the heating and blowing stage simulations, good agreement has been observed with experimental measurements. This work is part of the European project "APT_PACK" (Advanced knowledge of Polymer deformation for Tomorrow's PACKaging).
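
    The radiative absorption step mentioned above follows the standard Beer-Lambert form; in generic symbols (coefficient values not taken from the paper),

        I(z) = I_0 \, e^{-\kappa z}, \qquad q_{\mathrm{abs}}(z) = -\frac{dI}{dz} = \kappa I_0 \, e^{-\kappa z}

    where I_0 is the incident infrared intensity at the preform surface, κ the spectral absorption coefficient of PET, and q_abs the locally absorbed power density feeding the heating simulation.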

  10. Technologies of stage magic: Simulation and dissimulation.

    PubMed

    Smith, Wally

    2015-06-01

    The craft of stage magic is presented in this article as a site to study the interplay of people and technology. The focus is on conjuring in the 19th and early 20th centuries, a time when magicians eagerly appropriated new optical, mechanical and electrical technologies into their acts. Also at this time, a modern style of conjuring emerged, characterized by minimal apparatus and a natural manner of performance. Applying Lucy Suchman's perspective of human-machine reconfigurations, conjuring in this modern style is interpreted as an early form of simulation, coupled with techniques of dissimulation. Magicians simulated the presence of supernatural agency for public audiences, while dissimulating the underlying methods and mechanisms. Dissimulation implies that the secret inner workings of apparatus were not simply concealed but were rendered absent. This, in turn, obscured the production of supernatural effects in the translation of agencies within an assembly of performers, assistants, apparatus, apparatus-builders, and so on. How this was achieved is investigated through an analysis of key instructional texts written by and for magicians working in the modern style. Techniques of dissimulation are identified in the design of apparatus for three stage illusions, and in the new naturalness of the performer's manner. To explore the significance of this picture of stage magic, and its reliance on techniques of dissimulation, a parallel is drawn between conjuring and recent performances of computerized life forms, especially those of social robotics. The paper concludes by considering what is revealed about the production of agency in stage magic's peculiar human-machine assemblies.

  11. Integrating Growth Stage Deficit Irrigation into a Process Based Crop Model

    NASA Technical Reports Server (NTRS)

    Lopez, Jose R.; Winter, Jonathan M.; Elliott, Joshua; Ruane, Alex C.; Porter, Cheryl; Hoogenboom, Gerrit

    2017-01-01

    Current rates of agricultural water use are unsustainable in many regions, creating an urgent need to identify improved irrigation strategies for water limited areas. Crop models can be used to quantify plant water requirements, predict the impact of water shortages on yield, and calculate water productivity (WP) to link water availability and crop yields for economic analyses. Many simulations of crop growth and development, especially in regional and global assessments, rely on automatic irrigation algorithms to estimate irrigation dates and amounts. However, these algorithms are not well suited for water limited regions because they have simplistic irrigation rules, such as a single soil-moisture based threshold, and assume unlimited water. To address this constraint, a new modeling framework to simulate agricultural production in water limited areas was developed. The framework consists of a new automatic irrigation algorithm for the simulation of growth stage based deficit irrigation under limited seasonal water availability; and optimization of growth stage specific parameters. The new automatic irrigation algorithm was used to simulate maize and soybean in Gainesville, Florida, and first used to evaluate the sensitivity of maize and soybean simulations to irrigation at different growth stages and then to test the hypothesis that water productivity calculated using simplistic irrigation rules underestimates WP. In the first experiment, the effect of irrigating at specific growth stages on yield and irrigation water use efficiency (IWUE) in maize and soybean was evaluated. In the reproductive stages, IWUE tended to be higher than in the vegetative stages (e.g. IWUE was 18% higher than the well watered treatment when irrigating only during R3 in soybean), and when rainfall events were less frequent. In the second experiment, water productivity (WP) was significantly greater with optimized irrigation schedules compared to non-optimized irrigation schedules in
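
    A schematic of the growth-stage deficit irrigation trigger described above (not the study's actual algorithm or crop-model code; stage names, thresholds and the 20 mm application depth are hypothetical): each stage gets its own soil-depletion threshold, and irrigation stops when the seasonal allocation is exhausted.

        def irrigate_today(stage, soil_depletion, water_left,
                           thresholds={"vegetative": 0.70, "reproductive": 0.45},
                           depth=20.0):
            """Return mm of water to apply today under stage-specific deficit rules."""
            if water_left <= 0.0:
                return 0.0                             # seasonal allocation exhausted
            if soil_depletion > thresholds[stage]:     # drier than the stage threshold
                return min(depth, water_left)
            return 0.0

        # Hypothetical day: the stressed reproductive stage is watered sooner.
        print(irrigate_today("vegetative", 0.60, water_left=120.0))    # 0.0
        print(irrigate_today("reproductive", 0.60, water_left=120.0))  # 20.0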

  12. Time interval measurement device based on surface acoustic wave filter excitation, providing 1 ps precision and stability.

    PubMed

    Panek, Petr; Prochazka, Ivan

    2007-09-01

    This article deals with a time interval measurement device based on a surface acoustic wave (SAW) filter used as a time interpolator. The operating principle rests on the fact that a transversal SAW filter excited by a short pulse generates a finite signal with highly suppressed spectra outside a narrow frequency band. If the responses to two excitations are sampled at clock ticks, they can be precisely reconstructed from a finite number of samples and then compared so as to determine the time interval between the two excitations. We have designed and constructed a two-channel time interval measurement device which allows independent timing of two events and evaluation of the time interval between them. The device has been constructed using commercially available components. The experimental results proved the concept. We have assessed a single-shot time interval measurement precision of 1.3 ps rms, which corresponds to a time-of-arrival precision of 0.9 ps rms in each channel. The temperature drift of the measured time interval is lower than 0.5 ps/K, and the long-term stability is better than +/-0.2 ps/h. These are, to our knowledge, the best values reported for a time interval measurement device. The results are in good agreement with the error budget based on the theoretical analysis.
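
    An illustrative stand-in for the interpolation principle (not the authors' exact reconstruction algorithm; signal parameters are invented): a band-limited response sampled at clock ticks is compared with a delayed copy, and the delay is recovered to a small fraction of the clock period, here by parabolic refinement of the cross-correlation peak.

        import numpy as np

        n = np.arange(512)
        env = np.exp(-0.5 * ((n - 256) * 0.05) ** 2)      # pulse envelope
        f0 = 0.2                                          # carrier freq, cycles/sample

        def response(delay):
            """Narrowband response sampled at clock ticks, delayed by 'delay'."""
            return env * np.cos(2 * np.pi * f0 * (n - delay))

        a, b = response(0.0), response(0.123)             # true offset: 0.123 clock periods
        xc = np.correlate(b, a, mode="full")
        k = int(np.argmax(xc))
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)     # parabolic peak refinement
        print((k - (len(a) - 1)) + frac)                  # ~0.11, near the true 0.123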

  13. Two-stage acceleration of protons from relativistic laser-solid interaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Jinlu; Sheng, Z. M.; Zheng, J.

    2012-12-21

    A two-stage proton acceleration scheme using present-day intense lasers and a unique target design is proposed. The target system consists of a hollow cylinder, inside which is a hollow cone, which is followed by the main target with a flat front and dish-like flared rear surface. At the center of the latter is a tapered proton layer, which is surrounded by outer proton layers at an angle to it. In the first acceleration stage, protons in both layers are accelerated by target normal sheath acceleration. The center-layer protons are accelerated forward along the axis and the side protons are accelerated and focused towards them. As a result, the side-layer protons radially compress as well as axially further accelerate the front part of the accelerating center-layer protons in the second stage, which are also radially confined and guided by the field of the fast electrons surrounding them. Two-dimensional particle-in-cell simulation shows that a 79 fs 8.5×10^20 W/cm^2 laser pulse can produce a proton bunch with ~267 MeV maximum energy and ~9.5% energy spread, which may find many applications, including cancer therapy.

  14. Two-stage combustion for reducing pollutant emissions from gas turbine combustors

    NASA Technical Reports Server (NTRS)

    Clayton, R. M.; Lewis, D. H.

    1981-01-01

    Combustion and emission results are presented for a premix combustor fueled with admixtures of JP5 with neat H2 and of JP5 with simulated partial-oxidation product gas. The combustor was operated with inlet-air state conditions typical of cruise power for high performance aviation engines. Ultralow NOx, CO and HC emissions and extended lean burning limits were achieved simultaneously. Laboratory scale studies of the non-catalyzed rich-burning characteristics of several paraffin-series hydrocarbon fuels and of JP5 showed sooting limits at equivalence ratios of about 2.0 and that in order to achieve very rich sootless burning it is necessary to premix the reactants thoroughly and to use high levels of air preheat. The application of two-stage combustion for the reduction of fuel NOx was reviewed. An experimental combustor designed and constructed for two-stage combustion experiments is described.

  15. Estimation of TOA based MUSIC algorithm and cross correlation algorithm of appropriate interval

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Liu, Jun; Zhou, Yineng; Huang, Jiyan

    2017-03-01

    Localization of a mobile station (MS) has gained considerable attention due to its wide applications in military, environmental, health and commercial systems. Phase angle and encoded data of the MSK system model are two critical parameters in the time-of-arrival (TOA) localization technique; nevertheless, precise values of the phase angle and encoded data are not easy to achieve in general. To reflect the actual situation, we consider the condition that the phase angle and encoded data are unknown. In this paper, a novel TOA localization method, which combines the MUSIC algorithm and the cross-correlation algorithm over an appropriate interval, is proposed. Simulations show that the proposed method performs better than the MUSIC algorithm and the cross-correlation algorithm applied over the whole interval.
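
    A toy illustration of the "appropriate interval" idea (the paper combines MUSIC with cross-correlation; only the interval-restricted correlation search is sketched here, with hypothetical signals and a hypothetical prior window):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 400
        s = rng.standard_normal(n)                    # reference waveform
        true_toa = 57
        r = np.roll(s, true_toa) + 0.5 * rng.standard_normal(n)   # noisy delayed copy

        xc = np.array([np.dot(r, np.roll(s, L)) for L in range(n)])

        coarse = int(np.argmax(xc))                   # search over the whole interval
        lo, hi = 50, 70                               # prior "appropriate interval"
        fine = lo + int(np.argmax(xc[lo:hi]))         # search restricted to the interval
        print(coarse, fine)    # both 57 here; the restricted search is robust to
                               # spurious correlation peaks outside [lo, hi)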

  16. Montessori-based activities among persons with late-stage dementia: Evaluation of mental and behavioral health outcomes.

    PubMed

    Wilks, Scott E; Boyd, P August; Bates, Samantha M; Cain, Daphne S; Geiger, Jennifer R

    2017-01-01

    Objectives Literature regarding Montessori-based activities with older adults is fairly common for the early stages of dementia. Conversely, research on such activities with individuals experiencing late-stage dementia is limited because of logistical difficulties in sampling and data collection. Given the need to understand the risks and benefits of treatments for individuals with late-stage dementia, specifically regarding their mental and behavioral health, this study sought to evaluate the effects of a Montessori-based activity program implemented in a long-term care facility. Method Utilizing an interrupted time series design, trained staff completed observation-based measures for 43 residents with late-stage dementia at three intervals over six months. Empirical measures assessed mental health (anxiety, psychological well-being, quality of life) and behavioral health (problem behaviors, social engagement, capacity for activities of daily living). Results Group differences were observed via repeated measures ANOVA and paired-samples t-tests. The aggregate longitudinal results, from baseline to the final data interval, for the psychological and behavioral health measures were as follows: problem behaviors diminished though not significantly; social engagement decreased significantly; capacities for activities of daily living decreased significantly; quality of life increased slightly but not significantly; anxiety decreased slightly but not significantly; and psychological well-being significantly decreased. Conclusion Improvements observed for quality of life and problem behaviors may yield promise for Montessori-based activities and related health care practices. The rapid physiological and cognitive deterioration from late-stage dementia should be considered when interpreting these results.

  17. Development, current applications and future roles of biorelevant two-stage in vitro testing in drug development.

    PubMed

    Fiolka, Tom; Dressman, Jennifer

    2018-03-01

    Various types of two-stage in vitro testing have been used in a number of experimental settings. In addition to its application in quality control and for regulatory purposes, two-stage in vitro testing has also been shown to be a valuable technique to evaluate the supersaturation and precipitation behavior of poorly soluble drugs during drug development. The so-called 'transfer model', which is an example of two-stage testing, has provided valuable information about the in vivo performance of poorly soluble, weakly basic drugs by simulating the gastrointestinal drug transit from the stomach into the small intestine with a peristaltic pump. The evolution of the transfer model has resulted in various modifications of the experimental model set-up. Concomitantly, various research groups have developed simplified approaches to two-stage testing to investigate the supersaturation and precipitation behavior of weakly basic drugs without the necessity of using a transfer pump. Given the diversity among the various two-stage test methods available today, a more harmonized approach needs to be taken to optimize the use of two-stage testing at different stages of drug development. © 2018 Royal Pharmaceutical Society.

  18. Coordinates and intervals in graph-based reference genomes.

    PubMed

    Rand, Knut D; Grytten, Ivar; Nederbragt, Alexander J; Storvik, Geir O; Glad, Ingrid K; Sandve, Geir K

    2017-05-18

    It has been proposed that future reference genomes should be graph structures in order to better represent the sequence diversity present in a species. However, there is currently no standard method to represent genomic intervals, such as the positions of genes or transcription factor binding sites, on graph-based reference genomes. We formalize offset-based coordinate systems on graph-based reference genomes and introduce methods for representing intervals on these reference structures. We show the advantage of our methods by representing genes on a graph-based representation of the newest assembly of the human genome (GRCh38) and its alternative loci for regions that are highly variable. More complex reference genomes, containing alternative loci, require methods to represent genomic data on these structures. Our proposed notation for genomic intervals makes it possible to fully utilize the alternative loci of the GRCh38 assembly and potential future graph-based reference genomes. We have made a Python package for representing such intervals on offset-based coordinate systems, available at https://github.com/uio-cels/offsetbasedgraph . An interactive web-tool using this Python package to visualize genes on a graph created from GRCh38 is available at https://github.com/uio-cels/genomicgraphcoords .
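
    A minimal sketch of an offset-based coordinate system on a sequence graph, for orientation (the class names and layout below are hypothetical and do not mirror the offsetbasedgraph package API):

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Position:
            node_id: str
            offset: int          # 0-based offset into the node's sequence

        @dataclass(frozen=True)
        class Interval:
            start: Position
            end: Position
            path: tuple          # ordered node ids the interval traverses

        def interval_length(iv, node_len):
            """Length of an interval given each node's sequence length."""
            if len(iv.path) == 1:
                return iv.end.offset - iv.start.offset
            inner = sum(node_len[n] for n in iv.path[1:-1])
            return (node_len[iv.path[0]] - iv.start.offset) + inner + iv.end.offset

        node_len = {"chr1_main": 1000, "alt_locus": 350}
        gene = Interval(Position("chr1_main", 940), Position("alt_locus", 120),
                        ("chr1_main", "alt_locus"))
        print(interval_length(gene, node_len))   # 60 + 120 = 180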

  19. A Bayesian-frequentist two-stage single-arm phase II clinical trial design.

    PubMed

    Dong, Gaohong; Shih, Weichung Joe; Moore, Dirk; Quan, Hui; Marcella, Stephen

    2012-08-30

    It is well-known that both frequentist and Bayesian clinical trial designs have their own advantages and disadvantages. To have better properties inherited from these two types of designs, we developed a Bayesian-frequentist two-stage single-arm phase II clinical trial design. This design allows both early acceptance and rejection of the null hypothesis (H0). The measures of the design properties (for example, probability of early trial termination, expected sample size, etc.) under both frequentist and Bayesian settings are derived. Moreover, under the Bayesian setting, the upper and lower boundaries are determined with the predictive probability of a successful trial outcome. Given a beta prior and a sample size for stage I, based on the marginal distribution of the responses at stage I, we derived Bayesian Type I and Type II error rates. By controlling both frequentist and Bayesian error rates, the Bayesian-frequentist two-stage design has special features compared with other two-stage designs. Copyright © 2012 John Wiley & Sons, Ltd.
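
    A beta-binomial sketch of the predictive-probability idea used to set stage-I boundaries, for orientation only (the paper's actual design also controls the frequentist error rates; the prior, sample sizes, and cutoffs below are hypothetical):

        from scipy import stats

        def predictive_prob(x1, n1, n2, p0=0.2, a=1.0, b=1.0, cut=0.95):
            """P(trial succeeds at the end | x1 responses in n1 stage-I patients)."""
            a1, b1 = a + x1, b + n1 - x1                  # posterior after stage I
            pp = 0.0
            for x2 in range(n2 + 1):                      # future stage-II responses
                w = stats.betabinom.pmf(x2, n2, a1, b1)   # predictive weight
                post_gt_p0 = 1.0 - stats.beta.cdf(p0, a1 + x2, b1 + n2 - x2)
                pp += w * (post_gt_p0 >= cut)
            return pp

        for x1 in (2, 4, 6):                              # stage-I response counts
            print(x1, round(predictive_prob(x1, n1=10, n2=19), 3))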

  20. A two-stage series diode for intense large-area moderate pulsed X rays production.

    PubMed

    Lai, Dingguo; Qiu, Mengtong; Xu, Qifu; Su, Zhaofeng; Li, Mo; Ren, Shuqing; Huang, Zhongliang

    2017-01-01

    This paper presents a method for producing moderate pulsed X rays with a series diode that can be driven by a high-voltage pulse to generate intense, large-area, uniform sub-100-keV X rays. A two-stage series diode was designed for the Flash-II accelerator and experimentally investigated. A compact support system for a floating converter/cathode was invented; the extra cathode is floating electrically and mechanically, by withdrawing three support pins several milliseconds before a diode electrical pulse. A double-ring cathode was developed to improve the surface electric field and emission stability. The cathode radii and diode separation gap were optimized to enhance the uniformity of the X rays and the coincidence of the two diode voltages, based on simulation and theoretical calculation. The experimental results show that the two-stage series diode can work stably at 700 kV and 300 kA; the average energy of the X rays is 86 keV, and the dose is about 296 rad(Si) over a 615 cm^2 area with 2:1 uniformity at 5 cm from the last converter. Compared with the single diode, the average X-ray energy is reduced from 132 keV to 88 keV, and the proportion of sub-100-keV photons increases from 39% to 69%.

  1. Taurus II Stage Test Simulations: Using Large-Scale CFD Simulations to Provide Critical Insight into Plume Induced Environments During Design

    NASA Technical Reports Server (NTRS)

    Struzenberg, L. L.; West, J. S.

    2011-01-01

    This paper describes the use of targeted Loci/CHEM CFD simulations to evaluate the effects of a dual-engine first-stage hot-fire test on an evolving integrated launch pad/test article design. This effort was undertaken as a part of the NESC Independent Assessment of the Taurus II Stage Test Series. The underlying conceptual model included development of a series of computational models and simulations to analyze the plume-induced environments on the pad, facility structures and test article. A pathfinder simulation was first developed, capable of providing quick-turnaround evaluation of plume impingement pressures on the flame deflector. Results from this simulation were available in time to provide data for an ongoing structural assessment of the deflector. The resulting recommendation was available in a timely manner and was incorporated into the construction schedule for the new launch stand under construction at Wallops Flight Facility. A series of Reynolds-Averaged Navier-Stokes (RANS) quasi-steady simulations representative of various key elements of the test profile was performed to identify potential concerns with the test configuration and test profile. As required, unsteady Hybrid-RANS/LES simulations were performed to provide additional insight into critical aspects of the test sequence. Modifications to the test-specific hardware and facility structures' thermal protection, as well as modifications to the planned hot-fire test profile, were implemented based on these simulation results.

  2. High-End Concept Based on Hypersonic Two-Stage Rocket and Electro-Magnetic Railgun to Launch Micro-Satellites Into Low-Earth Orbit

    NASA Astrophysics Data System (ADS)

    Bozic, O.; Longo, J. M.; Giese, P.; Behren, J.

    2005-02-01

    The electromagnetic railgun technology appears to be an interesting alternative for launching small payloads into Low Earth Orbit (LEO), as it may lower launch costs. A high-end solution, based upon present state-of-the-art technology, has been investigated to derive the technical boundary conditions for the application of such a new system. This paper presents the main concept and the design aspects of such a propelled projectile, with special emphasis on flight mechanics, aero-/thermodynamics, materials and propulsion characteristics. Launch angles and trajectory optimisation analyses are carried out by means of three-degree-of-freedom (3DOF) simulations. The aerodynamic form of the projectile is optimised for minimum drag and low heat loads. The surface temperature distribution for critical zones is calculated with the DLR-developed Navier-Stokes codes TAU and HOTSOSE, whereas the engineering tool HF3T is used for time-dependent calculations of heat loads and temperatures on the projectile surface and inner structures. Furthermore, competing propulsion systems are considered for the rocket engines of both stages. The structural mass is analysed mostly on the basis of carbon-fibre-reinforced materials as well as classical aerospace metallic materials. Finally, this paper gives a critical overview of the technical feasibility and cost of small rockets for such missions. Key words: micro-satellite, two-stage-rocket, railgun, rocket-engines, aero/thermodynamic, mass optimization

  3. Distributed Simulation as a modelling tool for the development of a simulation-based training programme for cardiovascular specialties.

    PubMed

    Kelay, Tanika; Chan, Kah Leong; Ako, Emmanuel; Yasin, Mohammad; Costopoulos, Charis; Gold, Matthew; Kneebone, Roger K; Malik, Iqbal S; Bello, Fernando

    2017-01-01

    Distributed Simulation is the concept of portable, high-fidelity immersive simulation. Here, it is used for the development of a simulation-based training programme for cardiovascular specialities. We present an evidence base for how accessible, portable and self-contained simulated environments can be effectively utilised for the modelling, development and testing of a complex training framework and assessment methodology. Iterative user feedback through mixed-methods evaluation techniques resulted in the implementation of the training programme. Four phases were involved in the development of our immersive simulation-based training programme: (1) initial conceptual stage for mapping structural criteria and parameters of the simulation training framework and scenario development (n = 16), (2) training facility design using Distributed Simulation, (3) test cases with clinicians (n = 8) and collaborative design, where evaluation and user feedback involved a mixed-methods approach featuring (a) quantitative surveys to evaluate the realism and perceived educational relevance of the simulation format and framework for training and (b) qualitative semi-structured interviews to capture detailed feedback including changes and scope for development. Refinements were made iteratively to the simulation framework based on user feedback, resulting in (4) transition towards implementation of the simulation training framework, involving consistent quantitative evaluation techniques for clinicians (n = 62). For comparative purposes, clinicians' initial quantitative mean evaluation scores for realism of the simulation training framework, realism of the training facility and relevance for training (n = 8) are presented longitudinally, alongside feedback throughout the development stages from concept to delivery, including the implementation stage (n = 62). Initially, mean evaluation scores fluctuated from low to average, rising incrementally. This corresponded

  4. Two stage gear tooth dynamics program

    NASA Technical Reports Server (NTRS)

    Boyd, Linda S.

    1989-01-01

    The epicyclic gear dynamics program was expanded to add the option of evaluating the tooth pair dynamics for two epicyclic gear stages with peripheral components. This was a practical extension to the program, as multiple gear stages are often used for speed reduction, space, weight, and/or auxiliary units. The option was developed for either stage to be a basic planetary, star, single external-external mesh, or single external-internal mesh. The two-stage system allows for modeling of the peripherals with an input mass and shaft, an output mass and shaft, and a connecting shaft. Execution of the initial test case indicated an instability in the solution, with the tooth pair loads growing to excessive magnitudes. A procedure to trace the instability is recommended, as well as a method of reducing the program's computation time by reducing the number of boundary condition iterations.

  5. Computer simulations in the high school: students' cognitive stages, science process skills and academic achievement in microbiology

    NASA Astrophysics Data System (ADS)

    Huppert, J.; Michal Lomask, S.; Lazarowitz, R.

    2002-08-01

    Computer-assisted learning, including simulated experiments, has great potential to address the problem-solving process, which is a complex activity. It requires a highly structured approach in order to understand the use of simulations as an instructional device. This study is based on a computer simulation program, 'The Growth Curve of Microorganisms', which required tenth-grade biology students to use problem-solving skills whilst simultaneously manipulating three independent variables in one simulated experiment. The aims were to investigate the computer simulation's impact on students' academic achievement and on their mastery of science process skills in relation to their cognitive stages. The results indicate that the concrete and transition operational students in the experimental group achieved significantly higher academic achievement than their counterparts in the control group. The higher the cognitive operational stage, the higher the students' achievement, except in the control group, where students in the concrete and transition operational stages did not differ. Girls achieved equally with boys in the experimental group. Students' academic achievement may indicate the potential impact a computer simulation program can have, enabling students with low reasoning abilities to cope successfully with learning concepts and principles in science that require high cognitive skills.

  6. Performance of two-stage fan with larger dampers on first-stage rotor

    NASA Technical Reports Server (NTRS)

    Urasek, D. C.; Cunnan, W. S.; Stevans, W.

    1979-01-01

    The performance of a two-stage, high-pressure-ratio fan having large, part-span vibration dampers on the first-stage rotor is presented and compared with an aerodynamically identical fan design having smaller dampers. Comparisons of the data for the two damper configurations show that with increased damper size: (1) very high losses in the damper region reduced the overall efficiency of the first-stage rotor by approximately 3 points, (2) the overall performance of each blade row downstream of the damper was not significantly altered, although appreciable differences in the radial distributions of various performance parameters were noted, and (3) the lower performance of the first-stage rotor decreased the overall fan efficiency by more than 1 percentage point.

  7. Constraint-based Attribute and Interval Planning

    NASA Technical Reports Server (NTRS)

    Jonsson, Ari; Frank, Jeremy

    2013-01-01

    In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how the plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.

  8. Two-stage free electron laser research

    NASA Astrophysics Data System (ADS)

    Segall, S. B.

    1984-10-01

    KMS Fusion, Inc. began studying the feasibility of two-stage free electron lasers for the Office of Naval Research in June, 1980. At that time, the two-stage FEL was only a concept that had been proposed by Luis Elias. The range of parameters over which such a laser could be successfully operated, attainable power output, and constraints on laser operation were not known. The primary reason for supporting this research at that time was that it had the potential for producing short-wavelength radiation using a relatively low voltage electron beam. One advantage of a low-voltage two-stage FEL would be that shielding requirements would be greatly reduced compared with single-stage short-wavelength FEL's. If the electron energy were kept below about 10 MeV, X-rays, generated by electrons striking the beam line wall, would not excite neutron resonance in atomic nuclei. These resonances cause the emission of neutrons with subsequent induced radioactivity. Therefore, above about 10 MeV, a meter or more of concrete shielding is required for the system, whereas below 10 MeV, a few millimeters of lead would be adequate.

  9. Ratio-based lengths of intervals to improve fuzzy time series forecasting.

    PubMed

    Huarng, Kunhuang; Yu, Tiffany Hui-Kuang

    2006-04-01

    The objective of this study is to explore ways of determining the useful lengths of intervals in fuzzy time series. It is suggested that ratios, instead of equal lengths of intervals, can more properly represent the intervals among observations. Ratio-based lengths of intervals are, therefore, proposed to improve fuzzy time series forecasting. Algebraic growth data, such as enrollments and the stock index, and exponential growth data, such as inventory demand, are chosen as the forecasting targets, before forecasting based on the various lengths of intervals is performed. Furthermore, sensitivity analyses are also carried out for various percentiles. The ratio-based lengths of intervals are found to outperform the effective lengths of intervals, as well as the arbitrary ones in regard to the different statistical measures. The empirical analysis suggests that the ratio-based lengths of intervals can also be used to improve fuzzy time series forecasting.
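
    A minimal sketch of ratio-based interval boundaries for a fuzzy time series, for orientation only (the growth rule and parameter values are illustrative, not the paper's calibrated method): each interval is a fixed percentage longer than the previous one, so sparse high-value regions get wider intervals than dense low-value regions.

        def ratio_based_intervals(lo, hi, ratio=1.1, first_len=None, n_est=20):
            """Partition [lo, hi] into intervals whose lengths grow geometrically."""
            if first_len is None:   # size the first interval so ~n_est intervals fit
                first_len = (hi - lo) * (ratio - 1) / (ratio**n_est - 1)
            bounds, x, step = [lo], lo, first_len
            while x < hi:
                x += step
                bounds.append(min(x, hi))
                step *= ratio       # each interval grows by the chosen ratio
            return list(zip(bounds[:-1], bounds[1:]))

        # Hypothetical enrollment-like universe of discourse.
        for lo_i, hi_i in ratio_based_intervals(13000, 20000, ratio=1.2)[:4]:
            print(f"[{lo_i:.0f}, {hi_i:.0f})")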

  10. 10 m/25 Gbps LiFi transmission system based on a two-stage injection-locked 680 nm VCSEL transmitter.

    PubMed

    Lu, Hai-Han; Li, Chung-Yi; Chu, Chien-An; Lu, Ting-Chien; Chen, Bo-Rui; Wu, Chang-Jen; Lin, Dai-Hua

    2015-10-01

    A 10 m/25 Gbps light-based WiFi (LiFi) transmission system based on a two-stage injection-locked 680 nm vertical-cavity surface-emitting laser (VCSEL) transmitter is proposed. A LiFi transmission system with a data rate of 25 Gbps is experimentally demonstrated over a 10 m free-space link. To the best of our knowledge, it is the first time a two-stage injection-locked 680 nm VCSEL transmitter has been employed in a 10 m/25 Gbps LiFi transmission system. Impressive bit error rate performance and a clear eye diagram are achieved in the proposed systems. Such a 10 m/25 Gbps LiFi transmission system provides the advantage of a communication link with higher data rates that could accelerate the deployment of visible laser light communication.

  11. Simulation-based modeling of building complexes construction management

    NASA Astrophysics Data System (ADS)

    Shepelev, Aleksandr; Severova, Galina; Potashova, Irina

    2018-03-01

    The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.

  12. Interval data clustering using self-organizing maps based on adaptive Mahalanobis distances.

    PubMed

    Hajjar, Chantal; Hamdan, Hani

    2013-10-01

    The self-organizing map is a kind of artificial neural network used to map high dimensional data into a low dimensional space. This paper presents a self-organizing map for interval-valued data based on adaptive Mahalanobis distances in order to do clustering of interval data with topology preservation. Two methods based on the batch training algorithm for the self-organizing maps are proposed. The first method uses a common Mahalanobis distance for all clusters. In the second method, the algorithm starts with a common Mahalanobis distance per cluster and then switches to use a different distance per cluster. This process allows a more adapted clustering for the given data set. The performances of the proposed methods are compared and discussed using artificial and real interval data sets. Copyright © 2013 Elsevier Ltd. All rights reserved.
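
    A generic illustration of a Mahalanobis-type distance between interval-valued observations, each variable represented by its (lower, upper) bounds (this is not the authors' adaptive batch algorithm; the cluster data and metric below are hypothetical placeholders for the per-cluster covariance):

        import numpy as np

        def interval_mahalanobis(x, y, cov_inv):
            """x, y: arrays of shape (p, 2) holding [lower, upper] bounds per variable."""
            d = (x - y).ravel()                  # treat the 2p bounds as coordinates
            return float(np.sqrt(d @ cov_inv @ d))

        # Hypothetical cluster of interval observations (4 coordinates = 2 variables).
        rng = np.random.default_rng(2)
        cluster = rng.standard_normal((100, 4))
        cov_inv = np.linalg.inv(np.cov(cluster, rowvar=False))   # cluster-specific metric

        x = np.array([[1.0, 2.0], [0.5, 1.5]])   # two interval-valued variables
        y = np.array([[1.2, 2.5], [0.4, 1.0]])
        print(interval_mahalanobis(x, y, cov_inv))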

  13. Nonlinear PP and PS joint inversion based on the exact Zoeppritz equations: a two-stage procedure

    NASA Astrophysics Data System (ADS)

    Zhi, Lixia; Chen, Shuangquan; Song, Baoshan; Li, Xiang-yang

    2018-04-01

    S-velocity and density are very important parameters in distinguishing lithology and estimating other petrophysical properties. A reliable estimate of S-velocity and density is very difficult to obtain, even from long-offset gather data. Joint inversion of PP and PS data provides a promising strategy for stabilizing and improving the results of inversion in estimating elastic parameters and density. For 2D or 3D inversion, the trace-by-trace strategy is still the most widely used method, because its parallelism makes it highly efficient, although its results often lack spatial coherence. This paper describes a two-stage inversion method for nonlinear PP and PS joint inversion based on the exact Zoeppritz equations. Our proposed method has several advantages: (1) thanks to the exact Zoeppritz equations, the joint inversion method is applicable to wide-angle amplitude-versus-angle inversion; (2) the use of both P- and S-wave information further enhances the stability and accuracy of parameter estimation, especially for S-velocity and density; (3) the two-stage inversion procedure proposed in this paper achieves a good compromise between efficiency and precision. On the one hand, the trace-by-trace strategy used in the first stage can be processed in parallel, so it has high computational efficiency. On the other hand, to deal with the indistinctness of and undesired disturbances to the inversion results obtained from the first stage, we apply the second stage: total variation (TV) regularization. By enforcing spatial and temporal constraints, the TV regularization stage deblurs the inversion results and leads to parameter estimation with greater precision. Notably, the computational cost of the TV regularization stage is negligible compared to the first stage because it is solved using fast split Bregman iterations. Numerical examples using a well log and the Marmousi II model show that the proposed joint

  14. A stage is a stage is a stage: a direct comparison of two scoring systems.

    PubMed

    Dawson, Theo L

    2003-09-01

    L. Kohlberg (1969) argued that his moral stages captured a developmental sequence specific to the moral domain. To explore that contention, the author compared stage assignments obtained with the Standard Issue Scoring System (A. Colby & L. Kohlberg, 1987a, 1987b) and those obtained with a generalized content-independent stage-scoring system called the Hierarchical Complexity Scoring System (T. L. Dawson, 2002a), on 637 moral judgment interviews (participants' ages ranged from 5 to 86 years). The correlation between stage scores produced with the 2 systems was .88. Although standard issue scoring and hierarchical complexity scoring often awarded different scores up to Kohlberg's Moral Stage 2/3, from his Moral Stage 3 onward, scores awarded with the two systems predominantly agreed. The author explores the implications for developmental research.

  15. High GMS score hypospadias: Outcomes after one- and two-stage operations.

    PubMed

    Huang, Jonathan; Rayfield, Lael; Broecker, Bruce; Cerwinka, Wolfgang; Kirsch, Andrew; Scherz, Hal; Smith, Edwin; Elmore, James

    2017-06-01

    Established criteria to assist surgeons in deciding between a one- or two-stage operation for severe hypospadias are lacking. While anatomical features may preclude some surgical options, the decision to approach severe hypospadias in a one- or two-stage fashion is generally based on individual surgeon preference. This decision has been described as a dilemma as outcomes range widely and there is lack of evidence supporting the superiority of one approach over the other. The aim of this study is to determine whether the GMS hypospadias score may provide some guidance in choosing the surgical approach used for correction of severe hypospadias. GMS scores were preoperatively assigned to patients having primary surgery for hypospadias. Those patients having surgery for the most severe hypospadias were selected and formed the study cohort. The records of these patients were reviewed and pertinent data collected. Complications requiring further surgery were assessed and correlated with the GMS score and the surgical technique used for repair (one-stage vs. two-stage). Eighty-seven boys were identified with a GMS score (range 3-12) of 10 or higher. At a mean follow-up of 22 months the overall complication rate for the cohort after final planned surgery was 39%. For intended one-stage procedures (n = 48) an acceptable result was achieved with one surgery for 28 patients (58%), with two surgeries for 14 (29%), and with three to five surgeries for six (13%). For intended two-stage procedures (n = 39) an acceptable result was achieved with two surgeries for 26 patients (67%), three surgeries for eight (21%), and four surgeries for three (8%). Two other patients having two-stage surgery required seven surgeries to achieve an acceptable result. Complication rates are summarized in the Table. The complication rates for GMS 10 patients were similar (27% and 33%, p = 0.28) for one- and two-stage repairs, respectively. GMS 11 patients having a one-stage repair had a

  16. [Two-stage revision of infected total knee arthroplasty using antibiotic-impregnated articulating cement spacer].

    PubMed

    Cai, Pengde; Hu, Yihe; Xie, Lie; Wang, Long

    2012-10-01

    To investigate the effectiveness of two-stage revision of infected total knee arthroplasty (TKA) using an antibiotic-impregnated articulating cement spacer. The clinical data were analyzed from 23 patients (23 knees) undergoing two-stage revision for late infection after primary TKA between January 2007 and December 2009. There were 15 males and 8 females, aged from 43 to 75 years (mean, 65.2 years). Infection occurred at 13-52 months (mean, 17.3 months) after TKA. The time interval between infection and admission ranged from 15 days to 7 months (mean, 2.1 months). The first-stage operation included surgical debridement and removal of all knee prosthesis components and cement, followed by implantation of an antibiotic-impregnated articulating cement spacer. Re-implantation of the prosthesis was performed after 8-10 weeks, once infection was controlled. The American Hospital for Special Surgery (HSS) score and Knee Society Score (KSS) were used to compare knee function between pre- and post-revision. The rates of infection control and complications were analyzed. All incisions healed primarily. Re-infection occurred in 2 cases after two-stage revision, and infection was controlled in the other 21 cases, an infection control rate of 91.3%. The patients were followed up 2-5 years (mean, 3.6 years). The HSS score increased from 60.6 +/- 9.8 at pre-revision to 82.3 +/- 7.4 at last follow-up, and the KSS score increased from 110.7 +/- 9.6 at pre-revision to 134.0 +/- 10.5 at last follow-up, all showing significant differences (P < 0.01). Radiographs showed that the prosthesis had good position with no loosening, fracture, or periprosthetic radiolucency. Two-stage revision using an antibiotic-impregnated articulating cement spacer is an effective method to control infected TKA and to restore function of the affected knee.

  17. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation Models with Mixed Effects.

    PubMed

    Chow, Sy-Miin; Bendezú, Jason J; Cole, Pamela M; Ram, Nilam

    2016-01-01

    Several approaches exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA; Ramsay & Silverman, 2005 ), generalized local linear approximation (GLLA; Boker, Deboeck, Edler, & Peel, 2010 ), and generalized orthogonal local derivative approximation (GOLD; Deboeck, 2010 ). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo (MC) study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children's self-regulation.
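
    A minimal sketch of the stage-1 derivative-estimation step for one of the routines named above (GLLA): a time-delay embedding of the series is projected onto a polynomial loading matrix to yield smoothed estimates of the signal and its derivatives. Embedding dimension, sampling interval, and order are illustrative.

      import math
      import numpy as np

      def glla(x, embed=5, dt=1.0, order=2):
          """Generalized local linear approximation of derivatives (sketch).
          Returns columns [smoothed x, dx/dt, ..., up to `order`]."""
          n = len(x)
          # time-delay embedding: rows are sliding windows of length `embed`
          X = np.column_stack([x[i:n - embed + i + 1] for i in range(embed)])
          nu = np.arange(embed) - (embed - 1) / 2.0       # centered time offsets
          # loading matrix: Taylor-series basis with factorial scaling
          L = np.column_stack([(nu * dt) ** k / math.factorial(k)
                               for k in range(order + 1)])
          W = L @ np.linalg.inv(L.T @ L)
          return X @ W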

  18. Determination of Process Parameters in Multi-Stage Hydro-Mechanical Deep Drawing by FE Simulation

    NASA Astrophysics Data System (ADS)

    Kumar, D. Ravi; Manohar, M.

    2017-09-01

    In this work, analysis has been carried out to simulate manufacturing of a near-hemispherical-bottom part with large depth by hydro-mechanical deep drawing, with the aim of reducing the number of forming steps and the extent of thinning in the dome region. Inconel 718 has been considered as the material due to its importance in the aerospace industry. It is a Ni-based superalloy and one of the most widely used of all superalloys, primarily due to large-scale applications in aircraft engines. Using the Finite Element Method (FEM), numerical simulations have been carried out for multi-stage hydro-mechanical deep drawing using the same draw ratios and design parameters as in conventional deep drawing in four stages. The results showed that the minimum thickness in the final part can be increased significantly compared to conventional deep drawing. It has been found that the part could be deep drawn to the desired height (after trimming at the final stage) without any severe wrinkling. Blank holding force (BHF) and peak counter pressure have been found to have a strong influence on thinning in the component. Decreasing the coefficient of friction marginally increased the minimum thickness in the final component. By increasing the draw ratio and optimizing BHF, counter pressure, and die corner radius in the simulations, it has been found that it is possible to draw the final part in three stages. Thinning can be further reduced by decreasing the initial blank size without any reduction in the final height. This reduced the draw ratio at every stage, and an optimum combination of BHF and counter pressure has been found for the 3-stage process as well.

  19. A note on the efficiencies of sampling strategies in two-stage Bayesian regional fine mapping of a quantitative trait.

    PubMed

    Chen, Zhijian; Craiu, Radu V; Bull, Shelley B

    2014-11-01

    In focused studies designed to follow up associations detected in a genome-wide association study (GWAS), investigators can proceed to fine-map a genomic region by targeted sequencing or dense genotyping of all variants in the region, aiming to identify a functional sequence variant. For the analysis of a quantitative trait, we consider a Bayesian approach to fine-mapping study design that incorporates stratification according to a promising GWAS tag SNP in the same region. Improved cost-efficiency can be achieved when the fine-mapping phase incorporates a two-stage design, with identification of a smaller set of more promising variants in a subsample taken in stage 1, followed by their evaluation in an independent stage 2 subsample. To avoid the potential negative impact of genetic model misspecification on inference we incorporate genetic model selection based on posterior probabilities for each competing model. Our simulation study shows that, compared to simple random sampling that ignores genetic information from GWAS, tag-SNP-based stratified sample allocation methods reduce the number of variants continuing to stage 2 and are more likely to promote the functional sequence variant into confirmation studies. © 2014 WILEY PERIODICALS, INC.

  20. Conceptual design of a two-stage-to-orbit vehicle

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A conceptual design study of a two-stage-to-orbit vehicle is presented. Three configurations were initially investigated with one configuration selected for further development. The major objective was to place a 20,000-lb payload into a low Earth orbit using a two-stage vehicle. The first stage used air-breathing engines and employed a horizontal takeoff, while the second stage used rocket engines to achieve a 250-n.m. orbit. A two-stage-to-orbit vehicle seems a viable option for the next-generation space shuttle.

  1. Reference intervals and allometric scaling of two-dimensional echocardiographic measurements in 150 healthy cats.

    PubMed

    Karsten, Schober; Stephanie, Savino; Vedat, Yildiz

    2017-11-10

    The objective of the study was to evaluate the effects of body weight (BW), breed, and sex on two-dimensional (2D) echocardiographic measures, reference ranges, and prediction intervals using allometrically-scaled data of left atrial (LA) and left ventricular (LV) size and LV wall thickness in healthy cats. Study type was retrospective, observational, and clinical cohort. 150 healthy cats were enrolled and 2D echocardiograms analyzed. LA diameter, LV wall thickness, and LV dimension were quantified using three different imaging views. The effect of BW, breed, sex, age, and interaction (BW*sex) on echocardiographic variables was assessed using univariate and multivariate regression and linear mixed model analysis. Standard (using raw data) and allometrically scaled (Y = a × M^b) reference intervals and prediction intervals were determined. BW had a significant (P<0.05) independent effect on 2D variables whereas breed, sex, and age did not. There were clinically relevant differences between reference intervals using mean ± 2SD of raw data and mean and 95% prediction interval of allometrically-scaled variables, most prominent in larger (>6 kg) and smaller (<3 kg) cats. A clinically relevant difference between thickness of the interventricular septum (IVS) and dimension of the LV posterior wall (LVPW) was identified. In conclusion, allometric scaling and BW-based 95% prediction intervals should be preferred over conventional 2D echocardiographic reference intervals in cats, in particular in small and large cats. These results are particularly relevant to screening examinations for feline hypertrophic cardiomyopathy.
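
    The allometric prediction intervals described above can be illustrated with a simple sketch: fit log(Y) = log(a) + b log(M) by least squares and back-transform a standard regression prediction interval. The function and input names are hypothetical, not the study's code.

      import numpy as np
      from scipy import stats

      def allometric_prediction_interval(bw, y, bw_new, level=0.95):
          """95% prediction interval for Y = a * M^b at body weight bw_new."""
          x, z = np.log(bw), np.log(y)
          n = len(x)
          b, log_a = np.polyfit(x, z, 1)               # slope b, intercept log(a)
          resid = z - (log_a + b * x)
          s2 = resid @ resid / (n - 2)                 # residual variance
          x0 = np.log(bw_new)
          se = np.sqrt(s2 * (1 + 1/n + (x0 - x.mean())**2 / ((x - x.mean())**2).sum()))
          t = stats.t.ppf(0.5 + level / 2, n - 2)
          center = log_a + b * x0
          # back-transform to the original scale
          return np.exp(center), np.exp(center - t * se), np.exp(center + t * se)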

  2. Two-stage Framework for a Topology-Based Projection and Visualization of Classified Document Collections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oesterling, Patrick; Scheuermann, Gerik; Teresniak, Sven

    During the last decades, electronic textual information has become the world's largest and most important information source available. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve and combine existing individual approaches into an overall framework that supports topological analysis of high dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high dimensional information space to both two dimensional (2-D) and three dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to methods introduced recently and apply it to complex document and patent collections.
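
    As a pointer to the weighting scheme the framework starts from, here is a minimal tf-idf sketch (plain term frequency times log inverse document frequency); production systems would add tokenization, normalization, and sparse storage.

      import math
      from collections import Counter

      def tfidf(docs):
          """docs: list of token lists -> list of {term: weight} dictionaries."""
          n = len(docs)
          df = Counter(term for doc in docs for term in set(doc))  # document frequency
          out = []
          for doc in docs:
              tf = Counter(doc)
              out.append({t: (c / len(doc)) * math.log(n / df[t])
                          for t, c in tf.items()})
          return out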

  3. A Concept of Two-Stage-To-Orbit Reusable Launch Vehicle

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Wang, Xiaojun; Tang, Yihua

    2002-01-01

    Reusable Launch Vehicles (RLVs) have a capability of delivering a wide range of payloads to Earth orbit with greater reliability, lower cost, and more flexibility and operability than any of today's launch vehicles. They are the goal of future space transportation systems. Past experience with single-stage-to-orbit (SSTO) RLVs, such as NASA's NASP project, which aimed at developing a rocket-based combined-cycle (RBCC) airplane, and X-33, which aimed at developing a rocket RLV, indicates that an SSTO RLV cannot be realized in the next few years based on state-of-the-art technologies. This paper presents a concept of an all-rocket two-stage-to-orbit (TSTO) reusable launch vehicle. The TSTO RLV comprises an orbiter and a booster stage. The orbiter is mounted on top of the booster stage, and the TSTO RLV takes off vertically. At an altitude of about 50 km the booster stage separates from the orbiter, then returns and lands by parachutes and airbags, or lands horizontally by means of its own propulsion system. The orbiter continues its ascent flight and delivers the payload into LEO. After completing its orbital mission, the orbiter reenters the atmosphere, automatically flies to the ground base, and finally lands horizontally on the runway. A TSTO RLV has fewer technological difficulties and risks than an SSTO, and may be the practical approach to an RLV in the near future.

  4. Automatic sleep stage classification using two facial electrodes.

    PubMed

    Virkkala, Jussi; Velin, Riitta; Himanen, Sari-Leena; Värri, Alpo; Müller, Kiti; Hasan, Joel

    2008-01-01

    Standard sleep stage classification is based on visual analysis of central EEG, EOG and EMG signals. Automatic analysis with a reduced number of sensors has been studied as an easy alternative to the standard. In this study, a single-channel electro-oculography (EOG) algorithm was developed for separation of wakefulness, SREM, light sleep (S1, S2) and slow wave sleep (S3, S4). The algorithm was developed and tested with 296 subjects. Additional validation was performed on 16 subjects using a lightweight single-channel Alive Monitor. In the validation study, subjects attached the disposable EOG electrodes themselves at home. In separating the four stages, total agreement (and Cohen's kappa) was 74% (0.59) in the training data set, 73% (0.59) in the testing data set, and 74% (0.59) in the validation data set. Self-applicable electro-oculography with only two facial electrodes was found to provide reasonable sleep stage information.
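
    The agreement figures quoted above (total agreement with Cohen's kappa) can be reproduced for any pair of label sequences with a short sketch like the following; the stage labels are illustrative.

      from collections import Counter

      def agreement_and_kappa(a, b):
          n = len(a)
          p_obs = sum(x == y for x, y in zip(a, b)) / n      # total agreement
          ca, cb = Counter(a), Counter(b)
          p_exp = sum(ca[k] * cb[k] for k in ca) / n**2      # chance agreement
          return p_obs, (p_obs - p_exp) / (1 - p_exp)        # Cohen's kappa

      stages_auto = ['W', 'SREM', 'LS', 'LS', 'SWS', 'W']
      stages_visual = ['W', 'LS', 'LS', 'LS', 'SWS', 'W']
      print(agreement_and_kappa(stages_auto, stages_visual))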

  5. The impact of secondary-task type on the sensitivity of reaction-time based measurement of cognitive load for novices learning surgical skills using simulation.

    PubMed

    Rojas, David; Haji, Faizal; Shewaga, Rob; Kapralos, Bill; Dubrowski, Adam

    2014-01-01

    Interest in the measurement of cognitive load (CL) in simulation-based education has grown in recent years. In this paper we present two pilot experiments comparing the sensitivity of two reaction time based secondary task measures of CL. The results suggest that simple reaction time measures are sensitive enough to detect changes in CL experienced by novice learners in the initial stages of simulation-based surgical skills training.

  6. Injection blow moulding single stage process: Validation of the numerical simulation through tomography analysis

    NASA Astrophysics Data System (ADS)

    Biglione, Jordan; Béreaux, Yves; Charmeau, Jean-Yves

    2016-10-01

    The injection blow moulding single-stage process has been made available on standard injection moulding machines. Both the injection moulding stage and the blow moulding stage are carried out in one injection mould, so the dimensions of this mould are those of a conventional injection moulding mould. The fact that the two stages take place in the same mould makes the process more constrained than the conventional one. It introduces temperature gradients, molecular orientation, high stretch rates and high cooling rates, and these constraints lead to a small processing window. In practice, the preform has to remain sufficiently melted to be blown, so the process takes place between the melting temperature and the crystallization temperature. In our numerical approach, the polymer is assumed to be blown in its molten state. Hence we have identified the mechanical behaviour of the polymer in its molten state through dynamic rheology experiments. A viscous Cross model has proved relevant to the problem, with thermal dependence described by an Arrhenius law. The process is simulated with a finite element code (POLYFLOW software) in the Ansys Workbench framework. Thickness measurements using image analysis of tomography data have been performed, and comparisons with the simulation results show good agreement.
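
    A minimal sketch of the melt-rheology model described above, combining a Cross viscosity law with an Arrhenius temperature shift; all parameter values are illustrative placeholders rather than the fitted values from the study.

      import numpy as np

      def cross_viscosity(shear_rate, T, eta0_ref=2000.0, lam=0.05, n=0.35,
                          E_a=60e3, T_ref=473.15, R=8.314):
          """Cross model: eta = eta0(T) / (1 + (lam*gamma_dot)^(1-n)),
          with the zero-shear viscosity and time constant shifted by an
          Arrhenius factor a_T (time-temperature superposition)."""
          a_T = np.exp(E_a / R * (1.0 / T - 1.0 / T_ref))   # Arrhenius shift factor
          eta0 = eta0_ref * a_T                             # zero-shear viscosity at T
          return eta0 / (1.0 + (lam * a_T * shear_rate) ** (1.0 - n))

      # e.g. viscosity over a range of shear rates at 210 C (483.15 K):
      gamma_dot = np.logspace(-1, 3, 5)
      print(cross_viscosity(gamma_dot, T=483.15))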

  7. A novel approach based on preference-based index for interval bilevel linear programming problem.

    PubMed

    Ren, Aihong; Wang, Yuping; Xue, Xingsi

    2017-01-01

    This paper proposes a new methodology for solving the interval bilevel linear programming problem in which all coefficients of both objective functions and constraints are considered as interval numbers. In order to keep as much uncertainty of the original constraint region as possible, the original problem is first converted into an interval bilevel programming problem with interval coefficients in both objective functions only, through normal variation of interval numbers and chance-constrained programming. With consideration of the different preferences of different decision makers, the concept of the preference level at which the interval objective function is preferred to a target interval is defined based on the preference-based index. A preference-based deterministic bilevel programming problem is then constructed in terms of the preference level and the order relation [Formula: see text], and the concept of a preference δ-optimal solution is given. Subsequently, the constructed deterministic nonlinear bilevel problem is solved with the help of an estimation of distribution algorithm. Finally, several numerical examples are provided to demonstrate the effectiveness of the proposed approach.

  8. Interval Estimation of Seismic Hazard Parameters

    NASA Astrophysics Data System (ADS)

    Orlecka-Sikora, Beata; Lasocki, Stanislaw

    2017-03-01

    The paper considers Poisson temporal occurrence of earthquakes and presents a way to integrate uncertainties of the estimates of mean activity rate and of the magnitude cumulative distribution function into the interval estimation of the most widely used seismic hazard functions, such as the exceedance probability and the mean return period. The proposed algorithm can be used either when the Gutenberg-Richter model of magnitude distribution is accepted or when nonparametric estimation is in use. When the Gutenberg-Richter model is used, interval estimation of its parameters is based on the asymptotic normality of the maximum likelihood estimator. When nonparametric kernel estimation of the magnitude distribution is used, we propose the iterated bias-corrected and accelerated method for interval estimation based on the smoothed bootstrap and second-order bootstrap samples. The changes resulting from the integrated approach to interval estimation of the seismic hazard functions, relative to an approach that neglects the uncertainty of the mean activity rate estimates, have been studied using Monte Carlo simulations and two real data set examples. The results indicate that the uncertainty of the mean activity rate significantly affects the interval estimates of hazard functions only when the product of the activity rate and the time period for which the hazard is estimated is no more than 5.0. When this product becomes greater than 5.0, the impact of the uncertainty of the cumulative distribution function of magnitude dominates the impact of the uncertainty of the mean activity rate in the aggregated uncertainty of the hazard functions, and the interval estimates with and without inclusion of the uncertainty of the mean activity rate converge. The presented algorithm is generic and can also be applied to capture the propagation of uncertainty of estimates that are parameters of a multiparameter function onto this function.
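
    For the point estimates underlying the hazard functions named above, a minimal sketch under a Poisson occurrence model with an unbounded Gutenberg-Richter magnitude distribution follows; the paper's interval-estimation machinery (asymptotic normality, smoothed bootstrap) is omitted and the parameter values are illustrative.

      import numpy as np

      def exceedance_probability(m, t, rate, beta, m_min):
          """P(at least one event with magnitude > m within time t),
          for Poisson occurrence with activity rate `rate` above m_min."""
          sf = np.exp(-beta * (m - m_min))      # G-R survival function 1 - F(m)
          return 1.0 - np.exp(-rate * t * sf)

      def mean_return_period(m, rate, beta, m_min):
          return 1.0 / (rate * np.exp(-beta * (m - m_min)))

      # e.g. activity rate 2.5 events/yr above m_min = 2.0, beta = 2.1:
      p50 = exceedance_probability(m=4.5, t=50.0, rate=2.5, beta=2.1, m_min=2.0)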

  9. Two stage surgical procedure for root coverage

    PubMed Central

    George, Anjana Mary; Rajesh, K. S.; Hegde, Shashikanth; Kumar, Arun

    2012-01-01

    Gingival recession may present problems that include root sensitivity, esthetic concerns, predilection to root caries, cervical abrasion, and compromise of restorative efforts. When marginal tissue health cannot be maintained and recession is deep, the need for treatment arises. The literature has documented that recession can be successfully treated by means of a two-stage surgical approach, the first stage consisting of creation of attached gingiva by means of a free gingival graft, and the second stage of a laterally sliding flap of grafted tissue to cover the recession. This indirect technique ensures development of an adequate width of attached gingiva. The outcome of this technique suggests that two-stage surgical procedures are highly predictable for root coverage in cases of isolated deep recession with lack of attached gingiva. PMID:23162343

  10. Residential Two-Stage Gas Furnaces - Do They Save Energy?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lekov, Alex; Franco, Victor; Lutz, James

    2006-05-12

    Residential two-stage gas furnaces account for almost a quarter of the total number of models listed in the March 2005 GAMA directory of equipment certified for sale in the United States. Two-stage furnaces are expanding their presence in the market mostly because they meet consumer expectations for improved comfort. Currently, the U.S. Department of Energy (DOE) test procedure serves as the method for reporting furnace total fuel and electricity consumption under laboratory conditions. In 2006, American Society of Heating Refrigeration and Air-conditioning Engineers (ASHRAE) proposed an update to its test procedure which corrects some of the discrepancies found in the DOE test procedure and provides an improved methodology for calculating the energy consumption of two-stage furnaces. The objectives of this paper are to explore the differences in the methods for calculating two-stage residential gas furnace energy consumption in the DOE test procedure and in the 2006 ASHRAE test procedure and to compare test results to research results from field tests. Overall, the DOE test procedure shows a reduction in the total site energy consumption of about 3 percent for two-stage compared to single-stage furnaces at the same efficiency level. In contrast, the 2006 ASHRAE test procedure shows almost no difference in the total site energy consumption. The 2006 ASHRAE test procedure appears to provide a better methodology for calculating the energy consumption of two-stage furnaces. The results indicate that, although two-stage technology by itself does not save site energy, the combination of two-stage furnaces with BPM motors provides electricity savings, which are confirmed by field studies.

  11. Re-Infection Outcomes following One- and Two-Stage Surgical Revision of Infected Hip Prosthesis: A Systematic Review and Meta-Analysis

    PubMed Central

    Kunutsor, Setor K.; Whitehouse, Michael R.; Blom, Ashley W.; Beswick, Andrew D.

    2015-01-01

    Background The two-stage revision strategy has been claimed as being the “gold standard” for treating prosthetic joint infection. The one-stage revision strategy remains an attractive alternative option; however, its effectiveness in comparison to the two-stage strategy remains uncertain. Objective To compare the effectiveness of one- and two-stage revision strategies in treating prosthetic hip infection, using re-infection as an outcome. Design Systematic review and meta-analysis. Data Sources MEDLINE, EMBASE, Web of Science, Cochrane Library, manual search of bibliographies to March 2015, and email contact with investigators. Study Selection Cohort studies (prospective or retrospective) conducted in generally unselected patients with prosthetic hip infection treated exclusively by one- or two-stage revision and with re-infection outcomes reported within two years of revision. No clinical trials were identified. Review Methods Data were extracted by two independent investigators and a consensus was reached with involvement of a third. Rates of re-infection from 38 one-stage studies (2,536 participants) and 60 two-stage studies (3,288 participants) were aggregated using random-effect models after arcsine transformation, and were grouped by study and population level characteristics. Results In one-stage studies, the rate (95% confidence intervals) of re-infection was 8.2% (6.0–10.8). The corresponding re-infection rate after two-stage revision was 7.9% (6.2–9.7). Re-infection rates remained generally similar when grouped by several study and population level characteristics. There was no strong evidence of publication bias among contributing studies. Conclusion Evidence from aggregate published data suggest similar re-infection rates after one- or two-stage revision among unselected patients. More detailed analyses under a broader range of circumstances and exploration of other sources of heterogeneity will require collaborative pooling of individual
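
    The pooling step described above can be sketched as follows: arcsine-transform the study proportions, combine them with a DerSimonian-Laird random-effects model, and back-transform the pooled estimate; the inputs are illustrative, not the review's data.

      import numpy as np

      def pooled_proportion(events, totals):
          """Random-effects pooled proportion with arcsine transformation (sketch)."""
          events, totals = np.asarray(events, float), np.asarray(totals, float)
          y = np.arcsin(np.sqrt(events / totals))       # arcsine transform
          v = 1.0 / (4.0 * totals)                      # approximate variance
          w = 1.0 / v
          mu_fe = np.sum(w * y) / w.sum()
          q = np.sum(w * (y - mu_fe) ** 2)              # Cochran's Q
          c = w.sum() - np.sum(w ** 2) / w.sum()
          tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
          w_re = 1.0 / (v + tau2)                       # random-effects weights
          mu = np.sum(w_re * y) / w_re.sum()
          se = np.sqrt(1.0 / w_re.sum())
          back = lambda z: np.sin(np.clip(z, 0, np.pi / 2)) ** 2
          return back(mu), back(mu - 1.96 * se), back(mu + 1.96 * se)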

  12. Frequency analysis of a two-stage planetary gearbox using two different methodologies

    NASA Astrophysics Data System (ADS)

    Feki, Nabih; Karray, Maha; Khabou, Mohamed Tawfik; Chaari, Fakher; Haddar, Mohamed

    2017-12-01

    This paper is focused on the characterization of the frequency content of vibration signals issued from a two-stage planetary gearbox. To achieve this goal, two different methodologies are adopted: the lumped-parameter modeling approach and the phenomenological modeling approach. The two methodologies aim to describe the complex vibrations generated by a two-stage planetary gearbox. The phenomenological model describes directly the vibrations as measured by a sensor fixed outside the fixed ring gear with respect to an inertial reference frame, while results from a lumped-parameter model are referenced with respect to a rotating frame and then transferred into an inertial reference frame. Two different case studies of the two-stage planetary gear are adopted to describe the vibration and the corresponding spectra using both models. Each case presents a specific geometry and a specific spectral structure.

  13. Laparoscopic staging for apparent stage I epithelial ovarian cancer.

    PubMed

    Melamed, Alexander; Keating, Nancy L; Clemmer, Joel T; Bregar, Amy J; Wright, Jason D; Boruta, David M; Schorge, John O; Del Carmen, Marcela G; Rauh-Hain, J Alejandro

    2017-01-01

    Whereas advances in minimally invasive surgery have made laparoscopic staging technically feasible in stage I epithelial ovarian cancer, the practice remains controversial because of an absence of randomized trials and lack of high-quality observational studies demonstrating equivalent outcomes. This study seeks to evaluate the association of laparoscopic staging with survival among women with clinical stage I epithelial ovarian cancer. We used the National Cancer Data Base to identify all women who underwent surgical staging for clinical stage I epithelial ovarian cancer diagnosed from 2010 through 2012. The exposure of interest was planned surgical approach (laparoscopy vs laparotomy), and the primary outcome was overall survival. The primary analysis was based on an intention to treat: all women whose procedures were initiated laparoscopically were categorized as having had a planned laparoscopic procedure, regardless of subsequent conversion to laparotomy. We used propensity methods to match patients who underwent planned laparoscopic staging with similar patients who underwent planned laparotomy based on observed characteristics. We compared survival among the matched cohorts using the Kaplan-Meier method and Cox regression. We compared the extent of lymphadenectomy using the Wilcoxon rank-sum test. Among 4798 eligible patients, 1112 (23.2%) underwent procedures that were initiated laparoscopically, of which 190 (17%) were converted to laparotomy. Women who underwent planned laparoscopy were more frequently white and privately insured, came from wealthier ZIP codes, more often received care in community cancer centers, and had smaller tumors that were more frequently of serous and less often of mucinous histology than those who underwent staging via planned laparotomy. After propensity score matching, time to death did not differ between patients undergoing planned laparoscopic vs open staging (hazard ratio, 0.77, 95% confidence interval, 0.54-1.09; P = .13). Planned

  14. LP01 to LP0m mode converters using all-fiber two-stage tapers

    NASA Astrophysics Data System (ADS)

    Mellah, Hakim; Zhang, Xiupu; Shen, Dongya

    2015-11-01

    A mode converter between the LP01 and LP0m modes is proposed using two stages of tapers. The first stage is formed by adiabatically tapering a circular fiber to excite the desired LP0m mode. The second stage is formed by inserting an inner core (tapered from both sides) with a refractive index smaller than that of the original core; this stage is used to obtain low insertion loss and a high extinction ratio for the desired LP0m mode. Three converters between LP01 and LP0m, m = 2, 3, and 4, are designed for the C-band, and simulation results show insertion losses of less than 0.24, 0.54 and 0.7 dB and extinction ratios higher than 15, 16, and 17.5 dB over the entire band for the three converters, respectively.

  15. Two-stage high frequency pulse tube cooler for refrigeration at 25 K

    NASA Astrophysics Data System (ADS)

    Dietrich, M.; Thummes, G.

    2010-04-01

    A two-stage Stirling-type U-shape pulse tube cryocooler driven by a 10 kW-class linear compressor was designed, built and tested. A special feature of the cold head is the absence of a heat exchanger at the cold end of the first stage, since the intended application requires no cooling power at this intermediate temperature. Simulations were done using SAGE software to find optimum operating conditions and cold head geometry. Flow-impedance matching was required to connect the compressor, designed for 60 Hz operation, to the 40 Hz cold head. A cooling power of 12.9 W at 25 K with an electrical input power of 4.6 kW has been achieved so far. The lowest temperature reached is 13.7 K.

  16. Hybrid maize breeding with doubled haploids: I. One-stage versus two-stage selection for testcross performance.

    PubMed

    Longin, C Friedrich H; Utz, H Friedrich; Reif, Jochen C; Schipprack, Wolfgang; Melchinger, Albrecht E

    2006-03-01

    Optimum allocation of resources is of fundamental importance for the efficiency of breeding programs. The objectives of our study were to (1) determine the optimum allocation of the number of lines and test locations in hybrid maize breeding with doubled haploids (DHs) with regard to two optimization criteria, the selection gain ΔG(k) and the probability P(k) of identifying superior genotypes, (2) compare both optimization criteria including their standard deviations (SDs), and (3) investigate the influence of the production costs of DHs on the optimum allocation. For different budgets, numbers of finally selected lines, ratios of variance components, and production costs of DHs, the optimum allocation of test resources under one- and two-stage selection for testcross performance with a given tester was determined using Monte Carlo simulations. In one-stage selection, lines are tested in field trials in a single year. In two-stage selection, optimum allocation of resources involves evaluation of (1) a large number of lines at a small number of test locations in the first year and (2) a small number of the selected superior lines at a large number of test locations in the second year, thereby maximizing both optimization criteria. Furthermore, to have a realistic chance of identifying a superior genotype, the probability P(k) of identifying superior genotypes should be greater than 75%. For budgets between 200 and 5,000 field plot equivalents, P(k) > 75% was reached only for genotypes belonging to the best 5% of the population. As the optimum allocation for P(k) (5%) was similar to that for ΔG(k), the choice of the optimization criterion was not crucial. The production costs of DHs had only a minor effect on the optimum number of locations and on the values of the optimization criteria.
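
    A minimal sketch of the Monte Carlo idea behind the study: simulate true testcross values and stage-wise phenotypic means, apply two-stage truncation selection, and estimate the selection gain as the mean true value of the finally selected lines. Counts and variance components are illustrative placeholders, not the paper's settings.

      import numpy as np

      def two_stage_gain(n1=500, loc1=2, n2=50, loc2=8, k=5,
                         var_g=1.0, var_e=4.0, reps=2000, seed=0):
          """Expected selection gain of a two-stage scheme, in sigma_g units
          when var_g = 1 (illustrative Monte Carlo sketch)."""
          rng = np.random.default_rng(seed)
          gains = []
          for _ in range(reps):
              g = rng.normal(0.0, np.sqrt(var_g), n1)              # true values
              p1 = g + rng.normal(0.0, np.sqrt(var_e / loc1), n1)  # stage-1 means
              keep = np.argsort(p1)[-n2:]                          # advance best n2
              p2 = g[keep] + rng.normal(0.0, np.sqrt(var_e / loc2), n2)
              final = keep[np.argsort(p2)[-k:]]                    # final k lines
              gains.append(g[final].mean())
          return float(np.mean(gains))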

  17. Potential of extended airbreathing operation of a two-stage launch vehicle by scramjet propulsion

    NASA Astrophysics Data System (ADS)

    Schoettle, U. M.; Hillesheimer, M.; Rahn, M.

    This paper examines the application of scramjet propulsion to extend the ramjet operation of an airbreathing two-stage launch vehicle designed for horizontal takeoff and landing. Performance comparisons are made for the two alternative propulsion concepts. The mission performance predictions presented are obtained from a multistep optimization procedure employing both trajectory optimization and vehicle design steps to achieve maximum payload capability. The simulation results show an attractive payload advantage of the scramjet variant over the ramjet-powered vehicle.

  18. Confidence intervals for correlations when data are not normal.

    PubMed

    Bishara, Anthony J; Hittner, James B

    2017-02-01

    With nonnormal data, the typical confidence interval of the correlation (Fisher z') may be inaccurate. The literature has been unclear as to which of several alternative methods should be used instead, and how extreme a violation of normality is needed to justify an alternative. Through Monte Carlo simulation, 11 confidence interval methods were compared, including Fisher z', two Spearman rank-order methods, the Box-Cox transformation, rank-based inverse normal (RIN) transformation, and various bootstrap methods. Nonnormality often distorted the Fisher z' confidence interval, for example leading to a 95% confidence interval that had actual coverage as low as 68%. Increasing the sample size sometimes worsened this problem. Inaccurate Fisher z' intervals could be predicted by a sample kurtosis of at least 2, an absolute sample skewness of at least 1, or significant violations of normality hypothesis tests. Only the Spearman rank-order and RIN transformation methods were universally robust to nonnormality. Among the bootstrap methods, an observed imposed bootstrap came closest to accurate coverage, though it often resulted in an overly long interval. The results suggest that sample nonnormality can justify avoidance of the Fisher z' interval in favor of a more robust alternative. R code for the relevant methods is provided in supplementary materials.
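
    For reference, the standard interval under discussion is easy to state: transform r with Fisher's z' = atanh(r), add a normal-quantile margin with standard error 1/sqrt(n - 3), and back-transform, as in the sketch below.

      import math

      def fisher_z_ci(r, n, z_crit=1.96):
          z = math.atanh(r)                    # Fisher z' transform
          half = z_crit / math.sqrt(n - 3)     # standard error of z'
          return math.tanh(z - half), math.tanh(z + half)

      print(fisher_z_ci(0.42, 50))   # approximately (0.16, 0.63)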

  19. A novel hybrid actuation mechanism based XY nanopositioning stage with totally decoupled kinematics

    NASA Astrophysics Data System (ADS)

    Zhu, Wu-Le; Zhu, Zhiwei; Guo, Ping; Ju, Bing-Feng

    2018-01-01

    This paper reports the design, analysis and testing of a parallel two-degree-of-freedom piezo-actuated compliant stage for XY nanopositioning, built around an innovative hybrid actuation mechanism. The mechanism mainly features the combination of two Scott-Russell mechanisms and a half-bridge mechanism for double-stage displacement amplification as well as moving-direction modulation. By adopting leaf-type double parallelogram (LTDP) structures at both the input and output ends of the hybrid mechanism, the lateral stiffness and dynamic characteristics are significantly improved while parasitic motions are greatly reduced. The XY nanopositioning stage is constructed with two orthogonally configured hybrid mechanisms along with the LTDP mechanisms for totally decoupled kinematics at both input and output ends. An analytical model was established to describe the complete elastic deformation behavior of the stage, with further verification through finite element simulation. Finally, experiments were implemented to comprehensively evaluate both the static and dynamic performance of the proposed stage. Closed-loop control of the piezoelectric actuators (PEAs) by integrating strain gauges was also conducted to effectively eliminate the nonlinear hysteresis of the stage.

  20. A method for analyzing clustered interval-censored data based on Cox's model.

    PubMed

    Kor, Chew-Teng; Cheng, Kuang-Fu; Chen, Yi-Hau

    2013-02-28

    Methods for analyzing interval-censored data are well established. Unfortunately, these methods are inappropriate for studies with correlated data. In this paper, we focus on developing a method for analyzing clustered interval-censored data. Our method is based on Cox's proportional hazards model with a piecewise-constant baseline hazard function. The correlation structure of the data can be modeled using Clayton's copula or an independence model with proper adjustment of the covariance estimation. We establish estimating equations for the regression parameters and baseline hazards (and a parameter in the copula) simultaneously. Simulation results confirm that the point estimators follow a multivariate normal distribution, and our proposed variance estimates are reliable. In particular, we found that the approach with the independence model worked well even when the true correlation model was derived from Clayton's copula. We applied our method to a family-based cohort study of pandemic H1N1 influenza in Taiwan during 2009-2010. Using the proposed method, we investigate the impact of vaccination and family contacts on the incidence of pH1N1 influenza. Copyright © 2012 John Wiley & Sons, Ltd.
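
    One ingredient of the setup above, cluster-level dependence through Clayton's copula, can be sketched by the standard conditional-inversion sampler below (here with exponential margins); the paper's estimating equations and interval-censoring mechanics are omitted.

      import numpy as np

      def clayton_pair(n_pairs, theta, rate=0.1, seed=0):
          """Bivariate event times with Clayton(theta) dependence, theta > 0."""
          rng = np.random.default_rng(seed)
          u1 = rng.uniform(size=n_pairs)
          v = rng.uniform(size=n_pairs)
          # conditional inversion of the Clayton copula
          u2 = (u1 ** (-theta) * (v ** (-theta / (theta + 1)) - 1) + 1) ** (-1 / theta)
          t1 = -np.log(1 - u1) / rate          # exponential margins via inverse CDF
          t2 = -np.log(1 - u2) / rate
          return t1, t2

      # Interval censoring could then be imposed by recording only the
      # inspection interval (L, R] that contains each event time.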

  1. Overview of the Beta II Two-Stage-To-Orbit vehicle design

    NASA Technical Reports Server (NTRS)

    Plencner, Robert M.

    1991-01-01

    A study of a near-term, low risk two-stage-to-orbit (TSTO) vehicle was undertaken. The goal of the study was to assess a fully reusable TSTO vehicle with horizontal takeoff and landing capability that could deliver 10,000 pounds to a 120 nm polar orbit. The configuration analysis was based on the Beta vehicle design. A cooperative study was performed to redesign and refine the Beta concept to meet the mission requirements. The vehicle resulting from this study was named Beta II. It has an all-airbreathing first stage and a staging Mach number of 6.5. The second stage is a conventional wing-body configuration with a single SSME.

  2. Enzymatic synthesis of extra virgin olive oil based infant formula fat analogues containing ARA and DHA: one-stage and two-stage syntheses.

    PubMed

    Pande, Garima; Sabir, Jamal S M; Baeshen, Nabih A; Akoh, Casimir C

    2013-11-06

    Structured lipids (SLs) with high palmitic acid content at the sn-2 position enriched with arachidonic acid (ARA) and docosahexaenoic acid (DHA) were produced using extra virgin olive oil, tripalmitin, ARA and DHA single cell oil free fatty acids. Four types of SLs were synthesized using immobilized lipases, Novozym 435 and Lipozyme TL IM, based on one-stage (one-pot) and two-stage (sequential) syntheses. The SLs were characterized for fatty acid profile, triacylglycerol (TAG) molecular species, melting and crystallization profiles, tocopherols, and phenolic compounds. All the SLs had >50 mol % palmitic acid at the sn-2 position. The predominant TAGs in all SLs were PPO and OPO. The total tocopherol content of SL1-1, SL1-2, SL2-1, and SL2-2 were 70.46, 68.79, 79.64, and 79.31 μg/g, respectively. SL1-2 had the highest melting completion (42.0 °C) and crystallization onset (27.6 °C) temperatures. All the SLs produced in this study may be suitable as infant formula fat analogues.

  3. Two-stage commercial evaluation of engineering systems production projects for high-rise buildings

    NASA Astrophysics Data System (ADS)

    Bril, Aleksander; Kalinina, Olga; Levina, Anastasia

    2018-03-01

    The paper addresses the current and much-debated problem of selecting effective innovative enterprises for venture financing. A two-stage system of commercial evaluation of innovations based on the UNIDO methodology is proposed. Engineering systems account for 25 to 40% of the cost of high-rise residential buildings, and this proportion increases with the use of new construction technologies. Analysis of the construction market in Russia showed that production of internal engineering system elements based on innovative technologies is growing. Production of simple elements is organized in small enterprises on the basis of new technologies. The most attractive approach for development is venture financing of small innovative businesses. To improve the efficiency of these operations, the paper proposes a methodology for two-stage evaluation of small business development projects. A two-stage system of commercial evaluation of innovative projects creates an information base for informed and coordinated decision-making on venture financing of enterprises that produce engineering system elements for the construction business.

  4. A two-stage approach to removing noise from recorded music

    NASA Astrophysics Data System (ADS)

    Berger, Jonathan; Goldberg, Maxim J.; Coifman, Ronald C.

    2004-05-01

    A two-stage algorithm for removing noise from recorded music signals (first proposed in Berger et al., ICMC, 1995) is described and updated. The first stage selects the ``best'' local trigonometric basis for the signal and models noise as the part having high entropy [see Berger et al., J. Audio Eng. Soc. 42(10), 808-818 (1994)]. In the second stage, the original source and the model of the noise obtained from the first stage are expanded into dyadic trees of smooth local sine bases. The best basis for the source signal is extracted using a relative entropy function (the Kullback-Leibler distance) to compare the sum of the costs of the children nodes to the cost of their parent node; energies of the noise in corresponding nodes of the model noise tree are used as weights. The talk will include audio examples of various stages of the method and proposals for further research.
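
    The best-basis principle described above can be sketched with the classic bottom-up comparison of a parent node's cost against the summed cost of its children's best bases. The sketch uses a plain Shannon-entropy cost on a dictionary-based coefficient tree rather than the paper's weighted Kullback-Leibler cost, and the tree layout is illustrative.

      import numpy as np

      def entropy_cost(coeffs):
          """Normalized Shannon entropy of a coefficient energy distribution.
          (Classic best-basis implementations use an additive, unnormalized
          cost; the normalized form keeps this sketch self-contained.)"""
          p = coeffs ** 2 / np.sum(coeffs ** 2)
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      def best_basis(tree, node=0):
          """tree: dict node -> coefficient array; children of i are 2i+1, 2i+2.
          Returns (list of selected nodes, total cost)."""
          left, right = 2 * node + 1, 2 * node + 2
          if left not in tree:                     # leaf: keep as-is
              return [node], entropy_cost(tree[node])
          lb, lc = best_basis(tree, left)
          rb, rc = best_basis(tree, right)
          parent_cost = entropy_cost(tree[node])
          if parent_cost <= lc + rc:               # parent beats children's bases
              return [node], parent_cost
          return lb + rb, lc + rc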

  5. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap.

    PubMed

    Zhou, Hanzhi; Elliott, Michael R; Raghunathan, Trivellore E

    2016-06-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in "Delta-V," a key crash severity measure.
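
    The core resampling step can be sketched as a weighted Polya-urn draw: units are added to a synthetic population with probability driven by their case weights and by how often they have already been selected. This is a simplified single-draw sketch of the weighted finite-population Bayesian bootstrap idea, not the authors' full procedure.

      import numpy as np

      def synthetic_population(values, weights, N, seed=0):
          """Draw one synthetic population of size N from a weighted sample.
          Assumes design weights w_i > 1 that sum approximately to N (sketch)."""
          rng = np.random.default_rng(seed)
          values = np.asarray(values)
          w = np.asarray(weights, dtype=float)
          n = len(values)
          counts = np.zeros(n, dtype=int)          # times each case was re-selected
          for _ in range(N - n):                   # fill in the unobserved N - n units
              p = w - 1.0 + counts * (N - n) / n   # weighted Polya-urn probabilities
              p = np.clip(p, 0.0, None)
              p /= p.sum()
              j = rng.choice(n, p=p)
              counts[j] += 1
          return np.repeat(values, 1 + counts)     # each case plus its synthetic copies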

  6. Multiple Imputation in Two-Stage Cluster Samples Using The Weighted Finite Population Bayesian Bootstrap

    PubMed Central

    Zhou, Hanzhi; Elliott, Michael R.; Raghunathan, Trivellore E.

    2017-01-01

    Multistage sampling is often employed in survey samples for cost and convenience. However, accounting for clustering features when generating datasets for multiple imputation is a nontrivial task, particularly when, as is often the case, cluster sampling is accompanied by unequal probabilities of selection, necessitating case weights. Thus, multiple imputation often ignores complex sample designs and assumes simple random sampling when generating imputations, even though failing to account for complex sample design features is known to yield biased estimates and confidence intervals that have incorrect nominal coverage. In this article, we extend a recently developed, weighted, finite-population Bayesian bootstrap procedure to generate synthetic populations conditional on complex sample design data that can be treated as simple random samples at the imputation stage, obviating the need to directly model design features for imputation. We develop two forms of this method: one where the probabilities of selection are known at the first and second stages of the design, and the other, more common in public use files, where only the final weight based on the product of the two probabilities is known. We show that this method has advantages in terms of bias, mean square error, and coverage properties over methods where sample designs are ignored, with little loss in efficiency, even when compared with correct fully parametric models. An application is made using the National Automotive Sampling System Crashworthiness Data System, a multistage, unequal probability sample of U.S. passenger vehicle crashes, which suffers from a substantial amount of missing data in “Delta-V,” a key crash severity measure. PMID:29226161

  7. Confidence Intervals from Realizations of Simulated Nuclear Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Younes, W.; Ratkiewicz, A.; Ressler, J. J.

    2017-09-28

    Various statistical techniques are discussed that can be used to assign a level of confidence to the predictions of models that depend on input data with known uncertainties and correlations. The particular techniques reviewed in this paper are: 1) random realizations of the input data using Monte Carlo methods, 2) the construction of confidence intervals to assess the reliability of model predictions, and 3) resampling techniques to impose statistical constraints on the input data based on additional information. These techniques are illustrated with a calculation of the k_eff value, based on the 235U(n,f) and 239Pu(n,f) cross sections.
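
    Techniques 1) and 2) above combine naturally into a short sketch: draw correlated realizations of the inputs, propagate each through the model, and read a confidence interval off the percentiles of the predictions. The toy model and covariance below are illustrative stand-ins, not evaluated nuclear data.

      import numpy as np

      def prediction_interval(model, mean, cov, n_real=10000, level=0.95, seed=0):
          rng = np.random.default_rng(seed)
          draws = rng.multivariate_normal(mean, cov, size=n_real)  # input realizations
          preds = np.array([model(x) for x in draws])              # propagate
          lo, hi = np.percentile(preds, [50 * (1 - level), 50 * (1 + level)])
          return preds.mean(), lo, hi

      # e.g. a toy "k" that depends on two cross-section-like inputs:
      model = lambda x: 1.0 + 0.3 * x[0] - 0.1 * x[1]
      print(prediction_interval(model, mean=[1.0, 2.0],
                                cov=[[0.01, 0.002], [0.002, 0.04]]))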

  8. Simulative design and process optimization of the two-stage stretch-blow molding process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-05-22

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  9. Simulative design and process optimization of the two-stage stretch-blow molding process

    NASA Astrophysics Data System (ADS)

    Hopmann, Ch.; Rasche, S.; Windeck, C.

    2015-05-01

    The total production costs of PET bottles are significantly affected by the costs of raw material. Approximately 70 % of the total costs are spent for the raw material. Therefore, stretch-blow molding industry intends to reduce the total production costs by an optimized material efficiency. However, there is often a trade-off between an optimized material efficiency and required product properties. Due to a multitude of complex boundary conditions, the design process of new stretch-blow molded products is still a challenging task and is often based on empirical knowledge. Application of current CAE-tools supports the design process by reducing development time and costs. This paper describes an approach to determine optimized preform geometry and corresponding process parameters iteratively. The wall thickness distribution and the local stretch ratios of the blown bottle are calculated in a three-dimensional process simulation. Thereby, the wall thickness distribution is correlated with an objective function and preform geometry as well as process parameters are varied by an optimization algorithm. Taking into account the correlation between material usage, process history and resulting product properties, integrative coupled simulation steps, e.g. structural analyses or barrier simulations, are performed. The approach is applied on a 0.5 liter PET bottle of Krones AG, Neutraubling, Germany. The investigations point out that the design process can be supported by applying this simulative optimization approach. In an optimization study the total bottle weight is reduced from 18.5 g to 15.5 g. The validation of the computed results is in progress.

  10. A Comparison of Two-Stage Approaches for Fitting Nonlinear Ordinary Differential Equation (ODE) Models with Mixed Effects

    PubMed Central

    Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam

    2016-01-01

    Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255

  11. A two-stage Monte Carlo approach to the expression of uncertainty with finite sample sizes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crowder, Stephen Vernon; Moyer, Robert D.

    2005-05-01

    Proposed Supplement 1 to the GUM outlines a 'propagation of distributions' approach to deriving the distribution of a measurand for any non-linear function and for any set of random inputs. The supplement's proposed Monte Carlo approach assumes that the distributions of the random inputs are known exactly, which implies that the sample sizes are effectively infinite. In this case, the mean of the measurand can be determined precisely using a large number of Monte Carlo simulations. In practice, however, the distributions of the inputs will rarely be known exactly, but must be estimated using possibly small samples. If these approximated distributions are treated as exact, the uncertainty in estimating the mean is not properly taken into account. In this paper, we propose a two-stage Monte Carlo procedure that explicitly takes into account the finite sample sizes used to estimate the parameters of the input distributions. We illustrate the approach with a case study involving the efficiency of a thermistor mount power sensor. The performance of the proposed approach is compared to the standard GUM approach for finite samples using simple non-linear measurement equations, with performance assessed in terms of the coverage probabilities of the derived confidence intervals.
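
    A minimal sketch of the proposed two-stage idea, assuming normally distributed inputs summarized by finite samples: an outer loop re-draws plausible input-distribution parameters from their sampling distributions, and an inner loop performs the usual propagation of distributions. The measurement equation and sample summaries are illustrative.

      import numpy as np

      def two_stage_mc(xbar, s, n, measurand, outer=200, inner=2000, seed=0):
          """xbar, s, n: per-input sample means, sds and sizes (arrays)."""
          rng = np.random.default_rng(seed)
          xbar, s, n = map(np.asarray, (xbar, s, n))
          out = []
          for _ in range(outer):
              # stage 1: parameter uncertainty from finite samples
              mu = rng.normal(xbar, s / np.sqrt(n))
              var = s**2 * (n - 1) / rng.chisquare(n - 1)
              # stage 2: ordinary propagation of distributions
              draws = rng.normal(mu, np.sqrt(var), size=(inner, len(xbar)))
              out.append(measurand(draws))
          return np.concatenate(out)

      y = two_stage_mc([10.0, 2.0], [0.1, 0.05], [5, 8],
                       measurand=lambda x: x[:, 0] / x[:, 1])
      ci = np.percentile(y, [2.5, 97.5])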

  12. Two-stage energy storage equalization system for lithium-ion battery pack

    NASA Astrophysics Data System (ADS)

    Chen, W.; Yang, Z. X.; Dong, G. Q.; Li, Y. B.; He, Q. Y.

    2017-11-01

    How to raise the efficiency of energy storage and maximize storage capacity is a core problem in current energy storage management. To address it, a two-stage energy storage equalization system, comprising a two-stage equalization topology and a control strategy based on a symmetric multi-winding transformer and a DC-DC (direct current-direct current) converter, is proposed on the basis of bidirectional active equalization theory. The objective is to keep the voltages of lithium-ion battery packs, and of the cells inside each pack, consistent, using the range method. Modeling analysis demonstrates that the voltage dispersion of packs and of cells inside packs can be kept within 2 percent during charging and discharging. Equalization time was 0.5 ms, a 33.3 percent reduction compared with a DC-DC converter alone. Therefore, the proposed two-stage lithium-ion battery equalization system can achieve maximum storage capacity across packs and cells inside packs, while significantly improving the efficiency of energy storage.

  13. Two-Speed Gearbox Dynamic Simulation Predictions and Test Validation

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; DeSmidt, Hans; Smith, Edward C.; Bauman, Steven W.

    2010-01-01

    Dynamic simulations and experimental validation tests were performed on a two-stage, two-speed gearbox as part of the drive system research activities of the NASA Fundamental Aeronautics Subsonics Rotary Wing Project. The gearbox was driven by two electromagnetic motors and had two electromagnetic, multi-disk clutches to control output speed. A dynamic model of the system was created which included a direct current electric motor with proportional-integral-derivative (PID) speed control, a two-speed gearbox with dual electromagnetically actuated clutches, and an eddy current dynamometer. A six degree-of-freedom model of the gearbox accounted for the system torsional dynamics and included gear, clutch, shaft, and load inertias as well as shaft flexibilities and a dry clutch stick-slip friction model. Experimental validation tests were performed on the gearbox in the NASA Glenn gear noise test facility. Gearbox output speed and torque as well as drive motor speed and current were compared to those from the analytical predictions. The experiments correlate very well with the predictions, thus validating the dynamic simulation methodologies.

  14. Maximum likelihood estimation of signal detection model parameters for the assessment of two-stage diagnostic strategies.

    PubMed

    Lirio, R B; Dondériz, I C; Pérez Abalo, M C

    1992-08-01

    The methodology of Receiver Operating Characteristic curves based on the signal detection model is extended to evaluate the accuracy of two-stage diagnostic strategies. A computer program is developed for the maximum likelihood estimation of parameters that characterize the sensitivity and specificity of two-stage classifiers according to this extended methodology. Its use is briefly illustrated with data collected in a two-stage screening for auditory defects.
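
    For intuition about the trade-off such strategies involve, a back-of-envelope calculation is useful. The sketch below computes net operating characteristics for a serial protocol in which only stage-1 positives are referred to stage 2, assuming conditional independence of the two tests; the paper's maximum likelihood machinery estimates these quantities from ROC rating data rather than assuming them.

```python
def serial_two_stage(se1, sp1, se2, sp2, prevalence):
    """Net accuracy of a serial two-stage screen (only stage-1 positives
    proceed to stage 2; a case is called positive only if both stages are
    positive). Assumes the tests err independently given true status."""
    se = se1 * se2                      # must be detected at both stages
    sp = sp1 + (1 - sp1) * sp2          # a negative at either stage clears
    ppv = se * prevalence / (se * prevalence + (1 - sp) * (1 - prevalence))
    return se, sp, ppv

# Illustrative values only, e.g. for an auditory screening protocol.
print(serial_two_stage(0.95, 0.80, 0.90, 0.90, 0.02))
```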

  15. Revision of Infected Total Knee Arthroplasty: Two-Stage Reimplantation Using an Antibiotic-Impregnated Static Spacer

    PubMed Central

    Almeida, Fernando; Renovell, Pablo; Morante, Elena; López, Raúl

    2013-01-01

    Background A two-stage revision remains the "gold standard" treatment for chronically infected total knee arthroplasties. Methods Forty-five septic knee prostheses were revised with a minimum follow-up of 5 years. Static antibiotic-impregnated cement spacers were used in all cases. Intravenous antibiotics, chosen according to the sensitivity profile of the cultured organism, were administered during the patients' hospital stay, and oral antibiotics were given for another 5 weeks. Second-stage surgery was undertaken after control of infection, with normal erythrocyte sedimentation rate and C-reactive protein values. Extensile techniques were used if needed, and metallic augments were employed for bone loss in 32 femoral and 29 tibial revisions. Results The average interval between the first-stage resection and reimplantation was 4.4 months. Significant improvement was obtained in visual analog scale pain scores and in clinical and functional scores, and infection was eradicated in 95.6% of cases following two-stage revision total knee arthroplasty. Radiographic evaluation showed suitable alignment without signs of mechanical loosening. Conclusions This technique is a reasonable procedure to eradicate chronic infection in knee arthroplasty and provides proper functional and clinical results. However, it sometimes requires extensile surgical approaches that can make for arduous surgery. Metallic augments with cementless stems, available in most knee revision systems, are a suitable alternative for handling bone deficiencies, avoiding the use of bone allografts and their complications. PMID:24009903

  16. Revision of infected total knee arthroplasty: two-stage reimplantation using an antibiotic-impregnated static spacer.

    PubMed

    Silvestre, Antonio; Almeida, Fernando; Renovell, Pablo; Morante, Elena; López, Raúl

    2013-09-01

    A two-stage revision remains the "gold standard" treatment for chronically infected total knee arthroplasties. Forty-five septic knee prostheses were revised with a minimum follow-up of 5 years. Static antibiotic-impregnated cement spacers were used in all cases. Intravenous antibiotics, chosen according to the sensitivity profile of the cultured organism, were administered during the patients' hospital stay, and oral antibiotics were given for another 5 weeks. Second-stage surgery was undertaken after control of infection, with normal erythrocyte sedimentation rate and C-reactive protein values. Extensile techniques were used if needed, and metallic augments were employed for bone loss in 32 femoral and 29 tibial revisions. The average interval between the first-stage resection and reimplantation was 4.4 months. Significant improvement was obtained in visual analog scale pain scores and in clinical and functional scores, and infection was eradicated in 95.6% of cases following two-stage revision total knee arthroplasty. Radiographic evaluation showed suitable alignment without signs of mechanical loosening. This technique is a reasonable procedure to eradicate chronic infection in knee arthroplasty and provides proper functional and clinical results. However, it sometimes requires extensile surgical approaches that can make for arduous surgery. Metallic augments with cementless stems, available in most knee revision systems, are a suitable alternative for handling bone deficiencies, avoiding the use of bone allografts and their complications.

  17. Robust Frequency-Domain Constrained Feedback Design via a Two-Stage Heuristic Approach.

    PubMed

    Li, Xianwei; Gao, Huijun

    2015-10-01

    Based on a two-stage heuristic method, this paper is concerned with the design of robust feedback controllers with restricted frequency-domain specifications (RFDSs) for uncertain linear discrete-time systems. Polytopic uncertainties are assumed to enter all the system matrices, while RFDSs are motivated by the fact that practical design specifications are often described in restricted finite frequency ranges. Dilated multipliers are first introduced to relax the generalized Kalman-Yakubovich-Popov lemma for output feedback controller synthesis and robust performance analysis. Then a two-stage approach to output feedback controller synthesis is proposed: at the first stage, a robust full-information (FI) controller is designed, which is used to construct a required output feedback controller at the second stage. To improve the solvability of the synthesis method, heuristic iterative algorithms are further formulated for exploring the feedback gain and optimizing the initial FI controller at the individual stage. The effectiveness of the proposed design method is finally demonstrated by the application to active control of suspension systems.

  18. Results of a space shuttle plume impingement investigation at stage separation in the NASA-MSFC impulse base flow facility

    NASA Technical Reports Server (NTRS)

    Mccanna, R. W.; Sims, W. H.

    1972-01-01

    Results are presented for an experimental space shuttle stage separation plume impingement program conducted in the NASA-Marshall Space Flight Center's impulse base flow facility (IBFF). Major objectives of the investigation were to: (1) determine the degree of dual engine exhaust plume simulation obtained using the equivalent engine; (2) determine the applicability of the analytical techniques; and (3) obtain data applicable for use in full-scale studies. The IBFF tests determined the orbiter rocket motor plume impingement loads, both pressure and heating, on a 3 percent General Dynamics B-15B booster configuration in a quiescent environment simulating a nominal staging altitude of 73.2 km (240,000 ft). The data included plume surveys of two 3 percent scale orbiter nozzles and a 4.242 percent scale equivalent nozzle - equivalent in the sense that it was designed to have the same nozzle throat-to-area ratio as the two 3 percent nozzles, which was achieved within the tolerances assigned for machining the hardware.

  19. Two-temperature model in molecular dynamics simulations of cascades in Ni-based alloys

    DOE PAGES

    Zarkadoula, Eva; Samolyuk, German; Weber, William J.

    2017-01-03

    In high-energy irradiation events, energy from the fast moving ion is transferred to the system via nuclear and electronic energy loss mechanisms. The nuclear energy loss results in the creation of point defects and clusters, while the energy transferred to the electrons results in the creation of high electronic temperatures, which can affect the damage evolution. In this paper, we perform molecular dynamics simulations of 30 keV and 50 keV Ni ion cascades in nickel-based alloys without and with the electronic effects taken into account. We compare the results of classical molecular dynamics (MD) simulations, where the electronic effects are ignored, with results from simulations that include the electronic stopping only, as well as simulations where both the electronic stopping and the electron-phonon coupling are incorporated, as described by the two-temperature model (2T-MD). Our results indicate that the 2T-MD leads to a smaller amount of damage, more isolated defects and smaller defect clusters.
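
    The continuum half of the two-temperature model couples an electronic heat bath to the lattice through an electron-phonon coupling constant. The 1-D finite-difference sketch below shows that coupling structure with purely illustrative parameter values (not those of Ni or of this paper); in an actual 2T-MD code the lattice equation is replaced by the MD ions themselves.

```python
import numpy as np

nx, dx, dt, steps = 200, 1e-9, 1e-17, 2000  # grid and time step (m, s)
Ce, Cl = 3.0e4, 2.5e6   # electron / lattice volumetric heat capacity, J m^-3 K^-1
ke, g = 90.0, 1.0e17    # electron conductivity (W m^-1 K^-1), e-ph coupling (W m^-3 K^-1)

Te = np.full(nx, 300.0)
Tl = np.full(nx, 300.0)
Te[nx // 2 - 5 : nx // 2 + 5] = 5000.0  # hot electron track left by the ion

for _ in range(steps):
    # explicit diffusion of the electronic temperature (periodic boundaries)
    lap = (np.roll(Te, 1) - 2 * Te + np.roll(Te, -1)) / dx ** 2
    exchange = g * (Te - Tl)            # energy flowing from electrons to lattice
    Te = Te + dt * (ke * lap - exchange) / Ce
    Tl = Tl + dt * exchange / Cl
```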

  20. Simulations of Continuous Descent Operations with Arrival-management Automation and Mixed Flight-deck Interval Management Equipage

    NASA Technical Reports Server (NTRS)

    Callantine, Todd J.; Kupfer, Michael; Martin, Lynne Hazel; Prevot, Thomas

    2013-01-01

    Air traffic management simulations conducted in the Airspace Operations Laboratory at NASA Ames Research Center have addressed the integration of trajectory-based arrival-management automation, controller tools, and Flight-Deck Interval Management avionics to enable Continuous Descent Operations (CDOs) during periods of sustained high traffic demand. The simulations are devoted to maturing the integrated system for field demonstration, and refining the controller tools, clearance phraseology, and procedures specified in the associated concept of operations. The results indicate a variety of factors impact the concept's safety and viability from a controller's perspective, including en-route preconditioning of arrival flows, useable clearance phraseology, and the characteristics of airspace, routes, and traffic-management methods in use at a particular site. Clear understanding of automation behavior and required shifts in roles and responsibilities is important for controller acceptance and realizing potential benefits. This paper discusses the simulations, drawing parallels with results from related European efforts. The most recent study found en-route controllers can effectively precondition arrival flows, which significantly improved route conformance during CDOs. Controllers found the tools acceptable, in line with previous studies.

  1. Two-stage sequential sampling: A neighborhood-free adaptive sampling procedure

    USGS Publications Warehouse

    Salehi, M.; Smith, D.R.

    2005-01-01

    Designing an efficient sampling scheme for a rare and clustered population is a challenging area of research. Adaptive cluster sampling, which has been shown to be viable for such a population, is based on sampling a neighborhood of units around a unit that meets a specified condition. However, the edge units produced by sampling neighborhoods have proven to limit the efficiency and applicability of adaptive cluster sampling. We propose a sampling design that is adaptive in the sense that the final sample depends on observed values, but it avoids the use of neighborhoods and the sampling of edge units. Unbiased estimators of population total and its variance are derived using Murthy's estimator. The modified two-stage sampling design is easy to implement and can be applied to a wider range of populations than adaptive cluster sampling. We evaluate the proposed sampling design by simulating sampling of two real biological populations and an artificial population for which the variable of interest took the value either 0 or 1 (e.g., indicating presence and absence of a rare event). We show that the proposed sampling design is more efficient than conventional sampling in nearly all cases. The approach used to derive estimators (Murthy's estimator) opens the door for unbiased estimators to be found for similar sequential sampling designs. © 2005 American Statistical Association and the International Biometric Society.

  2. Two-stage revisions for culture-negative infected total knee arthroplasties: A five-year outcome in comparison with one-stage and two-stage revisions for culture-positive cases.

    PubMed

    Li, Heng; Ni, Ming; Li, Xiang; Zhang, Qingmeng; Li, Xin; Chen, Jiying

    2017-03-01

    Culture-negative periprosthetic joint infection (PJI) is very intractable when dealing with an infected total knee arthroplasty (TKA) patient. Two-stage revision has proven to be a reliable solution for PJI patients; whether it remains credible for culture-negative infected patients is uncertain. Our group retrospectively reviewed all total knee revision patients from January 2003 to January 2014; 145 patients were diagnosed with PJI according to the PJI diagnostic criteria, and 129 were successfully followed. As different treatment strategies were utilized, these patients were divided into a culture-negative group (CN, 18 cases), a culture-positive one-stage revision group (CP1, 21 cases) and a culture-positive two-stage revision group (CP2, 87 cases). The CN and CP2 groups underwent two-stage revision with antibiotic-loaded cement spacers and intravenous antibiotics; the CP1 group received one-stage revision. All culture results and relevant medical records were thoroughly reviewed. The mean follow-up time was 59.5 ± 32.3 months (range 12-158 months). The culture-negative rate was 14.2%. The overall infection control rate was 92.12%. Infection recurrence was observed in two cases in the CP1 group (9.09%), six cases in the CP2 group (6.90%) and two cases in the CN group (11.1%). The reinfection rates of culture-negative and culture-positive patients were 11.1% and 7.34%, respectively, with no significant difference (p = 0.94). No statistically significant difference was observed between the CP2 and CN groups (p = 0.90). No spacer fracture or dislocation was observed. With combined or broad-spectrum antibiotics, two-stage revision showed comparable outcomes when treating culture-negative infected TKA patients at five-year follow-up. Copyright © 2016. Published by Elsevier B.V.

  3. Evidence of two-stage melting of Wigner solids

    NASA Astrophysics Data System (ADS)

    Knighton, Talbot; Wu, Zhe; Huang, Jian; Serafin, Alessandro; Xia, J. S.; Pfeiffer, L. N.; West, K. W.

    2018-02-01

    Ultralow carrier concentrations of two-dimensional holes down to p = 1 × 10^9 cm^-2 are realized. Remarkable insulating states are found below a critical density of p_c = 4 × 10^9 cm^-2, or r_s ≈ 40. Sensitive dc V-I measurement as a function of temperature and electric field reveals a two-stage phase transition supporting the melting of a Wigner solid as a two-stage first-order transition.

  4. Adaptive adjustment of interval predictive control based on combined model and application in shell brand petroleum distillation tower

    NASA Astrophysics Data System (ADS)

    Sun, Chao; Zhang, Chunran; Gu, Xinfeng; Liu, Bin

    2017-10-01

    Constraints on the optimization objective often cannot be met when predictive control is applied to an industrial production process, and the online predictive controller may then fail to find a feasible, let alone globally optimal, solution. To solve this problem, based on a Back Propagation-Auto Regressive with exogenous inputs (BP-ARX) combined control model, nonlinear programming is used to examine the feasibility of constrained predictive control; a feasibility decision theorem for the optimization objective is proposed, and a solution method for the soft-constraint slack variables is given for the case where the optimization objective is infeasible. On this basis, to meet interval control requirements on the controlled variables, the solved slack variables are introduced and an adaptive weighted interval predictive control algorithm is proposed, which adaptively regulates the optimization objective, automatically adjusts the infeasible interval range, expands the feasible region, and ensures the feasibility of the interval optimization objective. Finally, the feasibility and effectiveness of the algorithm are validated through comparative simulation experiments.
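
    The slack-variable mechanism can be seen in a toy convex program: when the interval target is infeasible, a heavily penalized slack widens the interval just enough to restore feasibility. The sketch below uses cvxpy with a hypothetical FIR plant model; it mirrors only the soft-constraint idea, not the BP-ARX model or the adaptive weighting of the paper.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import toeplitz

H = 20                              # prediction horizon
h = 0.5 * 0.8 ** np.arange(H)       # hypothetical impulse response
A = np.tril(toeplitz(h))            # maps the control sequence to predicted outputs

u = cp.Variable(H)                  # control moves
s = cp.Variable(nonneg=True)        # slack that widens the interval when needed
y = A @ u                           # predicted controlled variable

lo, hi = 0.9, 1.1                   # interval control requirement
cost = cp.sum_squares(u) + 1e3 * s  # heavy penalty keeps the slack at zero when feasible
constraints = [y >= lo - s, y <= hi + s, cp.abs(u) <= 0.5]
cp.Problem(cp.Minimize(cost), constraints).solve()
print(float(s.value))               # positive value flags an infeasible interval target
```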

  5. Simulation-based training for thoracoscopic lobectomy: a randomized controlled trial: virtual-reality versus black-box simulation.

    PubMed

    Jensen, Katrine; Ringsted, Charlotte; Hansen, Henrik Jessen; Petersen, René Horsleben; Konge, Lars

    2014-06-01

    Video-assisted thoracic surgery is gradually replacing conventional open thoracotomy as the method of choice for the treatment of early-stage non-small cell lung cancers, and thoracic surgical trainees must learn and master this technique. Simulation-based training could help trainees overcome the first part of the learning curve, but no virtual-reality simulators for thoracoscopy are commercially available. This study aimed to investigate whether training on a laparoscopic simulator enables trainees to perform a thoracoscopic lobectomy. Twenty-eight surgical residents were randomized to either virtual-reality training on a nephrectomy module or traditional black-box simulator training. After a retention period they performed a thoracoscopic lobectomy on a porcine model and their performance was scored using a previously validated assessment tool. The groups did not differ in age or gender. All participants were able to complete the lobectomy. The performance of the black-box group was significantly faster during the test scenario than the virtual-reality group: 26.6 min (SD 6.7 min) versus 32.7 min (SD 7.5 min). No difference existed between the two groups when comparing bleeding and anatomical and non-anatomical errors. Simulation-based training and targeted instructions enabled the trainees to perform a simulated thoracoscopic lobectomy. Traditional black-box training was more effective than virtual-reality laparoscopy training. Thus, a dedicated simulator for thoracoscopy should be available before establishing systematic virtual-reality training programs for trainees in thoracic surgery.

  6. SEMIPARAMETRIC ADDITIVE RISKS REGRESSION FOR TWO-STAGE DESIGN SURVIVAL STUDIES

    PubMed Central

    Li, Gang; Wu, Tong Tong

    2011-01-01

    In this article we study a semiparametric additive risks model (McKeague and Sasieni (1994)) for two-stage design survival data where accurate information is available only on second stage subjects, a subset of the first stage study. We derive two-stage estimators by combining data from both stages. Large sample inferences are developed. As a by-product, we also obtain asymptotic properties of the single stage estimators of McKeague and Sasieni (1994) when the semiparametric additive risks model is misspecified. The proposed two-stage estimators are shown to be asymptotically more efficient than the second stage estimators. They also demonstrate smaller bias and variance for finite samples. The developed methods are illustrated using small intestine cancer data from the SEER (Surveillance, Epidemiology, and End Results) Program. PMID:21931467

  7. Highly-sensitive microRNA detection based on bio-bar-code assay and catalytic hairpin assembly two-stage amplification.

    PubMed

    Tang, Songsong; Gu, Yuan; Lu, Huiting; Dong, Haifeng; Zhang, Kai; Dai, Wenhao; Meng, Xiangdan; Yang, Fan; Zhang, Xueji

    2018-04-03

    Herein, a highly sensitive microRNA (miRNA) detection strategy was developed by combining the bio-bar-code assay (BBA) with catalytic hairpin assembly (CHA). In the proposed system, two nanoprobes were designed: magnetic nanoparticles functionalized with DNA probes (MNPs-DNA) and gold nanoparticles carrying numerous barcode DNA strands (AuNPs-DNA). In the presence of target miRNA, the MNPs-DNA and AuNPs-DNA hybridized with the target miRNA to form a "sandwich" structure. After the "sandwich" structures were separated from the solution by a magnetic field and dehybridized at high temperature, the barcode DNA sequences were released by dissolving the AuNPs. The released barcode DNA sequences triggered the toehold-mediated strand displacement assembly of two hairpin probes, recycling the barcode DNA sequences and producing numerous fluorescent CHA products for miRNA detection. Under optimal experimental conditions, the proposed two-stage amplification system could sensitively detect target miRNA ranging from 10 pM to 10 aM with a limit of detection (LOD) down to 97.9 zM. It displayed good capability to discriminate single-base and three-base mismatches owing to the unique sandwich structure. Notably, it showed good feasibility for selective multiplexed detection of various combinations of synthetic miRNA sequences and miRNAs extracted from different cell lysates, with results in agreement with traditional polymerase chain reaction analysis. The two-stage amplification strategy may have significant implications for biological detection and clinical diagnosis. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Two stage catalytic combustor

    NASA Technical Reports Server (NTRS)

    Alvin, Mary Anne (Inventor); Bachovchin, Dennis (Inventor); Smeltzer, Eugene E. (Inventor); Lippert, Thomas E. (Inventor); Bruck, Gerald J. (Inventor)

    2010-01-01

    A catalytic combustor (14) includes a first catalytic stage (30), a second catalytic stage (40), and an oxidation completion stage (49). The first catalytic stage receives an oxidizer (e.g., 20) and a fuel (26) and discharges a partially oxidized fuel/oxidizer mixture (36). The second catalytic stage receives the partially oxidized fuel/oxidizer mixture and further oxidizes the mixture. The second catalytic stage may include a passageway (47) for conducting a bypass portion (46) of the mixture past a catalyst (e.g., 41) disposed therein. The second catalytic stage may have an outlet temperature elevated sufficiently to complete oxidation of the mixture without using a separate ignition source. The oxidation completion stage is disposed downstream of the second catalytic stage and may recombine the bypass portion with a catalyst exposed portion (48) of the mixture and complete oxidation of the mixture. The second catalytic stage may also include a reticulated foam support (50), a honeycomb support, a tube support or a plate support.

  9. Modelling of Two-Stage Methane Digestion With Pretreatment of Biomass

    NASA Astrophysics Data System (ADS)

    Dychko, A.; Remez, N.; Opolinskyi, I.; Kraychuk, S.; Ostapchuk, N.; Yevtieieva, L.

    2018-04-01

    Anaerobic digestion systems should be used for processing organic waste. Managing the anaerobic recycling of organic waste requires reliable prediction of biogas production. Developing a mathematical model of the organic waste digestion process makes it possible to determine the rate of biogas output in a two-stage anaerobic digestion process that takes the first (pretreatment) stage into account. The Contois model is verified against the studied anaerobic processing of organic waste. The dependencies of biogas output, and of its rate, on time are established and may be used to predict the anaerobic processing of organic waste.
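
    A minimal kinetic sketch shows how such a model produces biogas output and its rate over time. The version below integrates Contois-type growth kinetics with a constant gas yield; every parameter value is an illustrative assumption, not one of the paper's calibrated constants.

```python
import numpy as np
from scipy.integrate import solve_ivp

mu_max, K, Y, Yg = 0.3, 0.8, 0.1, 0.45  # 1/d, -, g biomass/g substrate, L gas/g removed
S0, X0 = 20.0, 0.5                      # initial substrate and biomass, g/L

def rhs(t, z):
    S, X = z
    mu = mu_max * S / (K * X + S)       # Contois specific growth rate
    return [-mu * X / Y, mu * X]        # substrate consumed, biomass grown

sol = solve_ivp(rhs, (0, 30), [S0, X0], dense_output=True)
t = np.linspace(0, 30, 121)
S = sol.sol(t)[0]
gas = Yg * (S0 - S)                     # cumulative biogas from substrate removed
rate = np.gradient(gas, t)              # biogas production rate versus time
```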

  10. Comparisons of angularly and spectrally resolved Bremsstrahlung measurements to two-dimensional multi-stage simulations of short-pulse laser-plasma interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, C. D.; Kemp, A. J.; Pérez, F.

    2013-05-15

    A 2-D multi-stage simulation model incorporating realistic laser conditions and a fully resolved electron distribution handoff has been developed and compared to angularly and spectrally resolved Bremsstrahlung measurements from high-Z planar targets. For near-normal incidence and 0.5-1 × 10^20 W/cm^2 intensity, particle-in-cell (PIC) simulations predict the existence of a high energy electron component consistently directed away from the laser axis, in contrast with previous expectations for oblique irradiation. Measurements of the angular distribution are consistent with a high energy component when directed along the PIC-predicted direction, as opposed to between the target normal and laser axis as previously measured.

  11. Application of the two-stage clonal expansion model in characterizing the joint effect of exposure to two carcinogens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zielinski, J.M.; Krewski, D.

    1992-12-31

    In this paper, we describe application of the two-stage clonal expansion model to characterize the joint effect of exposure to two carcinogens. This biologically based model of carcinogenesis provides a useful framework for the quantitative description of carcinogenic risks and for defining agents that act as initiators, promoters, and completers. Depending on the mechanism of action, the agent-specific relative risk following exposure to two carcinogens can be additive, multiplicative, or supramultiplicative, with supra-additive relative risk indicating a synergistic effect between the two agents. Maximum-likelihood methods for fitting the two-stage clonal expansion model with intermittent exposure to two carcinogens are described and illustrated, using data on lung-cancer mortality among Colorado uranium miners exposed to both radon and tobacco smoke.

  12. Anti-kindling Induced by Two-Stage Coordinated Reset Stimulation with Weak Onset Intensity

    PubMed Central

    Zeitler, Magteld; Tass, Peter A.

    2016-01-01

    Abnormal neuronal synchrony plays an important role in a number of brain diseases. To specifically counteract abnormal neuronal synchrony by desynchronization, Coordinated Reset (CR) stimulation, a spatiotemporally patterned stimulation technique, was designed with computational means. In neuronal networks with spike timing–dependent plasticity CR stimulation causes a decrease of synaptic weights and finally anti-kindling, i.e., unlearning of abnormally strong synaptic connectivity and abnormal neuronal synchrony. Long-lasting desynchronizing aftereffects of CR stimulation have been verified in pre-clinical and clinical proof of concept studies. In general, for different neuromodulation approaches, both invasive and non-invasive, it is desirable to enable effective stimulation at reduced stimulation intensities, thereby avoiding side effects. For the first time, we here present a two-stage CR stimulation protocol, where two qualitatively different types of CR stimulation are delivered one after another, and the first stage comes at a particularly weak stimulation intensity. Numerical simulations show that a two-stage CR stimulation can induce the same degree of anti-kindling as a single-stage CR stimulation with intermediate stimulation intensity. This stimulation approach might be clinically beneficial in patients suffering from brain diseases characterized by abnormal neuronal synchrony where a first treatment stage should be performed at particularly weak stimulation intensities in order to avoid side effects. This might, e.g., be relevant in the context of acoustic CR stimulation in tinnitus patients with hyperacusis or in the case of electrical deep brain CR stimulation with sub-optimally positioned leads or side effects caused by stimulation of the target itself. We discuss how to apply our method in first in man and proof of concept studies. PMID:27242500

  13. The design of two-stage-to-orbit vehicles

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Two separate student design groups developed conceptual designs for a two-stage-to-orbit vehicle, with each design group consisting of a carrier team and an orbiter team. A two-stage-to-orbit system is considered in the event that single-stage-to-orbit is deemed not feasible in the foreseeable future; the two-stage system would also be used as a complement to an already existing heavy lift vehicle. The design specifications given are to lift a 10,000-lb payload, 27 ft long by 10 ft in diameter, to low Earth orbit (300 n.m.) using an air-breathing carrier configuration that will take off horizontally within 15,000 ft. The staging Mach number and altitude were to be determined by the design groups. One group designed a delta wing/body carrier with the orbiter nested within the fuselage of the carrier, and the other group produced a blended cranked-delta wing/body carrier with the orbiter in the more conventional piggyback configuration. Each carrier used liquid hydrogen-fueled turbofan-ramjet engines, with data provided by the General Electric Aircraft Engine Group. While one orbiter used a full-scale Space Shuttle Main Engine (SSME), the other orbiter employed a half-scale SSME coupled with scramjet engines, with data again provided by General Electric. The two groups' conceptual designs, along with the technical trade-offs, difficulties, and details that surfaced during the design process, are presented.

  14. Two-Stage Variable Sample-Rate Conversion System

    NASA Technical Reports Server (NTRS)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
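
    A common realization is a fixed polyphase stage for the coarse rational conversion followed by a fractional-delay stage for the residual, possibly irrational, ratio. The sketch below uses scipy's polyphase resampler plus linear interpolation; the rates, and the choice of a linear rather than Farrow-type interpolator, are illustrative assumptions.

```python
import numpy as np
from scipy.signal import resample_poly

fs_in, fs_out = 100e6, 36.123e6         # hypothetical input and output rates
x = np.random.randn(100000)             # stand-in for the sampled IF signal

# Stage 1: fixed rational conversion close to the target (here, divide by 2).
y1 = resample_poly(x, up=1, down=2)
fs1 = fs_in / 2

# Stage 2: arbitrary fine ratio via linear (fractional-delay) interpolation,
# assuming stage 1 left enough oversampling margin for this to be accurate.
step = fs1 / fs_out                     # output spacing in stage-1 samples
idx = np.arange(0, len(y1) - 1, step)
i0 = idx.astype(int)
frac = idx - i0
y2 = (1 - frac) * y1[i0] + frac * y1[i0 + 1]
```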

  15. Correcting bias due to missing stage data in the non-parametric estimation of stage-specific net survival for colorectal cancer using multiple imputation.

    PubMed

    Falcaro, Milena; Carpenter, James R

    2017-06-01

    Population-based net survival by tumour stage at diagnosis is a key measure in cancer surveillance. Unfortunately, data on tumour stage are often missing for a non-negligible proportion of patients and the mechanism giving rise to the missingness is usually anything but completely at random. In this setting, restricting analysis to the subset of complete records gives typically biased results. Multiple imputation is a promising practical approach to the issues raised by the missing data, but its use in conjunction with the Pohar-Perme method for estimating net survival has not been formally evaluated. We performed a resampling study using colorectal cancer population-based registry data to evaluate the ability of multiple imputation, used along with the Pohar-Perme method, to deliver unbiased estimates of stage-specific net survival and recover missing stage information. We created 1000 independent data sets, each containing 5000 patients. Stage data were then made missing at random under two scenarios (30% and 50% missingness). Complete records analysis showed substantial bias and poor confidence interval coverage. Across both scenarios our multiple imputation strategy virtually eliminated the bias and greatly improved confidence interval coverage. In the presence of missing stage data complete records analysis often gives severely biased results. We showed that combining multiple imputation with the Pohar-Perme estimator provides a valid practical approach for the estimation of stage-specific colorectal cancer net survival. As usual, when the percentage of missing data is high the results should be interpreted cautiously and sensitivity analyses are recommended. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Dislocation based controlling of kinematic hardening contribution to simulate primary and secondary stages of uniaxial ratcheting

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, S.; Dhar, S.; Acharyya, S. K.

    2017-07-01

    The primary and secondary stages of the uniaxial ratcheting curve for the C-Mn steel SA333 have been investigated. Stress-controlled uniaxial ratcheting experiments were conducted with different mean stresses and stress amplitudes to obtain curves showing the evolution of ratcheting strain with number of cycles. In stage I of the ratcheting curve, a large accumulation of ratcheting strain occurs, but at a decreasing rate. In contrast, in stage II a smaller accumulation of ratcheting strain is found and the ratcheting rate becomes almost constant. Transmission electron microscope observations reveal that no specific dislocation structures are developed during the early stages of ratcheting. Rather, compared with the case of low cycle fatigue, sub-cell formation is observed to be delayed in the case of ratcheting. The increase in dislocation density as a result of the ratcheting strain is obtained using the Orowan equation. The ratcheting strain is obtained from the shift of the plastic strain memory surface. The dislocation rearrangement is incorporated in a functional form of dislocation density, which is used to calibrate the parameters of a kinematic hardening law. The observations are formulated in a material model, plugged into the ABAQUS finite element (FE) platform as a user material subroutine. Finally, the FE-simulated ratcheting curves are compared with the experimental curves.

  17. Re-evaluation of link between interpregnancy interval and adverse birth outcomes: retrospective cohort study matching two intervals per mother

    PubMed Central

    Pereira, Gavin; Jacoby, Peter; de Klerk, Nicholas; Stanley, Fiona J

    2014-01-01

    Objective To re-evaluate the causal effect of interpregnancy interval on adverse birth outcomes, on the basis that previous studies relying on between mother comparisons may have inadequately adjusted for confounding by maternal risk factors. Design Retrospective cohort study using conditional logistic regression (matching two intervals per mother so each mother acts as her own control) to model the incidence of adverse birth outcomes as a function of interpregnancy interval; additional unconditional logistic regression with adjustment for confounders enabled comparison with the unmatched design of previous studies. Setting Perth, Western Australia, 1980-2010. Participants 40 441 mothers who each delivered three liveborn singleton neonates. Main outcome measures Preterm birth (<37 weeks), small for gestational age birth (<10th centile of birth weight by sex and gestational age), and low birth weight (<2500 g). Results Within mother analysis of interpregnancy intervals indicated a much weaker effect of short intervals on the odds of preterm birth and low birth weight compared with estimates generated using a traditional between mother analysis. The traditional unmatched design estimated an adjusted odds ratio for an interpregnancy interval of 0-5 months (relative to the reference category of 18-23 months) of 1.41 (95% confidence interval 1.31 to 1.51) for preterm birth, 1.26 (1.15 to 1.37) for low birth weight, and 0.98 (0.92 to 1.06) for small for gestational age birth. In comparison, the matched design showed a much weaker effect of short interpregnancy interval on preterm birth (odds ratio 1.07, 0.86 to 1.34) and low birth weight (1.03, 0.79 to 1.34), and the effect for small for gestational age birth remained small (1.08, 0.87 to 1.34). Both the unmatched and matched models estimated a high odds of small for gestational age birth and low birth weight for long interpregnancy intervals (longer than 59 months), but the estimated effect of long interpregnancy

  18. Simulation and Validation of Injection-Compression Filling Stage of Liquid Moulding with Fast Curing Resins

    NASA Astrophysics Data System (ADS)

    Martin, Ffion A.; Warrior, Nicholas A.; Simacek, Pavel; Advani, Suresh; Hughes, Adrian; Darlington, Roger; Senan, Eissa

    2018-03-01

    Very short manufacturing cycle times are required if continuous carbon fibre and epoxy composite components are to be economically viable solutions for high-volume composite production in the automotive industry. Here, a manufacturing process variant of resin transfer moulding (RTM) targets a reduction of in-mould manufacture time by reducing the time to inject and cure components. The process involves two stages: resin injection followed by compression. A flow simulation methodology using an RTM solver for the process has been developed. This paper compares the simulation predictions to experiments performed using industrial equipment. Issues encountered during manufacturing are included in the simulation, and the sensitivity of the process to them is explored.

  19. Performance evaluation of two-stage fuel cycle from SFR to PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fei, T.; Hoffman, E.A.; Kim, T.K.

    2013-07-01

    One potential fuel cycle option being considered is a two-stage fuel cycle system involving the continuous recycle of transuranics in a fast reactor and the use of bred plutonium in a thermal reactor. The first stage is a Sodium-cooled Fast Reactor (SFR) fuel cycle with metallic U-TRU-Zr fuel. The SFRs need to have a breeding ratio greater than 1.0 in order to produce fissile material for use in the second stage. The second stage is a PWR fuel cycle with uranium and plutonium mixed oxide fuel based on the design and performance of the current state-of-the-art commercial PWRs with an average discharge burnup of 50 MWd/kgHM. This paper evaluates the possibility of this fuel cycle option and discusses its fuel cycle performance characteristics. The study focuses on an equilibrium stage of the fuel cycle. Results indicate that, in order to avoid a positive coolant void reactivity feedback in the stage-2 PWR, the reactor requires high-quality plutonium from the first stage, and the minor actinides in the discharge fuel of the PWR need to be separated and sent back to the stage-1 SFR. The electricity-sharing ratio between the two stages is 87.0% (SFR) to 13.0% (PWR) for a TRU inventory ratio (the mass of TRU in the discharge fuel divided by the mass of TRU in the fresh fuel) of 1.06. A sensitivity study indicated that increasing the TRU inventory ratio to 1.13 raises the electricity generation fraction of the stage-2 PWR to 28.9%. The two-stage fuel cycle system considered in this study was found to provide a high uranium utilization (>80%). (authors)

  20. Two-Stage Categorization in Brand Extension Evaluation: Electrophysiological Time Course Evidence

    PubMed Central

    Wang, Xiaoyi

    2014-01-01

    A brand name can be considered a mental category. Similarity-based categorization theory has been used to explain how consumers judge a new product as a member of a known brand, a process called brand extension evaluation. This study was an event-related potential study conducted in two experiments. The study found a two-stage categorization process reflected by the P2 and N400 components in brand extension evaluation. In experiment 1, a prime–probe paradigm presented pairs consisting of a brand name and a product name in three conditions, i.e., in-category extension, similar-category extension, and out-of-category extension. Although the task was unrelated to brand extension evaluation, P2 distinguished out-of-category extensions from similar-category and in-category ones, and N400 distinguished similar-category extensions from in-category ones. In experiment 2, a prime–probe paradigm with a related task was used, in which product names included subcategory and major-category product names. The N400 elicited by subcategory products was significantly more negative than that elicited by major-category products, with no salient difference in P2. We speculate that P2 reflects early, low-level, similarity-based processing in the first stage, whereas N400 reflects late, analytic, category-based processing in the second stage. PMID:25438152

  1. Life-Stage Physiologically-Based Pharmacokinetic (PBPK) ...

    EPA Pesticide Factsheets

    This presentation discusses methods used to extrapolate from in vitro high-throughput screening (HTS) toxicity data for an endocrine pathway to in vivo for early life stages in humans, and the use of a life stage PBPK model to address rapidly changing physiological parameters. Adverse outcome pathways (AOPs), in this case endocrine disruption during development, provide a biologically-based framework for linking molecular initiating events triggered by chemical exposures to key events leading to adverse outcomes. The application of AOPs to human health risk assessment requires extrapolation of in vitro HTS toxicity data to in vivo exposures (IVIVE) in humans, which can be achieved through the use of a PBPK/PD model. Exposure scenarios for chemicals in the PBPK/PD model will consider both placental and lactational transfer of chemicals, with a focus on age dependent dosimetry during fetal development and after birth for a nursing infant. This talk proposes a universal life-stage computational model that incorporates changing physiological parameters to link environmental exposures to in vitro levels of HTS assays related to a developmental toxicological AOP for vascular disruption. In vitro toxicity endpoints discussed are based on two mechanisms: 1) Fetal vascular disruption, and 2) Neurodevelopmental toxicity induced by altering thyroid hormone levels in neonates via inhibition of thyroperoxidase in the thyroid gland. Application of our Life-stage computati

  2. Runway Operations Planning: A Two-Stage Solution Methodology

    NASA Technical Reports Server (NTRS)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. Thus, ROP is a critical component of airport operations planning in general and surface operations planning in particular. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, may be incorporated into the ROP solution subject to constraints introduced by Air Traffic Control (ATC) procedures. Generating optimal runway operations plans has previously been approached with a 'one-stage' optimization routine that considered all the desired objectives and constraints, and the characteristics of each aircraft (weight class, destination, ATC constraints), at the same time. Since, however, at any given point in time there is less uncertainty in the predicted demand for departure resources in terms of weight class than in terms of specific aircraft, the ROP problem can be parsed into two stages. In the context of the Departure Planner (DP) research project, this paper introduces ROP as part of the wider Surface Operations Optimization (SOO) and describes a proposed 'two-stage' heuristic algorithm for solving the ROP problem. Focus is specifically given to including runway crossings in the planning process of runway operations. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the

  3. Design and control of a decoupled two degree of freedom translational parallel micro-positioning stage.

    PubMed

    Lai, Lei-Jie; Gu, Guo-Ying; Zhu, Li-Min

    2012-04-01

    This paper presents a novel decoupled two degrees of freedom (2-DOF) translational parallel micro-positioning stage. The stage consists of a monolithic compliant mechanism driven by two piezoelectric actuators. The end-effector of the stage is connected to the base by four independent kinematic limbs. Two types of compound flexure module are serially connected to provide 2-DOF for each limb. The compound flexure modules and the mirror-symmetric distribution of the four limbs significantly reduce the input and output cross couplings and the parasitic motions. Based on the stiffness matrix method, static and dynamic models are constructed and optimal design is performed under certain constraints. Finite element analysis results are then given to validate the design model, and a prototype of the XY stage is fabricated for performance tests. Open-loop tests show that the maximum static and dynamic cross couplings between the two linear motions are below 0.5% and -45 dB, which are low enough to permit single-input-single-output (SISO) control strategies. Finally, according to the identified dynamic model, an inversion-based feedforward controller in conjunction with a proportional-integral-derivative controller is applied to compensate for the nonlinearities and uncertainties. The experimental results show that good positioning and tracking performances are achieved, which verifies the effectiveness of the proposed mechanism and controller design. The resonant frequencies of the loaded stage at 2 kg and 5 kg are 105 Hz and 68 Hz, respectively. Therefore, the performance of the stage is reasonably good in terms of a 200 N load capacity. © 2012 American Institute of Physics.

  4. Nutrient Removal Benefits of Two-Stage Ditches

    NASA Astrophysics Data System (ADS)

    Liu, X.; Ward, A.

    2016-12-01

    Annually, about one-third of the corn and soybeans in the world is grown in the North Central Region of the United States. Water quality problems associated with these production systems are caused by: (1) discharges of dissolved reactive phosphorus into the Great Lakes and inland water bodies; and (2) discharges of nitrogen into the Gulf of Mexico. These discharges have caused large blue-green algal blooms in freshwater systems and hypoxia, particularly in the Gulf of Mexico. Much of the region has poorly drained soils that necessitate the use of subsurface drainage systems to make the fields farmable. These drains discharge into agricultural ditches that are usually 2-5 m deep and 10 to 20 m wide. These oversized ditches often form small grassed benches in their lower third. A common maintenance practice is to periodically clean out these deposits. However, in the last 15 years a new practice has been developed by one of the co-authors. This practice does not disturb the lower portion of the ditch but widens the top portion to make the benches larger. This floodplain development practice is known as the two-stage ditch concept. The approach results in the ditches acting as intermittent linear wetlands. The practice is eligible for cost-sharing funding as a water quality Best Management Practice in Indiana and Ohio. This presentation will provide a summary of the research that has been conducted on two-stage ditches, and in particular their nutrient removal potential. In addition, results of a new controlled study on the nitrogen and phosphorus removal performance of a two-stage ditch will be presented. This study introduced water with fixed concentrations of each nutrient into a two-stage ditch. Measurements were made of: (1) the retention time in the system; (2) changes in the surface water quality; and (3) changes in the water quality and water level elevations in nested piezometers and monitoring wells located in the benches and banks of the two-stage ditch.

  5. A Decision-making Model for a Two-stage Production-delivery System in SCM Environment

    NASA Astrophysics Data System (ADS)

    Feng, Ding-Zhong; Yamashiro, Mitsuo

    A decision-making model is developed for an optimal production policy in a two-stage production-delivery system that incorporates a fixed-quantity supply of finished goods to a buyer at a fixed interval of time. First, a general cost model is formulated considering both the supplier (of raw materials) and buyer (of finished products) sides. An optimal solution to the problem is then derived on the basis of the cost model. Using the proposed model and its optimal solution, one can determine the optimal production lot size for each stage, the optimal number of transfers of semi-finished goods, and the optimal quantity of semi-finished goods transported each time to meet the lumpy demand of consumers. We also examine the sensitivity of raw material ordering and production lot size to changes in ordering cost, transportation cost and manufacturing setup cost. A pragmatic computational approach is proposed to obtain integer approximations of the optimal solution for operational situations. Finally, we give some numerical examples.
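
    The structure of the trade-off is easy to see by enumeration: each extra transfer of semi-finished goods adds transport cost but cuts stage-1 inventory. The sketch below evaluates a generic total-cost function over candidate lot sizes and transfer counts; the cost structure and every figure in it are assumptions for illustration, not the paper's model or its closed-form optimum.

```python
D = 4800                  # annual demand, units (illustrative)
A1, A2 = 300.0, 120.0     # per-lot setup costs at stages 1 and 2
F = 25.0                  # fixed transport cost per transfer of semi-finished goods
h1, h2 = 2.0, 3.0         # holding cost per unit per year at each stage

def annual_cost(Q, n):
    setups = D / Q * (A1 + A2 + n * F)   # setups and transports per year
    holding = Q / 2 * (h1 / n + h2)      # smaller transfer batches cut stage-1 stock
    return setups + holding

best = min((annual_cost(Q, n), Q, n)
           for n in range(1, 11)
           for Q in range(50, 2001, 10))
print(best)               # (minimum annual cost, lot size Q, transfers per lot n)
```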

  6. Improving a stage forecasting Muskingum model by relating local stage and remote discharge

    NASA Astrophysics Data System (ADS)

    Barbetta, S.; Moramarco, T.; Melone, F.; Brocca, L.

    2009-04-01

    Following the principle of parameter parsimony, simplified models for flood forecasting based only on flood routing have been developed for flood-prone sites located downstream of a gauged station, at a distance allowing an appropriate forecasting lead-time. In this context, the Muskingum model can be a useful tool. However, critical points in hydrological routing are the representation of the lateral inflow contribution and knowledge of the stage-discharge relationships. As regards the former, O'Donnell (O'Donnell, T., 1985. A direct three-parameter Muskingum procedure incorporating lateral inflow, Hydrol. Sci. J., 30[4/12], 479-496) proposed a three-parameter Muskingum procedure assuming the lateral inflows proportional to the contribution entering upstream. Using this approach, Franchini and Lamberti (Franchini, M. & Lamberti, P., 1994. A flood routing Muskingum type simulation and forecasting model based on level data alone, Water Resour. Res., 30[7], 2183-2196) presented a simple Muskingum-type model to provide forecast water levels at the downstream end by selecting a routing time interval and, hence, a forecasting lead-time that allows the forecast stage to be expressed as a function of observed quantities only. Moramarco et al. (Moramarco, T., Barbetta, S., Melone, F. & Singh, V.P., 2006. A real-time stage Muskingum forecasting model for a site without rating curve, Hydrol. Sci. J., 51[1], 66-82) enhanced the modeling scheme by incorporating a procedure for adapting the parameter linked to lateral inflows. This last model, called STAFOM (STAge FOrecasting Model), was also extended to a schematization of two connected river branches in order to significantly improve the forecasting lead-time. The STAFOM model provided satisfactory results for most of the analysed flood events observed in different river reaches in the Upper-Middle Tiber River basin in Central Italy. However, the analysis highlighted that the stage forecast should be enhanced when sudden modifications
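
    For reference, the classic Muskingum recursion underlying these schemes is compact. The sketch below routes an inflow hydrograph through a single reach; the lateral-inflow term, which STAFOM estimates and adapts in real time, is omitted here.

```python
import numpy as np

def muskingum_route(inflow, K, x, dt):
    """Route a hydrograph with O2 = C0*I2 + C1*I1 + C2*O1.

    K is the storage constant, x the weighting factor; for non-negative
    coefficients choose dt with 2*K*x <= dt <= 2*K*(1 - x).
    """
    denom = 2 * K * (1 - x) + dt
    C0 = (dt - 2 * K * x) / denom
    C1 = (dt + 2 * K * x) / denom
    C2 = (2 * K * (1 - x) - dt) / denom
    out = np.empty(len(inflow))
    out[0] = inflow[0]                  # assume initial steady state
    for i in range(1, len(inflow)):
        out[i] = C0 * inflow[i] + C1 * inflow[i - 1] + C2 * out[i - 1]
    return out
```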

  7. A two-stage method of quantitative flood risk analysis for reservoir real-time operation using ensemble-based hydrologic forecasts

    NASA Astrophysics Data System (ADS)

    Liu, P.

    2013-12-01

    Quantitative analysis of the risk of reservoir real-time operation is a hard task owing to the difficulty of accurately describing inflow uncertainties. Ensemble-based hydrologic forecasts depict the inflows via scenarios, capturing not only the marginal distributions but also their persistence. This motivates us to analyze reservoir real-time operating risk with ensemble-based hydrologic forecasts as inputs. A method is developed that uses the forecast horizon point to divide the future time into two stages, the forecast lead-time and the unpredicted time. The risk within the forecast lead-time is computed by counting the number of failing forecast scenarios, and the risk in the unpredicted time is estimated by reservoir routing with the design floods and the reservoir water levels at the forecast horizon point. As a result, a two-stage risk analysis method is set up that quantifies the entire flood risk as the ratio of the number of scenarios that exceed the critical value to the total number of scenarios. China's Three Gorges Reservoir (TGR) is selected as a case study, where parameter and precipitation uncertainties are incorporated to produce ensemble-based hydrologic forecasts. Bayesian inference, via Markov chain Monte Carlo, is used to account for the parameter uncertainty. Two reservoir operation schemes, the one actually operated and a scenario-optimization scheme, are evaluated in terms of flood risk and hydropower profit. With the 2010 flood, it is found that improving hydrologic forecast accuracy does not necessarily decrease the reservoir real-time operation risk, and that most of the risk comes from the forecast lead-time. It is therefore valuable, for reservoir operation purposes, to decrease the variance of ensemble-based hydrologic forecasts while keeping their bias small.
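
    The counting definition of risk is direct to implement once the ensemble traces and the routed design-flood peaks are available. The sketch below assumes only array shapes; it reproduces the two-stage counting logic, not TGR's actual routing model.

```python
import numpy as np

def two_stage_risk(scenarios, routed_peaks, critical_level):
    """Fraction of ensemble members that exceed the critical level.

    scenarios    : (n_members, n_steps) forecast water levels in the lead-time
    routed_peaks : (n_members,) peak level from routing the design flood
                   starting at each member's forecast-horizon state
    """
    fail_lead = scenarios.max(axis=1) >= critical_level   # stage 1: lead-time
    fail_after = routed_peaks >= critical_level           # stage 2: unpredicted time
    return np.mean(fail_lead | fail_after)
```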

  8. A two-step automatic sleep stage classification method with dubious range detection.

    PubMed

    Sousa, Teresa; Cruz, Aniana; Khalighi, Sirvan; Pires, Gabriel; Nunes, Urbano

    2015-04-01

    The limitations of current systems for automatic sleep stage classification (ASSC) are essentially related to the similarities between epochs from different sleep stages and to inter-subject variability. Several studies have already identified the situations with the highest likelihood of misclassification in sleep scoring. Here, we took advantage of such information to develop an ASSC system based on knowledge of the subject-to-subject variability of some indicators that characterize sleep stages, and on the American Academy of Sleep Medicine (AASM) rules. An ASSC system consisting of a two-step classifier is proposed. In the first step, epochs are classified using support vector machines (SVMs) spread across different nodes of a decision tree. In the post-processing step, epochs suspected of misclassification (dubious classification) are tagged and a new classification is suggested. Identification and correction are based on the AASM rules and on the misclassifications most commonly found or reported in automatic sleep staging. Six electroencephalographic and two electrooculographic channels were used to classify wake, non-rapid eye movement (NREM) sleep--N1, N2 and N3, and rapid eye movement (REM) sleep. The proposed system was tested on a dataset of 14 clinical polysomnographic records of subjects suspected of apnea disorders. Wake and REM epochs not falling in the dubious range are classified with accuracy levels compatible with the requirements for clinical applications. The suggested correction assigned to the epochs that are tagged as dubious enhances the global results for all sleep stages. This approach provides reliable sleep staging results for non-dubious epochs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Simulation of the Beating Heart Based on Physically Modeling a Deformable Balloon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rohmer, Damien; Sitek, Arkadiusz; Gullberg, Grant T.

    2006-07-18

    The motion of the beating heart is complex and creates artifacts in SPECT and x-ray CT images. Phantoms such as the Jaszczak Dynamic Cardiac Phantom are used to simulate cardiac motion for evaluation of acquisition and data processing protocols used for cardiac imaging. Two concentric elastic membranes filled with water are connected to tubing and a pump apparatus for creating fluid flow in and out of the inner volume to simulate the motion of the heart. In the present report, the movement of two concentric balloons is solved numerically in order to create a computer simulation of the motion of the moving membranes in the Jaszczak Dynamic Cardiac Phantom. A system of differential equations, based on the physical properties, determines the motion. Two methods are tested for solving the system of differential equations. The results of both methods are similar, providing a final shape that does not converge to a trivial circular profile. Finally, a tomographic imaging simulation is performed by acquiring static projections of the moving shape and reconstructing the result to observe motion artifacts. Two cases are taken into account: in one case each projection angle is sampled for a short time interval, and in the other case for a longer time interval. The longer sampling acquisition shows a clear improvement in decreasing the tomographic streaking artifacts.

  10. A Bayesian predictive two-stage design for phase II clinical trials.

    PubMed

    Sambucini, Valeria

    2008-04-15

    In this paper, we propose a Bayesian two-stage design for phase II clinical trials, which represents a predictive version of the single threshold design (STD) recently introduced by Tan and Machin. The STD two-stage sample sizes are determined specifying a minimum threshold for the posterior probability that the true response rate exceeds a pre-specified target value and assuming that the observed response rate is slightly higher than the target. Unlike the STD, we do not refer to a fixed experimental outcome, but take into account the uncertainty about future data. In both stages, the design aims to control the probability of getting a large posterior probability that the true response rate exceeds the target value. Such a probability is expressed in terms of prior predictive distributions of the data. The performance of the design is based on the distinction between analysis and design priors, recently introduced in the literature. The properties of the method are studied when all the design parameters vary.
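
    As a concrete illustration of the kind of posterior computation such designs control, the posterior probability that the true response rate exceeds a target under a conjugate Beta prior can be computed in a few lines. This is a sketch with made-up numbers, not the paper's actual design parameters.

    ```python
    from scipy import stats

    # Beta(a, b) prior on the response rate; x responders out of n patients.
    a, b = 1.0, 1.0          # illustrative flat prior
    n, x = 25, 12            # illustrative first-stage data
    target = 0.30            # pre-specified target response rate

    # Posterior is Beta(a + x, b + n - x); P(p > target | data):
    posterior_prob = stats.beta.sf(target, a + x, b + n - x)
    print(posterior_prob)    # compare against the minimum threshold
    ```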

  11. Interval Predictor Models for Data with Measurement Uncertainty

    NASA Technical Reports Server (NTRS)

    Lacerda, Marcio J.; Crespo, Luis G.

    2017-01-01

    An interval predictor model (IPM) is a computational model that predicts the range of an output variable given input-output data. This paper proposes strategies for constructing IPMs based on semidefinite programming and sum of squares (SOS). The models are optimal in the sense that they yield an interval-valued function of minimal spread containing all the observations. Two different scenarios are considered. The first is applicable to situations where the data are measured precisely, whereas the second is applicable to data subject to known biases and measurement error. In the latter case, the IPMs are designed to fully contain regions in the input-output space where the data are expected to fall. Moreover, we propose a strategy for reducing the computational cost associated with generating IPMs as well as means to simulate them. Numerical examples illustrate the usage and performance of the proposed formulations.
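
    A minimal linear-programming analogue of this idea (a sketch only, not the paper's semidefinite/SOS formulation) fits an affine center with the smallest constant half-width g such that every observation lies inside the band.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 1.0, 40)
    y = 2.0 * x + 0.3 * rng.uniform(-1.0, 1.0, size=x.size)

    # Variables: [theta0, theta1, g]; minimize the half-width g subject to
    #   -g <= y_i - (theta0 + theta1 * x_i) <= g  for all i.
    c = np.array([0.0, 0.0, 1.0])
    A = np.block([
        [np.ones((x.size, 1)),  x[:, None],  -np.ones((x.size, 1))],  # center - g <= y
        [-np.ones((x.size, 1)), -x[:, None], -np.ones((x.size, 1))],  # -center - g <= -y
    ])
    b_ub = np.concatenate([y, -y])
    res = linprog(c, A_ub=A, b_ub=b_ub,
                  bounds=[(None, None), (None, None), (0, None)])
    theta0, theta1, g = res.x
    print(theta0, theta1, g)  # band (theta0 + theta1*x) +/- g contains all points
    ```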

  12. Optimization Study of the Ames 0.5 Two-Stage Light Gas Gun

    NASA Technical Reports Server (NTRS)

    Bogdanoff, D. W.

    1996-01-01

    There is a need for more faithful simulation of space debris impacts on various space vehicles. Space debris impact velocities can range up to 14 km/sec, and conventional two-stage light gas guns with moderately heavy saboted projectiles are limited to launch velocities of 7-8 km/sec. Any increase in launch velocity will result in more faithful simulations of debris impacts. It would also be valuable to reduce the maximum gun and projectile base pressures and the gun barrel erosion rate. In this paper, the results of a computational fluid dynamics (CFD) study designed to optimize the performance of the NASA Ames 0.5" gun by systematically varying seven gun operating parameters are reported. Particularly beneficial effects were predicted to occur if (1) the piston mass was decreased together with the powder mass and the hydrogen fill pressure and (2) the pump tube length was decreased. The optimum set of changes in gun operating conditions was predicted to produce an increase in muzzle velocity of 0.7-1.0 km/sec, simultaneously with a substantial decrease in gun erosion. Preliminary experimental data have validated the code predictions. Velocities of up to 8.2 km/sec with a 0.475 cm diameter saboted aluminum sphere have been obtained, along with large reductions in gun erosion rates.

  13. A systematic review of the evidence for single stage and two stage revision of infected knee replacement

    PubMed Central

    2013-01-01

    Background Periprosthetic infection about the knee is a devastating complication that may affect between 1% and 5% of knee replacements. With over 79 000 knee replacements being implanted each year in the UK, periprosthetic joint infection (PJI) is set to become an important burden of disease and cost to the healthcare economy. One of the important controversies in the treatment of PJI is whether a single-stage revision operation is superior to a two-stage procedure. This study sought to systematically evaluate the published evidence to determine which technique has the lowest reinfection rate. Methods A systematic review of the literature was undertaken using the MEDLINE and EMBASE databases with the aim of identifying existing studies that present the outcomes of each surgical technique. Reinfection rate was the primary outcome measure. Studies of specific subsets of patients, such as resistant organisms, were excluded. Results 63 studies were identified that met the inclusion criteria, the majority of which (58) were reports of two-stage revision. Reinfection rates varied between 0% and 41% in two-stage studies, and between 0% and 11% in single-stage studies. No clinical trials were identified and the majority of studies were observational. Conclusions Evidence for both one-stage and two-stage revision is largely of low quality. The evidence base for two-stage revision is significantly larger, and further work on direct comparison between the two techniques should be undertaken as a priority. PMID:23895421

  14. Binary Interval Search: a scalable algorithm for counting interval intersections.

    PubMed

    Layer, Ryan M; Skadron, Kevin; Robins, Gabriel; Hall, Ira M; Quinlan, Aaron R

    2013-01-01

    The comparison of diverse genomic datasets is fundamental to understand genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. https://github.com/arq5x/bits.
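
    The core counting trick behind this style of algorithm can be sketched in a few lines of Python. This is a simplified serial version for one query, not the paper's parallel implementation: an interval fails to intersect the query only if it ends before the query starts or starts after the query ends.

    ```python
    from bisect import bisect_left, bisect_right

    def count_intersections(intervals, query):
        """Count closed intervals [s, e] intersecting query [qs, qe]
        using binary searches over sorted starts and ends."""
        starts = sorted(s for s, _ in intervals)
        ends = sorted(e for _, e in intervals)
        qs, qe = query
        ending_before = bisect_left(ends, qs)                     # e < qs
        starting_after = len(starts) - bisect_right(starts, qe)   # s > qe
        return len(intervals) - ending_before - starting_after

    print(count_intersections([(1, 5), (4, 8), (10, 12)], (5, 9)))  # -> 2
    ```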

  15. A Two-Stage-to-Orbit Spaceplane Concept With Growth Potential

    NASA Technical Reports Server (NTRS)

    Mehta, Unmeel B.; Bowles, Jeffrey V.

    2001-01-01

    A two-stage-to-orbit (TSTO) spaceplane concept developed in 1993 is revisited, and new information is provided to assist in the development of the next-generation space transportation vehicles. The design philosophy, TSTO spaceplane concept, and design method are briefly described. A trade study between cold and hot structures leads to the choice of cold structures with external thermal protection systems. The optimal Mach number for staging the second stage of the TSTO spaceplane (with air-breathing propulsion on the first stage) is 10, based on life-cycle cost analysis. The performance and specification of a prototype/experimental (P/X) TSTO spaceplane with a turbo/ram/scramjet propulsion system and built-in growth potential are presented and discussed. The internal rate of return on investment is the highest for the proposed TSTO spaceplane, vis-à-vis a single-stage-to-orbit (SSTO) rocket vehicle and a TSTO spaceplane without built-in growth. Additional growth potentials for the proposed spaceplane are suggested. This spaceplane can substantially decrease access-to-space cost and risk, and increase safety and reliability in the near term. It can be a serious candidate for the next-generation space transportation system.

  16. A Dual-Stage Two-Phase Model of Selective Attention

    ERIC Educational Resources Information Center

    Hubner, Ronald; Steinhauser, Marco; Lehle, Carola

    2010-01-01

    The dual-stage two-phase (DSTP) model is introduced as a formal and general model of selective attention that includes both an early and a late stage of stimulus selection. Whereas at the early stage information is selected by perceptual filters whose selectivity is relatively limited, at the late stage stimuli are selected more efficiently on a…

  17. Two-stage anaerobic digestion enables heavy metal removal.

    PubMed

    Selling, Robert; Håkansson, Torbjörn; Björnsson, Lovisa

    2008-01-01

    To fully exploit the environmental benefits of the biogas process, the digestate should be recycled to agriculture as biofertiliser. This practice can, however, be jeopardized by the presence of unwanted compounds such as heavy metals in the digestate. By using two-stage digestion, where the first stage includes hydrolysis/acidification and liquefaction of the substrate, heavy metals can be transferred to the leachate. From the leachate, metals can then be removed by adsorption. In this study, up to 70% of the Ni, 40% of the Zn and 25% of the Cd present in maize was removed when the leachate from hydrolysis was circulated over a macroporous polyacrylamide column for 6 days. For Cu and Pb, mobilization in the hydrolytic stage was lower, which resulted in low removal. A more efficient two-stage process with improved substrate hydrolysis would give a lower pH and/or longer periods of low pH in the hydrolytic stage. This is likely to increase metal mobilisation, and would open up an excellent opportunity for heavy metal removal.

  18. Ares I-X Upper Stage Simulator Residual Stress Analysis

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.; Brust, Frederick W.; Phillips, Dawn R.; Cheston, Derrick

    2008-01-01

    The structural analyses described in the present report were performed in support of the NASA Engineering and Safety Center (NESC) Critical Initial Flaw Size (CIFS) assessment for the Ares I-X Upper Stage Simulator (USS) common shell segment. An independent assessment was conducted to determine the CIFS for the flange-to-skin weld in the Ares I-X USS. The Ares system of space launch vehicles is the US National Aeronautics and Space Administration's plan for replacement of the aging space shuttle. The new Ares space launch system is in some respects a combination of the space shuttle system and the Saturn launch vehicles used before the shuttle. Here, a series of weld analyses are performed to determine the residual stresses in a critical region of the USS. Weld residual stresses increase both constraint and mean stress, thereby having an important effect on fatigue and fracture life. The results of this effort served as one of the critical load inputs required to perform a CIFS assessment of the same segment.

  19. Development and validation of the simulation-based learning evaluation scale.

    PubMed

    Hung, Chang-Chiao; Liu, Hsiu-Chen; Lin, Chun-Chih; Lee, Bih-O

    2016-05-01

    The existing instruments that evaluate students' perceptions of simulation-based training are English-language versions that have not been tested for reliability or validity. The aim of this study was to develop and validate a Chinese-version Simulation-Based Learning Evaluation Scale (SBLES). Four stages were conducted to develop and validate the SBLES. First, specific desired competencies were identified according to the National League for Nursing and Taiwan Nursing Accreditation Council core competencies. Next, an initial item pool of 50 simulation-related items was drawn from the core-competency literature. Content validity was established by use of an expert panel. Finally, exploratory factor analysis and confirmatory factor analysis were conducted for construct validity, and Cronbach's coefficient alpha determined the scale's internal consistency reliability. Two hundred and fifty students who had experienced simulation-based learning were invited to participate in this study. Two hundred and twenty-five students completed and returned questionnaires (response rate = 90%). Six items were deleted from the initial item pool and one was added after the expert panel review. Exploratory factor analysis with varimax rotation left 37 items in five factors, which accounted for 67% of the variance. The construct validity of the SBLES was substantiated in a confirmatory factor analysis that revealed a good fit of the hypothesized factor structure, and the findings met the criteria of convergent and discriminant validity. The internal consistency of the five subscales ranged from .90 to .93. Items were rated on a 5-point scale from 1 (strongly disagree) to 5 (strongly agree). The results of this study indicate that the SBLES is valid and reliable. The authors recommend that the scale be applied in nursing schools to evaluate the effectiveness of simulation-based learning curricula. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Time interval between endometrial biopsy and surgical staging for type I endometrial cancer: association between tumor characteristics and survival outcome.

    PubMed

    Matsuo, Koji; Opper, Neisha R; Ciccone, Marcia A; Garcia, Jocelyn; Tierney, Katherine E; Baba, Tsukasa; Muderspach, Laila I; Roman, Lynda D

    2015-02-01

    To examine whether wait time between endometrial biopsy and surgical staging correlates with tumor characteristics and affects survival outcomes in patients with type I endometrial cancer. A retrospective study was conducted to examine patients with grade 1 and 2 endometrioid adenocarcinoma diagnosed by preoperative endometrial biopsy who subsequently underwent hysterectomy-based surgical staging between 2000 and 2013. Patients who received neoadjuvant chemotherapy or hormonal treatment were excluded. Time interval and grade change between endometrial biopsy and hysterectomy were correlated with demographics and survival outcomes. Median wait time was 57 days (range 1-177 days) among 435 patients. Upgrading of the tumor to grade 3 in the hysterectomy specimen was seen in 4.7% of the 321 tumors classified as grade 1 and 18.4% of the 114 tumors classified as grade 2 on the endometrial biopsy, respectively. Wait time was not associated with grade change (P>.05). Controlling for age, ethnicity, body habitus, medical comorbidities, CA 125 level, and stage, multivariable analysis revealed that wait time was not associated with survival outcomes (5-year overall survival rates for wait times of 1-14, 15-42, 43-84, and 85 days or more: 62.5%, 93.6%, 95.2%, and 100%, respectively, P>.05); however, upgrading from grade 1 to 3 on the hysterectomy specimen remained an independent prognosticator associated with decreased survival (5-year overall survival rates, grade change 1 to 3 compared with grade change 1 to 1, 82.1% compared with 98.5%, P=.01). Among grade 1 preoperative biopsies, upgrading from grade 1 to 3 was significantly associated with nonobesity (P=.039) and advanced stage (P=.019). Wait time for surgical staging was not associated with decreased survival outcome in patients with type I endometrial cancer.

  1. Optimize the Coverage Probability of Prediction Interval for Anomaly Detection of Sensor-Based Monitoring Series

    PubMed Central

    Liu, Datong; Peng, Yu; Peng, Xiyuan

    2018-01-01

    Effective anomaly detection of sensing data is essential for identifying potential system failures. Because they require no prior knowledge or accumulated labels, and naturally represent uncertainty, probability prediction methods (e.g., Gaussian process regression (GPR) and the relevance vector machine (RVM)) are especially suitable for anomaly detection in sensing series. Generally, one key parameter of prediction models is the coverage probability (CP), which controls the judging threshold for the testing sample and is generally set to a default value (e.g., 90% or 95%). There are few criteria for determining the optimal CP for anomaly detection. Therefore, this paper designs a graphic indicator, the receiver operating characteristic curve of the prediction interval (ROC-PI), based on the definition of the ROC curve, which depicts the trade-off between the PI width and the PI coverage probability across a series of cut-off points. Furthermore, the Youden index is modified to assess the performance of different CPs; the optimal CP is derived by minimizing this index with the simulated annealing (SA) algorithm. Experiments conducted on two simulation datasets demonstrate the validity of the proposed method. In particular, an actual case study on sensing series from an on-orbit satellite illustrates its significant performance in practical application. PMID:29587372
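
    The CP-selection idea can be sketched with a simple grid search. This is a hedged simplification: it uses a Gaussian predictive distribution in place of GPR/RVM and the standard Youden index J = TPR - FPR (maximized) rather than the paper's modified index and simulated-annealing search; all names are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    def pi_flags(y, mu, sigma, cp):
        """Flag points outside the central `cp` prediction interval of a
        Gaussian predictive distribution (mu, sigma)."""
        half = stats.norm.ppf(0.5 + cp / 2.0) * sigma
        return np.abs(y - mu) > half

    def best_coverage_probability(y, mu, sigma, labels, grid):
        """Pick the CP maximizing the Youden index over a grid,
        given binary anomaly labels (1 = anomaly)."""
        best_cp, best_j = None, -np.inf
        for cp in grid:
            flagged = pi_flags(y, mu, sigma, cp)
            tpr = np.mean(flagged[labels == 1])   # anomalies caught
            fpr = np.mean(flagged[labels == 0])   # normals wrongly flagged
            if tpr - fpr > best_j:
                best_cp, best_j = cp, tpr - fpr
        return best_cp
    ```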

  2. Research on Operation Strategy for Bundled Wind-thermal Generation Power Systems Based on Two-Stage Optimization Model

    NASA Astrophysics Data System (ADS)

    Sun, Congcong; Wang, Zhijie; Liu, Sanming; Jiang, Xiuchen; Sheng, Gehao; Liu, Tianyu

    2017-05-01

    Wind power has the advantages of being clean and non-polluting, and the development of bundled wind-thermal generation power systems (BWTGSs) is one of the important means of improving the wind power accommodation rate and implementing a "clean alternative" on the generation side. A two-stage optimization strategy for BWTGSs considering wind speed forecasting results and load characteristics is proposed. By taking short-term wind speed forecasts on the generation side and load characteristics on the demand side into account, a two-stage optimization model for BWTGSs is formulated. Using the environmental benefit index of BWTGSs as the objective function, and supply-demand balance and generator operation as the constraints, the first-stage optimization model is developed with chance-constrained programming theory. Using the operation cost of BWTGSs as the objective function, the second-stage optimization model is developed with a greedy algorithm. An improved PSO algorithm is employed to solve the model, and numerical tests verify the effectiveness of the proposed strategy.
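
    For readers unfamiliar with the solver family, a minimal particle swarm optimizer is sketched below. This is a generic textbook PSO applied to a stand-in quadratic cost, not the paper's improved variant or its actual operation-cost model.

    ```python
    import numpy as np

    def pso(cost, dim, n_particles=30, iters=200, lo=0.0, hi=1.0, seed=0):
        """Minimal particle swarm optimizer (inertia + cognitive + social)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n_particles, dim))
        v = np.zeros_like(x)
        pbest = x.copy()
        pcost = np.apply_along_axis(cost, 1, x)
        g = pbest[pcost.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            c = np.apply_along_axis(cost, 1, x)
            improved = c < pcost
            pbest[improved], pcost[improved] = x[improved], c[improved]
            g = pbest[pcost.argmin()].copy()
        return g, pcost.min()

    # Illustrative stand-in for a second-stage operation cost:
    g, best = pso(lambda u: np.sum((u - 0.3) ** 2), dim=5)
    print(g, best)
    ```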

  3. Modified stochastic fragmentation of an interval as an ageing process

    NASA Astrophysics Data System (ADS)

    Fortin, Jean-Yves

    2018-02-01

    We study a stochastic model based on modified fragmentation of a finite interval. The mechanism consists of cutting the interval at a random location and substituting a unique fragment on the right of the cut to regenerate and preserve the interval length. This leads to a set of segments of random sizes, with the accumulation of small fragments near the origin. This model is an example of record dynamics, with the presence of 'quakes' and slow dynamics. The fragment size distribution is a universal inverse power law with logarithmic corrections. The exact distribution of the fragment number as a function of time is simply related to the unsigned Stirling numbers of the first kind. Two-time correlation functions are defined and computed exactly. They satisfy scaling relations and exhibit aging phenomena. In particular, the probability that the same number of fragments is found at two different times t > s is asymptotically equal to [4π log(s)]^{-1/2} when s ≫ 1 and the ratio t/s is fixed, in agreement with numerical simulations. The same process with a reset suppresses the aging phenomenon beyond a typical time scale defined by the reset parameter.
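
    The process is easy to simulate under one plausible reading of the mechanism described in the abstract: each cut at x merges everything to the right of x into a single fragment, erasing earlier cut points beyond x. All details beyond the abstract are assumptions of this sketch.

    ```python
    import random

    def simulate_fragmentation(steps, seed=0):
        """Cut the unit interval at a random point x at each step and
        replace everything to the right of x by a single fragment,
        preserving the total length. Returns fragment counts over time."""
        random.seed(seed)
        cuts = []                        # cut positions in (0, 1)
        counts = []
        for _ in range(steps):
            x = random.random()
            cuts = [c for c in cuts if c < x] + [x]
            counts.append(len(cuts) + 1)   # k cuts -> k + 1 fragments
        return counts

    print(simulate_fragmentation(10))
    ```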

  4. Coupling effect and control strategies of the maglev dual-stage inertially stabilized system based on frequency-domain analysis.

    PubMed

    Lin, Zhuchong; Liu, Kun; Zhang, Li; Zeng, Delin

    2016-09-01

    The maglev dual-stage inertially stabilized (MDIS) system is a newly proposed system which combines a conventional two-axis gimbal assembly and a 5-DOF (degree-of-freedom) magnetic bearing with vernier tilting capacity to perform dual-stage stabilization of the line of sight (LOS) of the suspended optical instrument. Compared with a traditional dual-stage system, the maglev dual-stage system exhibits different characteristics due to the negative position stiffness of the magnetic forces, which introduces additional coupling in the dual-stage control system. In this paper, the effect of this coupling on system performance is addressed based on frequency-domain analysis, including disturbance rejection, fine-stage saturation and coarse-stage structural resonance suppression. The differences between various control strategies are also discussed, including pile-up (PU), stabilize-follow (SF) and stabilize-compensate (SC). A number of principles for the design of a maglev dual-stage system are proposed. A general process is also suggested, which leads to a cost-effective design striking a balance between high performance and complexity. Finally, a simulation example is presented to illustrate the arguments in the paper. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Two stage sorption type cryogenic refrigerator including heat regeneration system

    NASA Technical Reports Server (NTRS)

    Jones, Jack A. (Inventor); Wen, Liang-Chi (Inventor); Bard, Steven (Inventor)

    1989-01-01

    A lower stage chemisorption refrigeration system physically and functionally coupled to an upper stage physical adsorption refrigeration system is disclosed. Waste heat generated by the lower stage cycle is regenerated to fuel the upper stage cycle thereby greatly improving the energy efficiency of a two-stage sorption refrigerator. The two stages are joined by disposing a first pressurization chamber providing a high pressure flow of a first refrigerant for the lower stage refrigeration cycle within a second pressurization chamber providing a high pressure flow of a second refrigerant for the upper stage refrigeration cycle. The first pressurization chamber is separated from the second pressurization chamber by a gas-gap thermal switch which at times is filled with a thermoconductive fluid to allow conduction of heat from the first pressurization chamber to the second pressurization chamber.

  6. Development and Validation of a Mobile Device-based External Ventricular Drain Simulator.

    PubMed

    Morone, Peter J; Bekelis, Kimon; Root, Brandon K; Singer, Robert J

    2017-10-01

    Multiple external ventricular drain (EVD) simulators have been created, yet their cost, bulky size, and nonreusable components limit their accessibility to residency programs. To create and validate an animated EVD simulator that is accessible on a mobile device. We developed a mobile-based EVD simulator that is compatible with iOS (Apple Inc., Cupertino, California) and Android-based devices (Google, Mountain View, California) and can be downloaded from the Apple App and Google Play Store. Our simulator consists of a learn mode, which teaches users the procedure, and a test mode, which assesses users' procedural knowledge. Twenty-eight participants, who were divided into expert and novice categories, completed the simulator in test mode and answered a postmodule survey. This was graded using a 5-point Likert scale, with 5 representing the highest score. Using the survey results, we assessed the module's face and content validity, whereas construct validity was evaluated by comparing the expert and novice test scores. Participants rated individual survey questions pertaining to face and content validity a median score of 4 out of 5. When comparing test scores, generated by the participants completing the test mode, the experts scored higher than the novices (mean, 71.5; 95% confidence interval, 69.2 to 73.8 vs mean, 48; 95% confidence interval, 44.2 to 51.6; P < .001). We created a mobile-based EVD simulator that is inexpensive, reusable, and accessible. Our results demonstrate that this simulator is face, content, and construct valid. Copyright © 2017 by the Congress of Neurological Surgeons

  7. Characterizing and differentiating task-based and resting state fMRI signals via two-stage sparse representations.

    PubMed

    Zhang, Shu; Li, Xiang; Lv, Jinglei; Jiang, Xi; Guo, Lei; Liu, Tianming

    2016-03-01

    A relatively underexplored question in fMRI is whether there are intrinsic differences in signal composition patterns that can effectively characterize and differentiate task-based and resting state fMRI (tfMRI and rsfMRI) signals. In this paper, we propose a novel two-stage sparse representation framework to examine the fundamental difference between tfMRI and rsfMRI signals. Specifically, in the first stage, the whole-brain tfMRI or rsfMRI signals of each subject were assembled into a large data matrix, which was then factorized into a subject-specific dictionary matrix and a weight coefficient matrix for sparse representation. In the second stage, all of the dictionary matrices from the tfMRI/rsfMRI data across multiple subjects were assembled into another large data matrix, which was further sparsely represented by a cross-subject common dictionary and a weight matrix. This framework has been applied to the recently released Human Connectome Project (HCP) fMRI data, and experimental results revealed that there are distinctive and descriptive atoms in the cross-subject common dictionary that can effectively characterize and differentiate tfMRI and rsfMRI signals, achieving 100% classification accuracy. Moreover, our methods and results can be meaningfully interpreted; e.g., the well-known default mode network (DMN) activities can be recovered from the very noisy and heterogeneous aggregated big data of tfMRI and rsfMRI signals across all subjects in the HCP Q1 release.
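
    The two-stage factorization can be sketched with scikit-learn's dictionary learning as a stand-in for the paper's sparse coding pipeline. The data, matrix sizes, and hyperparameters below are illustrative only.

    ```python
    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    rng = np.random.default_rng(0)
    # Illustrative stand-in data: 3 "subjects", 200 signals of length 50.
    subjects = [rng.normal(size=(200, 50)) for _ in range(3)]

    # Stage 1: learn a subject-specific dictionary per subject.
    stage1 = [DictionaryLearning(n_components=10, alpha=1.0, max_iter=20)
              .fit(X).components_ for X in subjects]

    # Stage 2: stack all subject dictionaries and learn a cross-subject
    # common dictionary from them.
    stacked = np.vstack(stage1)            # (3 * 10, 50)
    common = DictionaryLearning(n_components=8, alpha=1.0,
                                max_iter=20).fit(stacked).components_
    print(common.shape)                    # (8, 50) common atoms
    ```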

  8. Forensic age estimation based on development of third molars: a staging technique for magnetic resonance imaging.

    PubMed

    De Tobel, J; Phlypo, I; Fieuws, S; Politis, C; Verstraete, K L; Thevissen, P W

    2017-12-01

    The development of third molars can be evaluated with medical imaging to estimate age in subadults. The appearance of third molars on magnetic resonance imaging (MRI) differs greatly from that on radiographs. Therefore a specific staging technique is necessary to classify third molar development on MRI and to apply it for age estimation. To develop a specific staging technique to register third molar development on MRI and to evaluate its performance for age estimation in subadults. Using 3T MRI in three planes, all third molars were evaluated in 309 healthy Caucasian participants from 14 to 26 years old. According to the appearance of the developing third molars on MRI, descriptive criteria and schematic representations were established to define a specific staging technique. Two observers, with different levels of experience, staged all third molars independently with the developed technique. Intra- and inter-observer agreement were calculated. The data were imported in a Bayesian model for age estimation as described by Fieuws et al. (2016). This approach adequately handles correlation between age indicators and missing age indicators. It was used to calculate a point estimate and a prediction interval of the estimated age. Observed age minus predicted age was calculated, reflecting the error of the estimate. One-hundred and sixty-six third molars were agenetic. Five percent (51/1096) of upper third molars and 7% (70/1044) of lower third molars were not assessable. Kappa for inter-observer agreement ranged from 0.76 to 0.80. For intra-observer agreement kappa ranged from 0.80 to 0.89. However, two stage differences between observers or between staging sessions occurred in up to 2.2% (20/899) of assessments, probably due to a learning effect. Using the Bayesian model for age estimation, a mean absolute error of 2.0 years in females and 1.7 years in males was obtained. Root mean squared error equalled 2.38 years and 2.06 years respectively. The performance to

  9. Using Two Simulation Tools to Teach Concepts in Introductory Astronomy: A Design-Based Research Approach

    NASA Astrophysics Data System (ADS)

    Maher, Pamela A.

    Technology in college classrooms has gone from being an enhancement to the learning experience to being something expected by both instructors and students. This design-based research investigation takes technology one step further, putting the tools used to teach directly into the hands of students. The study examined the affordances and constraints of two simulation tools for use in introductory astronomy courses. The variety of experiences participants had using the two tools, a virtual reality headset and a fulldome immersive planetarium simulation, to manipulate a lunar surface flyby was identified using a multi-method research approach with N = 67 participants. Participants were recruited from classes of students taking astronomy over one academic year at a two-year college. Participants manipulated a lunar flyby using a virtual reality headset and a motion sensor device in the college fulldome planetarium. Data were collected in the form of two post-treatment questionnaires using Likert-type scales and one small-group interview. The small-group interview was intended to elicit the various experiences participants had using the tools. Responses were analyzed quantitatively for optimal flyby speed and qualitatively for salient themes, using data reduction informed by a methodological framework of phenomenography. Analysis of data from both the Immersion Questionnaire and the Simulator Sickness Questionnaire, performed using SPSS software, determined that the optimal flyby speed for college students to manipulate the Moon was 0.04 x the radius of the Earth (3,959 miles), or about 160 miles per second. A variety of participant experiences were revealed using MAXQDA software to code the positive and negative remarks participants made when engaged in the use of each tool. Both tools offer potential to actively engage students with astronomy

  10. Steady and Unsteady Simulations of the Flow in an Impeller/Diffuser Stage

    NASA Technical Reports Server (NTRS)

    Canabal, Francisco; Dorney, Daniel J.; Garcia, Roberto; Turner, James E. (Technical Monitor)

    2002-01-01

    Space Launch Initiative (SLI) engine designs will require pumps that throttle over a wide flow range while maintaining high performance. Unsteadiness generated by impeller/diffuser interaction is one of the major factors affecting off-design performance. Initial unsteady simulations have been completed for the impeller/diffuser stage. The Corsair simulations will continue across a wide flow range and for inducer/impeller/diffuser combinations. Results of the unsteady simulations are being used to guide and explore new designs.

  11. The Production of High Purity Phycocyanin by Spirulina platensis Using Light-Emitting Diodes Based Two-Stage Cultivation.

    PubMed

    Lee, Sang-Hyo; Lee, Ju Eun; Kim, Yoori; Lee, Seung-Yop

    2016-01-01

    Phycocyanin is a photosynthetic pigment found in photosynthetic cyanobacteria, cryptophytes, and red algae. In general, production of phycocyanin depends mainly on the light conditions during the cultivation period, and purification of phycocyanin requires expensive and complex procedures. In this study, we propose a new two-stage cultivation method to maximize the quantitative content and purity of phycocyanin obtained from Spirulina platensis, using red and blue light-emitting diodes (LEDs) under different light intensities. In the first stage, Spirulina was cultured under a combination of red and blue LEDs to obtain a fast growth rate until reaching an absorbance of 1.4-1.6 at 680 nm. Next, blue LEDs were used to enhance the concentration and purity of the phycocyanin in Spirulina. Two weeks of the two-stage cultivation of Spirulina yielded 1.28 mg mL(-1) phycocyanin with a purity of 2.7 (OD620/OD280).

  12. Investigation to biodiesel production by the two-step homogeneous base-catalyzed transesterification.

    PubMed

    Ye, Jianchu; Tu, Song; Sha, Yong

    2010-10-01

    For two-step transesterification biodiesel production from sunflower oil, based on a kinetics model of the homogeneous base-catalyzed transesterification and the liquid-liquid phase equilibrium of the transesterification product, the total methanol/oil mole ratio, the total reaction time, and the split ratios of methanol and reaction time between the two reactors of the two-step reaction stage are determined quantitatively. Taking the transesterification intermediate product into consideration, both the traditional distillation separation process and an improved separation process for the two-step reaction product are investigated in detail by means of rigorous process simulation. In comparison with the traditional distillation process, the improved separation process has a distinct advantage in energy duty and equipment requirements due to replacement of the costly methanol-biodiesel distillation column. Copyright 2010 Elsevier Ltd. All rights reserved.

  13. Controlling the type I error rate in two-stage sequential adaptive designs when testing for average bioequivalence.

    PubMed

    Maurer, Willi; Jones, Byron; Chen, Ying

    2018-05-10

    In a 2×2 crossover trial for establishing average bioequivalence (ABE) of a generic agent and a currently marketed drug, the recommended approach to hypothesis testing is the two one-sided test (TOST) procedure, which depends, among other things, on the estimated within-subject variability. The power of this procedure, and therefore the sample size required to achieve a minimum power, depends on having a good estimate of this variability. When there is uncertainty, it is advisable to plan the design in two stages, with an interim sample size reestimation after the first stage, using an interim estimate of the within-subject variability. One method and 3 variations of doing this were proposed by Potvin et al. Using simulation, the operating characteristics, including the empirical type I error rate, of the 4 variations (called Methods A, B, C, and D) were assessed by Potvin et al and Methods B and C were recommended. However, none of these 4 variations formally controls the type I error rate of falsely claiming ABE, even though the amount of inflation produced by Method C was considered acceptable. A major disadvantage of assessing type I error rate inflation using simulation is that unless all possible scenarios for the intended design and analysis are investigated, it is impossible to be sure that the type I error rate is controlled. Here, we propose an alternative, principled method of sample size reestimation that is guaranteed to control the type I error rate at any given significance level. This method uses a new version of the inverse-normal combination of p-values test, in conjunction with standard group sequential techniques, that is more robust to large deviations in initial assumptions regarding the variability of the pharmacokinetic endpoints. The sample size reestimation step is based on significance levels and power requirements that are conditional on the first-stage results. This necessitates a discussion and exploitation of the peculiar properties
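
    The standard weighted inverse-normal combination of stage-wise p-values that this class of methods builds on can be sketched as follows; the paper's version adds robustness modifications and group sequential boundaries not shown here, and the numbers are illustrative.

    ```python
    from scipy import stats

    def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
        """Combine stage-wise one-sided p-values with pre-specified
        weights (w1 + w2 = 1); returns the combined one-sided p-value."""
        z1 = stats.norm.isf(p1)           # Phi^{-1}(1 - p1)
        z2 = stats.norm.isf(p2)
        z = (w1 ** 0.5) * z1 + (w2 ** 0.5) * z2
        return stats.norm.sf(z)

    # Illustrative stage-wise p-values, compared with alpha = 0.05:
    p_combined = inverse_normal_combination(0.04, 0.10)
    print(p_combined, p_combined < 0.05)
    ```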

  14. River water quality management considering agricultural return flows: application of a nonlinear two-stage stochastic fuzzy programming.

    PubMed

    Tavakoli, Ali; Nikoo, Mohammad Reza; Kerachian, Reza; Soltani, Maryam

    2015-04-01

    In this paper, a new fuzzy methodology is developed to optimize water and waste load allocation (WWLA) in rivers under uncertainty. An interactive two-stage stochastic fuzzy programming (ITSFP) method is utilized to handle parameter uncertainties, which are expressed as fuzzy boundary intervals. An iterative linear programming (ILP) is also used for solving the nonlinear optimization model. To accurately consider the impacts of the water and waste load allocation strategies on the river water quality, a calibrated QUAL2Kw model is linked with the WWLA optimization model. The soil, water, atmosphere, and plant (SWAP) simulation model is utilized to determine the quantity and quality of each agricultural return flow. To control pollution loads of agricultural networks, it is assumed that a part of each agricultural return flow can be diverted to an evaporation pond and also another part of it can be stored in a detention pond. In detention ponds, contaminated water is exposed to solar radiation for disinfecting pathogens. Results of applying the proposed methodology to the Dez River system in the southwestern region of Iran illustrate its effectiveness and applicability for water and waste load allocation in rivers. In the planning phase, this methodology can be used for estimating the capacities of return flow diversion system and evaporation and detention ponds.
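
    The two-stage structure underlying such models can be illustrated by a deterministic-equivalent linear program: a first-stage allocation target earns a benefit, and scenario-wise shortages incur recourse penalties. This toy sketch with made-up numbers omits the fuzzy/interval machinery and the water-quality linkage.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    benefit, penalty = 3.0, 7.0            # per unit allocated / short
    water = np.array([40.0, 60.0, 80.0])   # scenario availabilities
    prob = np.array([0.3, 0.5, 0.2])       # scenario probabilities

    # Variables: [x, y1, y2, y3]; x = promised allocation (first stage),
    # y_s = shortage under scenario s (second-stage recourse).
    # Minimize -benefit*x + sum_s prob_s * penalty * y_s.
    c = np.concatenate([[-benefit], penalty * prob])
    # Shortage must cover the unmet promise: x - y_s <= water_s.
    A = np.hstack([np.ones((3, 1)), -np.eye(3)])
    res = linprog(c, A_ub=A, b_ub=water,
                  bounds=[(0, 100)] + [(0, None)] * 3)
    print(res.x[0], res.x[1:])             # optimal target and shortages
    ```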

  15. Single-stage Acetabular Revision During Two-stage THA Revision for Infection is Effective in Selected Patients.

    PubMed

    Fink, Bernd; Schlumberger, Michael; Oremek, Damian

    2017-08-01

    The treatment of periprosthetic infections of hip arthroplasties typically involves use of either a single- or two-stage (with implantation of a temporary spacer) revision surgery. In patients with severe acetabular bone deficiencies, either already present or after component removal, spacers cannot be safely implanted. In such hips where it is impossible to use spacers and yet a two-stage revision of the prosthetic stem is recommended, we have combined a two-stage revision of the stem with a single revision of the cup. To our knowledge, this approach has not been reported before. (1) What proportion of patients treated with single-stage acetabular reconstruction as part of a two-stage revision for an infected THA remain free from infection at 2 or more years? (2) What are the Harris hip scores after the first stage and at 2 years or more after the definitive reimplantation? Between June 2009 and June 2014, we treated all patients undergoing surgical treatment for an infected THA using a single-stage acetabular revision as part of a two-stage THA exchange if the acetabular defect classification was Paprosky Types 2B, 2C, 3A, 3B, or pelvic discontinuity and a two-stage procedure was preferred for the femur. The procedure included removal of all components, joint débridement, definitive acetabular reconstruction (with a cage to bridge the defect, and a cemented socket), and a temporary cemented femoral component at the first stage; the second stage consisted of repeat joint and femoral débridement and exchange of the femoral component to a cementless device. During the period noted, 35 patients met those definitions and were treated with this approach. No patients were lost to followup before 2 years; mean followup was 42 months (range, 24-84 months). The clinical evaluation was performed with the Harris hip scores and resolution of infection was assessed by the absence of clinical signs of infection and a C-reactive protein level less than 10 mg/L. All

  16. Two-stage coal gasification and desulfurization apparatus

    DOEpatents

    Bissett, Larry A.; Strickland, Larry D.

    1991-01-01

    The present invention is directed to a system which effectively integrates a two-stage, fixed-bed coal gasification arrangement with hot fuel gas desulfurization of a first stream of fuel gas from a lower stage of the two-stage gasifier and the removal of sulfur from the sulfur sorbent regeneration gas utilized in the fuel-gas desulfurization process by burning a second stream of fuel gas from the upper stage of the gasifier in a combustion device in the presence of calcium-containing material. The second stream of fuel gas is taken from above the fixed bed in the coal gasifier and is laden with ammonia, tar and sulfur values. This second stream of fuel gas is burned in the presence of excess air to provide heat energy sufficient to effect a calcium-sulfur compound forming reaction between the calcium-containing material and sulfur values carried by the regeneration gas and the second stream of fuel gas. Any ammonia values present in the fuel gas are decomposed during the combustion of the fuel gas in the combustion chamber. The substantially sulfur-free products of combustion may then be combined with the desulfurized fuel gas for providing a combustible fluid utilized for driving a prime mover.

  17. Control Parameters Optimization Based on Co-Simulation of a Mechatronic System for an UA-Based Two-Axis Inertially Stabilized Platform.

    PubMed

    Zhou, Xiangyang; Zhao, Beilei; Gong, Guohao

    2015-08-14

    This paper presents a method based on co-simulation of a mechatronic system to optimize the control parameters of a two-axis inertially stabilized platform system (ISP) applied in an unmanned airship (UA), by which high control performance and reliability of the ISP system are achieved. First, a three-dimensional structural model of the ISP is built by using the three-dimensional parametric CAD software SOLIDWORKS(®); then, to analyze the system's kinematic and dynamic characteristics under operating conditions, dynamics modeling is conducted by using the multi-body dynamics software ADAMS™, thus the main dynamic parameters such as displacement, velocity, acceleration and reaction curve are obtained, respectively, through simulation analysis. Then, those dynamic parameters were input into the established MATLAB(®) SIMULINK(®) controller to simulate and test the performance of the control system. By these means, the ISP control parameters are optimized. To verify the methods, experiments were carried out by applying the optimized parameters to the control system of a two-axis ISP. The results show that the co-simulation by using virtual prototyping (VP) is effective to obtain optimized ISP control parameters, eventually leading to high ISP control performance.

  19. Design of time interval generator based on hybrid counting method

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-10-01

    Time interval generators (TIGs) are frequently used for the characterization or timing operation of instruments in particle physics experiments. Although some "off-the-shelf" TIGs can be employed, the need for custom test or control systems makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. In this case, FPGA-based TIGs with a high delay step are preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced, whose structure is devised to minimize the differing additional delays caused by unpredictable routing from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of 11 ps and an interval range of up to 8 s.
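
    The arithmetic of combining a coarse counter with fine interpolation can be modeled in a few lines. This is a toy Nutt-style interpolation model; all names, sign conventions, and values are illustrative and the paper's FPGA implementation details differ.

    ```python
    def hybrid_interval(coarse_counts, t_clk_ps, fine_start_ps, fine_stop_ps):
        """Toy model of hybrid counting: a coarse counter counts whole
        clock periods between start and stop, and tapped-delay-line
        measurements refine the fractional periods at each edge."""
        return coarse_counts * t_clk_ps + fine_start_ps - fine_stop_ps

    # 3 whole 2 ns periods; start edge 350 ps and stop edge 120 ps after
    # their respective clock ticks:
    print(hybrid_interval(3, 2000, 350, 120), "ps")
    ```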

  20. Graphic analysis and multifractal on percolation-based return interval series

    NASA Astrophysics Data System (ADS)

    Pei, A. Q.; Wang, J.

    2015-05-01

    A financial time series model is developed and investigated using the oriented percolation system (one of the statistical physics systems). The nonlinear and statistical behaviors of the return interval time series are studied for the proposed model and for the real stock market by applying the visibility graph (VG) and multifractal detrended fluctuation analysis (MF-DFA). We investigate the fluctuation behaviors of the return intervals of the model for different parameter settings, and comparatively study these fluctuation patterns with those of real financial data for different threshold values. The empirical research in this work exhibits multifractal features in the corresponding financial time series. Further, the VGs derived from both the simulated data and the real data show small-world, hierarchical, high-clustering and power-law-tail behaviors in their degree distributions.
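
    The natural visibility graph construction used in this kind of analysis is compact enough to sketch directly; below is the standard O(n²) reference algorithm (Lacasa-style visibility criterion), not an optimized or paper-specific implementation.

    ```python
    import numpy as np

    def visibility_graph(series):
        """Natural visibility graph: nodes are time points; (a, b) are
        linked if every intermediate point lies strictly below the line
        of sight between (a, y_a) and (b, y_b)."""
        n = len(series)
        edges = set()
        for a in range(n):
            for b in range(a + 1, n):
                visible = all(
                    series[c] < series[b]
                    + (series[a] - series[b]) * (b - c) / (b - a)
                    for c in range(a + 1, b)
                )
                if visible:
                    edges.add((a, b))
        return edges

    print(visibility_graph(np.array([1.0, 3.0, 2.0, 4.0])))
    ```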

  1. Binary Interval Search: a scalable algorithm for counting interval intersections

    PubMed Central

    Layer, Ryan M.; Skadron, Kevin; Robins, Gabriel; Hall, Ira M.; Quinlan, Aaron R.

    2013-01-01

    Motivation: The comparison of diverse genomic datasets is fundamental to understand genome biology. Researchers must explore many large datasets of genome intervals (e.g. genes, sequence alignments) to place their experimental results in a broader context and to make new discoveries. Relationships between genomic datasets are typically measured by identifying intervals that intersect, that is, they overlap and thus share a common genome interval. Given the continued advances in DNA sequencing technologies, efficient methods for measuring statistically significant relationships between many sets of genomic features are crucial for future discovery. Results: We introduce the Binary Interval Search (BITS) algorithm, a novel and scalable approach to interval set intersection. We demonstrate that BITS outperforms existing methods at counting interval intersections. Moreover, we show that BITS is intrinsically suited to parallel computing architectures, such as graphics processing units by illustrating its utility for efficient Monte Carlo simulations measuring the significance of relationships between sets of genomic intervals. Availability: https://github.com/arq5x/bits. Contact: arq5x@virginia.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23129298

  2. The design of two stage to orbit vehicles

    NASA Astrophysics Data System (ADS)

    Gregorek, G. M.; Ramsay, T. N.

    1991-09-01

    Two designs are presented for a two-stage-to-orbit vehicle to complement an existing heavy lift vehicle. The payload is 10,000 lbs, 27 ft long by 10 ft in diameter for design purposes, and must be carried to low earth orbit by an air-breathing carrier configuration that can take off horizontally within 15,000 ft. Two designs are presented: a delta wing/body carrier in which the fuselage contains the orbiter, and a cranked-delta wing/body carrier in which the orbiter is carried piggyback. The engines for both carriers are turbofan-ramjets powered by liquid hydrogen, and the orbiters employ either a Space Shuttle Main Engine or a half-scale version with additional scramjet engines. The orbiter based on a full-scale Space Shuttle Main Engine is found to have a significantly higher takeoff weight, which results in a higher total takeoff weight.

  3. Performance of two-stage fan having low-aspect-ratio first-stage rotor blading

    NASA Technical Reports Server (NTRS)

    Urasek, D. C.; Gorrell, W. T.; Cunnan, W. S.

    1979-01-01

    The NASA two-stage fan was tested with a low-aspect-ratio first-stage rotor having no midspan dampers. At design speed the fan achieved an adiabatic design efficiency of 0.846, and peak efficiencies for the first stage and rotor of 0.870 and 0.906, respectively. Peak efficiency occurred very close to the stall line. In an attempt to improve stall margin, the fan was retested with circumferentially grooved casing treatment and with a series of stator blade resets. Results showed no improvement in stall margin with casing treatment, but stall margin increased to 8 percent with stator blade reset.

  4. EOG and EMG: two important switches in automatic sleep stage classification.

    PubMed

    Estrada, E; Nazeran, H; Barragan, J; Burk, J R; Lucas, E A; Behbehani, K

    2006-01-01

    Sleep is a natural periodic state of rest for the body, in which the eyes are usually closed and consciousness is completely or partially lost. In this investigation we used EOG and EMG signals acquired from 10 patients undergoing overnight polysomnography, with their sleep stages determined by expert sleep specialists based on R&K rules. Differentiating between Stage 1, Awake and REM challenged a well-trained neural network classifier when only EEG-derived signal features were used. To meet this challenge and improve the classification rate, extra features extracted from the EOG and EMG signals were fed to the classifier. In this study, two simple feature extraction algorithms were applied to the EOG and EMG signals. The statistics of the results were calculated and displayed in an easy-to-visualize fashion to observe tendencies for each sleep stage. Inclusion of these features shows great promise for improving the classification rate towards the target rate of 100%.

  5. Fault tolerant control based on interval type-2 fuzzy sliding mode controller for coaxial trirotor aircraft.

    PubMed

    Zeghlache, Samir; Kara, Kamel; Saigaa, Djamel

    2015-11-01

    In this paper, a robust controller for six-degrees-of-freedom (6-DOF) coaxial trirotor helicopter control is proposed in the presence of faults in the system. A control strategy based on coupling interval type-2 fuzzy logic control with the sliding mode control technique is used to design the controller. The main purpose of this work is to eliminate the chattering phenomenon while guaranteeing the stability and robustness of the system. To achieve this goal, interval type-2 fuzzy logic control is used to generate the discontinuous control signal. The simulation results show that the proposed control strategy can greatly alleviate the chattering effect and performs good reference tracking in the presence of faults in the system. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
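
    Chattering mitigation can be illustrated with a generic sliding mode controller whose discontinuous sign function is replaced by a smooth switching term. This toy first-order example stands in for the paper's approach, which instead generates the switching signal with an interval type-2 fuzzy system; the plant and gains are made up.

    ```python
    import numpy as np

    def simulate_smc(smooth=True, dt=1e-3, steps=5000):
        """First-order plant x' = u + d with sliding surface s = x.
        smooth=True uses tanh(s/phi) (boundary layer) instead of sign(s),
        trading a small tracking error for chattering-free control."""
        x, k, phi = 1.0, 2.0, 0.05
        us = []
        for i in range(steps):
            d = 0.5 * np.sin(2 * np.pi * i * dt)     # bounded disturbance
            s = x
            u = -k * (np.tanh(s / phi) if smooth else np.sign(s))
            x += (u + d) * dt
            us.append(u)
        return np.array(us)

    for smooth in (False, True):
        us = simulate_smc(smooth)
        # total control-signal variation as a crude chattering measure
        print(smooth, np.abs(np.diff(us)).sum())
    ```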

  6. A web-based Tamsui River flood early-warning system with correction of real-time water stage using monitoring data

    NASA Astrophysics Data System (ADS)

    Liao, H. Y.; Lin, Y. J.; Chang, H. K.; Shang, R. K.; Kuo, H. C.; Lai, J. S.; Tan, Y. C.

    2017-12-01

    Taiwan encounters heavy rainfall frequently, with three to four typhoons striking the island every year. To provide lead time for reducing flood damage, this study attempts to build a flood early-warning system (FEWS) for the Tamsui River using time series correction techniques. Predicted rainfall is used as the input to a rainfall-runoff model, and the discharges calculated by the rainfall-runoff model are passed to a 1-D river routing model, which outputs the simulated water stages at 487 cross sections for the coming 48 hours. The downstream water stage at the estuary in the 1-D river routing model is provided by a storm surge simulation. Next, the water stages at the 487 cross sections are corrected by a time series model, such as an autoregressive (AR) model, using real-time water stage measurements to improve predictive accuracy. The resulting simulated water stages are displayed on a web-based platform. In addition, the models can be run remotely by any user with a web browser through a user interface. On-line video surveillance images, real-time monitored water stages, and rainfalls can also be shown on this platform. If a simulated water stage exceeds the embankments of the Tamsui River, the alerting lights of the FEWS flash on the screen. This platform runs periodically and automatically to generate graphical flood water stage simulations for flood disaster prevention and decision making.
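
    The AR-based correction step can be sketched generically: fit an AR model to recent simulation errors and add its predicted errors to the upcoming model forecast. The data, AR order, and horizon below are illustrative, not the system's actual configuration.

    ```python
    import numpy as np
    from statsmodels.tsa.ar_model import AutoReg

    def ar_corrected_forecast(model_stage, observed_stage, horizon=6, lags=2):
        """Correct a hydraulic-model stage forecast with an AR model
        fitted to recent simulation errors (observed - simulated)."""
        residuals = observed_stage - model_stage[: len(observed_stage)]
        ar = AutoReg(residuals, lags=lags).fit()
        future_err = ar.predict(start=len(residuals),
                                end=len(residuals) + horizon - 1)
        start = len(observed_stage)
        return model_stage[start: start + horizon] + future_err

    rng = np.random.default_rng(0)
    sim = np.linspace(2.0, 3.0, 30)                        # simulated stages (m)
    obs = sim[:24] + 0.1 + 0.05 * rng.standard_normal(24)  # biased observations
    print(ar_corrected_forecast(sim, obs))
    ```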

  7. Two Stage Assessment of Thermal Hazard in An Underground Mine

    NASA Astrophysics Data System (ADS)

    Drenda, Jan; Sułkowski, Józef; Pach, Grzegorz; Różański, Zenon; Wrona, Paweł

    2016-06-01

    The results of research into the application of selected thermal indices of men's work and climate indices in a two-stage assessment of climatic work conditions in underground mines are presented in this article. The difference between these two kinds of indices was pointed out during the project entitled "The recruiting requirements for miners working in hot underground mine environments", coordinated by the Institute of Mining Technologies at the Silesian University of Technology as part of the Polish strategic project "Improvement of safety in mines" financed by the National Centre of Research and Development. Climate indices are based only on physical parameters of the air and their measurements. Thermal indices include additional factors that are strictly connected with work, e.g. the thermal resistance of clothing, the kind of work, etc. Special emphasis has been put on the following indices: the substitute Silesian temperature (TS), which is considered a climate index, and the thermal discomfort index (δ), which belongs to the thermal indices group. The possibility of a two-stage application of these indices (preliminary and detailed estimation) has been considered. Worked examples show that detailed estimation with the thermal index can avoid additional technical measures that would otherwise be required to reduce the thermal hazard at particular workplaces according to the climate index alone. The threshold limit value for TS has been set based on these results. It was shown that below TS = 24°C it is not necessary to perform the detailed estimation.

  8. Use of leaning vanes in a two stage fan

    NASA Technical Reports Server (NTRS)

    Rao, G. V. R.; Digumarthi, R. V.

    1975-01-01

    The use of leaning vanes for tone noise reduction was examined in terms of their application in a typical two-stage high pressure ratio fan. In particular for stages designed with outlet guide vanes and zero swirl between stages, leaning the vanes of the first stage stator was studied, since increasing the number of vanes and the gap between stages do not provide the desired advantage. It was shown that noise reduction at higher harmonics of blade passing frequency can be obtained by leaning the vanes.

  9. First staging of two laser accelerators.

    PubMed

    Kimura, W D; van Steenbergen, A; Babzien, M; Ben-Zvi, I; Campbell, L P; Cline, D B; Dilley, C E; Gallardo, J C; Gottschalk, S C; He, P; Kusche, K P; Liu, Y; Pantell, R H; Pogorelsky, I V; Quimby, D C; Skaritka, J; Steinhauer, L C; Yakimenko, V

    2001-04-30

    Staging of two laser-driven, relativistic electron accelerators has been demonstrated for the first time in a proof-of-principle experiment, whereby two distinct and serial laser accelerators acted on an electron beam in a coherently cumulative manner. Output from a CO2 laser was split into two beams to drive two inverse free electron lasers (IFEL) separated by 2.3 m. The first IFEL served to bunch the electrons into approximately 3 fs microbunches, which were rephased with the laser wave in the second IFEL. This represents a crucial step towards the development of practical laser-driven electron accelerators.

  10. A C-band 55% PAE high gain two-stage power amplifier based on AlGaN/GaN HEMT

    NASA Astrophysics Data System (ADS)

    Zheng, Jia-Xin; Ma, Xiao-Hua; Lu, Yang; Zhao, Bo-Chao; Zhang, Hong-He; Zhang, Meng; Cao, Meng-Yi; Hao, Yue

    2015-10-01

    A C-band high-efficiency, high-gain two-stage power amplifier based on an AlGaN/GaN high electron mobility transistor (HEMT) is designed and measured in this paper. The input and output impedances for optimum power-added efficiency (PAE) are determined at the fundamental and second harmonic frequencies (f0 and 2f0). Harmonic manipulation networks are designed in both the driver stage and the power stage to suppress the second harmonic to a very low level within the operating frequency band, and the inter-stage matching network and the output power-combining network are designed for low insertion loss, so both the PAE and the power gain are greatly improved. Over an operating frequency range of 5.4 GHz-5.8 GHz in CW mode, the amplifier delivers a maximum output power of 18.62 W, with a PAE of 55.15% and an associated power gain of 28.7 dB, which is an outstanding performance. Project supported by the National Key Basic Research Program of China (Grant No. 2011CBA00606), the Program for New Century Excellent Talents in University, China (Grant No. NCET-12-0915), and the National Natural Science Foundation of China (Grant No. 61334002).

  11. One-Stage versus Two-Stage Repair of Asymmetric Bilateral Cleft Lip: A 20-Year Retrospective Study of Clinical Outcome.

    PubMed

    Chung, Kyung Hoon; Lo, Lun-Jou

    2018-05-01

    Both one- and two-stage approaches have been widely used for patients with asymmetric bilateral cleft lip. There are insufficient long-term outcome data for comparison of these two methods. The purpose of this retrospective study was to compare the clinical outcome over the past 20 years. The senior author's (L.J.L.) database was searched for patients with asymmetric bilateral cleft lip from 1995 to 2015. Qualified patients were divided into two groups: one-stage and two-stage. The postoperative photographs of patients were evaluated subjectively by surgical professionals and laypersons. Ratios of the nasolabial region were calculated for objective analysis. Finally, the revision procedures in the nasolabial area were reviewed. Statistical analyses were performed. A total of 95 consecutive patients were qualified for evaluation. Average follow-up was 13.1 years. A two-stage method was used in 35 percent of the patients, and a one-stage approach was used in 65 percent. All underwent primary nasal reconstruction. Among the satisfaction rating scores, the one-stage repair was rated significantly higher than two-stage reconstruction (p = 0.0001). Long-term outcomes of the two-stage patients and the unrepaired mini-microform deformities were unsatisfactory according to both professional and nonprofessional evaluators. The revision rate was higher in patients with a greater-side complete cleft lip and palate as compared with those without palatal involvement. The results suggested that one-stage repair provided better results with regard to achieving a more symmetric and smooth lip and nose after primary reconstruction. The revision rate was slightly higher in the two-stage patient group. Therapeutic, III.

  12. Numerical simulation of polishing U-tube based on solid-liquid two-phase

    NASA Astrophysics Data System (ADS)

    Li, Jun-ye; Meng, Wen-qing; Wu, Gui-ling; Hu, Jing-lei; Wang, Bao-zuo

    2018-03-01

    Abrasive grain flow machining is an advanced technology for the ultra-precision machining of small-hole parts and parts with complex cavities, offering high efficiency, high quality, and low cost, and it therefore plays an important role in many areas of precision machining. Based on solid-liquid two-phase flow coupling theory, a solid-liquid two-phase MIXTURE model is used to simulate the abrasive flow polishing process on the inner surface of a U-tube, and the temperature, turbulent viscosity, and turbulent dissipation rate during abrasive flow machining of the U-tube are compared and analyzed under different inlet pressures. The influence of inlet pressure on the surface quality of the workpiece during abrasive flow machining is studied and discussed, providing a theoretical basis for research on the abrasive flow machining process.

  13. How do gait frequency and serum-replacement interval affect polyethylene wear in knee-wear simulator tests?

    PubMed

    Reinders, Jörn; Sonntag, Robert; Kretzer, Jan Philippe

    2014-11-01

    Polyethylene (PE) wear is known to be a limiting factor in total joint replacements. However, a standardized wear test (e.g. an ISO-standard test) can only replicate the complex in vivo loading conditions in a simplified form. In this study, two different parameters were analyzed. (a) Bovine serum, used as a substitute for synovial fluid, is typically replaced every 500,000 cycles, whereas continuous regeneration takes place in vivo: how does the serum-replacement interval affect the wear rate of total knee replacements? (b) Patients with an artificial joint show reduced gait frequencies compared with standardized testing: what is the influence of a reduced frequency? Three knee wear tests were run: (a) a reference test (ISO), (b) testing with a shortened lubricant-replacement interval, and (c) testing with a reduced frequency. Wear behavior was determined from gravimetric measurements and wear particle analysis. The results showed that the reduced test frequency had only a small effect on wear behavior; testing at a 1 Hz frequency is therefore a valid method for wear testing. However, testing with a shortened replacement interval nearly doubled the wear rate. Wear particle analysis revealed only small differences in wear particle size between the different tests, and wear particles were not released linearly within a replacement interval. The ISO standard should be revised to address the marked effect of the lubricant-replacement interval on wear rate.

  14. One-way ANOVA based on interval information

    NASA Astrophysics Data System (ADS)

    Hesamian, Gholamreza

    2016-08-01

    This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity-of-interval-variances assumption. Moreover, the least significant difference (LSD) method for multiple comparison of interval means is developed for the case where the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method yields degrees to which the interval hypotheses are accepted or rejected. An applied example is used to show the performance of this method.

  15. A coupled surface-water and ground-water flow model (MODBRANCH) for simulation of stream-aquifer interaction

    USGS Publications Warehouse

    Swain, Eric D.; Wexler, Eliezer J.

    1996-01-01

    Ground-water and surface-water flow models traditionally have been developed separately, with interaction between subsurface flow and streamflow either not simulated at all or accounted for by simple formulations. In areas with dynamic and hydraulically well-connected ground-water and surface-water systems, stream-aquifer interaction should be simulated using deterministic responses of both systems coupled at the stream-aquifer interface. Accordingly, a new coupled ground-water and surface-water model was developed by combining the U.S. Geological Survey models MODFLOW and BRANCH; the interfacing code is referred to as MODBRANCH. MODFLOW is the widely used modular three-dimensional, finite-difference ground-water model, and BRANCH is a one-dimensional numerical model commonly used to simulate unsteady flow in open-channel networks. MODFLOW was originally written with the River package, which calculates leakage between the aquifer and stream, assuming that the stream's stage remains constant during one model stress period. A simple streamflow routing model has been added to MODFLOW, but is limited to steady flow in rectangular, prismatic channels. To overcome these limitations, the BRANCH model, which simulates unsteady, nonuniform flow by solving the St. Venant equations, was restructured and incorporated into MODFLOW. Terms that describe leakage between stream and aquifer as a function of streambed conductance and differences in aquifer and stream stage were added to the continuity equation in BRANCH. Thus, leakage between the aquifer and stream can be calculated separately in each model, or leakages calculated in BRANCH can be used in MODFLOW. Total mass in the coupled models is accounted for and conserved. The BRANCH model calculates new stream stages for each time interval in a transient simulation based on upstream boundary conditions, stream properties, and initial estimates of aquifer heads. Next, aquifer heads are calculated in MODFLOW based on stream
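
    The leakage term described above is, in essence, a streambed conductance multiplied by the stage-head difference. The fragment below shows that calculation and a toy iteration between a stage solver and a head solver; the solver callables, tolerance, and single-reach example are illustrative stand-ins, not actual MODBRANCH code.

        def leakage(streambed_conductance, stream_stage, aquifer_head):
            """Stream-aquifer leakage (positive = stream loses water to aquifer)."""
            return streambed_conductance * (stream_stage - aquifer_head)

        def couple_one_timestep(solve_stages, solve_heads, heads, conductance, tol=1e-4):
            """Iterate a BRANCH-like stage solution and a MODFLOW-like head solution
            until the exchanged leakage stabilizes. solve_stages and solve_heads
            are placeholders for the two models' solvers."""
            q_old = None
            while True:
                stages = solve_stages(heads)                       # 1-D routing step
                q = [leakage(conductance, s, h) for s, h in zip(stages, heads)]
                heads = solve_heads(q)                             # ground-water step
                if q_old is not None and max(abs(a - b) for a, b in zip(q, q_old)) < tol:
                    return stages, heads
                q_old = q

        # Toy usage with one reach and two stand-in solvers.
        stages, heads = couple_one_timestep(
            solve_stages=lambda h: [h[0] + 0.5],           # stand-in routing solver
            solve_heads=lambda q: [10.0 + 0.1 * q[0]],     # stand-in head solver
            heads=[10.0], conductance=2.0)
        print(stages, heads)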

  16. Correlation and Return Interval Analysis of Tree Rings Based Temperature and Precipitation Reconstructions

    NASA Astrophysics Data System (ADS)

    Bunde, A.; Ludescher, J.; Luterbacher, J.; von Storch, H.

    2012-04-01

    We analyze tree-ring-based summer temperature and precipitation reconstructions from Central Europe covering the past 2500 years [1] by means of (i) autocorrelation functions, (ii) detrended fluctuation analysis (DFA2), and (iii) the Haar wavelet technique (WT2). We also study (iv) the PDFs of the return intervals for return periods of 5, 10, 20, and 40 years. All results provide evidence that the data cannot be described by an AR1 process but are long-term correlated, with a Hurst exponent H close to 1 for the summer temperature data and around 0.9 for summer precipitation. These results, however, agree with neither observational data of the past two centuries nor millennium simulations with contemporary climate models, both of which suggest H close to 0.65 for the temperature data and H close to 0.5 for the precipitation data. In particular, the strong contrast in precipitation (highly correlated for the reconstructed data, white noise for the observational and model data) raises concerns about tree-ring-based climate reconstructions, which will have to be taken into account in future investigations. [1] Büntgen, U., Tegel, W., Nicolussi, K., McCormick, M., Frank, D., Trouet, V., Kaplan, J.O., Herzig, F., Heussner, K.-U., Wanner, H., Luterbacher, J., and Esper, J., 2011: 2500 Years of European Climate Variability and Human Susceptibility. Science, 331, 578-582.
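
    For readers unfamiliar with the method, here is a minimal generic DFA-2 implementation; it is a textbook sketch rather than the authors' code, and the white-noise test series is only a sanity check (expected exponent near 0.5).

        import numpy as np

        def dfa(x, scales, order=2):
            """Detrended fluctuation analysis (DFA-n). Returns F(s) per scale;
            the slope of log F vs log s estimates the scaling exponent alpha
            (alpha ~ H for a stationary long-term correlated series)."""
            y = np.cumsum(x - np.mean(x))                 # profile
            fluct = []
            for s in scales:
                n_seg = len(y) // s
                rms = []
                for i in range(n_seg):
                    seg = y[i*s:(i+1)*s]
                    t = np.arange(s)
                    coeffs = np.polyfit(t, seg, order)    # polynomial detrending
                    rms.append(np.mean((seg - np.polyval(coeffs, t))**2))
                fluct.append(np.sqrt(np.mean(rms)))
            return np.array(fluct)

        rng = np.random.default_rng(0)
        x = rng.normal(size=2500)                         # white noise: alpha ~ 0.5
        scales = np.array([8, 16, 32, 64, 128, 256])
        F = dfa(x, scales)
        alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
        print(f"estimated alpha = {alpha:.2f}")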

  17. Influence of various chlorine additives on the partitioning of heavy metals during low-temperature two-stage fluidized bed incineration.

    PubMed

    Peng, Tzu-Huan; Lin, Chiou-Liang

    2014-12-15

    In this study, a pilot-scale low-temperature two-stage fluidized bed incinerator was evaluated for the control of heavy metal emissions using various chlorine (Cl) additives. Artificial waste containing heavy metals was used to simulate municipal solid waste (MSW). The operating parameters considered included the first-stage combustion temperature, the gas velocity, and the kind of Cl additive. Results showed that the low-temperature two-stage fluidized bed reactor can be an effective system for the treatment of MSW because of its low NO(x), CO, HCl, and heavy metal emissions; the NO(x) and HCl emissions were decreased by 42% and 70%, respectively. Further, heavy metal emissions were reduced by bed-material adsorption and filtration in the second stage. Regarding the Cl addition, although Cl addition reduced metal capture in the first-stage sand bed, the emitted metals were effectively captured by the second-stage filtration. Regardless of the kind of additive chosen, metal emissions in the low-temperature two-stage system remained lower than in a traditional high-temperature single-stage system. The results also showed that metal emissions depend not only on the combustion temperature but also on the physicochemical properties of the different metal species. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Stage boundary recognition in the Eastern Americas realm based on rugose corals

    USGS Publications Warehouse

    Oliver, W.A.

    2000-01-01

    Most Devonian stages contain characteristic coral assemblages but these tend to be geographically and facies limited and may or may not be useful for recognising stage boundaries. Within eastern North America, corals contribute to the recognition of two boundaries: the base of the Lochkovian (Silurian-Devonian boundary) and the base of the Eifelian (Lower-Middle Devonian Series boundary).

  19. On Two-Stage Multiple Comparison Procedures When There Are Unequal Sample Sizes in the First Stage.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1984-01-01

    Two-stage multiple-comparison procedures give an exact solution to problems of power and Type I error, but they require equal sample sizes in the first stage. This paper suggests a method for evaluating the experimentwise Type I error probability when the first stage has unequal sample sizes. (Author/BW)

  20. Laser-driven three-stage heavy-ion acceleration from relativistic laser-plasma interaction.

    PubMed

    Wang, H Y; Lin, C; Liu, B; Sheng, Z M; Lu, H Y; Ma, W J; Bin, J H; Schreiber, J; He, X T; Chen, J E; Zepf, M; Yan, X Q

    2014-01-01

    A three-stage heavy ion acceleration scheme for generation of high-energy quasimonoenergetic heavy ion beams is investigated using two-dimensional particle-in-cell simulation and analytical modeling. The scheme is based on the interaction of an intense linearly polarized laser pulse with a compound two-layer target (a front heavy ion layer + a second light ion layer). We identify that, under appropriate conditions, the heavy ions preaccelerated by a two-stage acceleration process in the front layer can be injected into the light ion shock wave in the second layer for a further third-stage acceleration. These injected heavy ions are not influenced by the screening effect from the light ions, and an isolated high-energy heavy ion beam with relatively low-energy spread is thus formed. Two-dimensional particle-in-cell simulations show that ∼100 MeV/u quasimonoenergetic Fe²⁴⁺ beams can be obtained by linearly polarized laser pulses at intensities of 1.1×10²¹ W/cm².

  1. Experimental and modeling study of a two-stage pilot scale high solid anaerobic digester system.

    PubMed

    Yu, Liang; Zhao, Quanbao; Ma, Jingwei; Frear, Craig; Chen, Shulin

    2012-11-01

    This study established a comprehensive model to describe a new two-stage high-solid anaerobic digestion (HSAD) system designed for the highly degradable organic fraction of municipal solid waste (OFMSW). The HSAD reactor, as the first stage, naturally separated into two zones due to biogas flotation and the low specific gravity of the solid waste: the solid waste was retained in the upper zone while only liquid leachate resided in the lower zone. Continuous stirred-tank reactor (CSTR) and advective-diffusive reactor (ADR) models were constructed in series to describe the whole system, with Anaerobic Digestion Model No. 1 (ADM1) used as the reaction kinetics and incorporated into each reactor module. Compared with the experimental data, the simulation results indicated that the model predicted the pH, volatile fatty acid (VFA) concentration and biogas production well. Copyright © 2012 Elsevier Ltd. All rights reserved.

  2. Simulation of multi-stage nonlinear bone remodeling induced by fixed partial dentures of different configurations: a comparative clinical and numerical study.

    PubMed

    Liao, Zhipeng; Yoda, Nobuhiro; Chen, Junning; Zheng, Keke; Sasaki, Keiichi; Swain, Michael V; Li, Qing

    2017-04-01

    This paper aimed to develop a clinically validated bone remodeling algorithm by integrating bone's dynamic properties in a multi-stage fashion based on a four-year clinical follow-up of implant treatment. The configurational effects of fixed partial dentures (FPDs) were explored using a multi-stage remodeling rule. Three-dimensional real-time occlusal loads during maximum voluntary clenching were measured with a piezoelectric force transducer and were incorporated into a computerized tomography-based finite element mandibular model. Virtual X-ray images were generated based on simulation and statistically correlated with clinical data using linear regressions. The strain energy density-driven remodeling parameters were regulated over the time frame considered. A linear single-stage bone remodeling algorithm, with a single set of constant remodeling parameters, was found to poorly fit with clinical data through linear regression (low [Formula: see text] and R), whereas a time-dependent multi-stage algorithm better simulated the remodeling process (high [Formula: see text] and R) against the clinical results. The three-implant-supported and distally cantilevered FPDs presented noticeable and continuous bone apposition, mainly adjacent to the cervical and apical regions. The bridged and mesially cantilevered FPDs showed bone resorption or no visible bone formation in some areas. Time-dependent variation of bone remodeling parameters is recommended to better correlate remodeling simulation with clinical follow-up. The position of FPD pontics plays a critical role in mechanobiological functionality and bone remodeling. Caution should be exercised when selecting the cantilever FPD due to the risk of overloading bone resorption.

  3. Return volatility interval analysis of stock indexes during a financial crash

    NASA Astrophysics Data System (ADS)

    Li, Wei-Shen; Liaw, Sy-Sang

    2015-09-01

    We investigate the intervals between return volatilities above a certain threshold q for data sets from 10 countries during the 2008/2009 global financial crisis, and divide these data into several stages according to stock price tendencies: a plunging stage (stage 1), a fluctuating or rebounding stage (stage 2) and a soaring stage (stage 3). For different thresholds q, the cumulative distribution function always exhibits a power-law tail. We find that the absolute value of the power-law exponent is lowest in stage 1 for various types of markets, and increases monotonically from stage 1 to stage 3 in emerging markets. The fractal-dimension properties of the return volatility interval series provide some surprising results: developed markets show strong persistence that transforms to weaker correlation in the plunging and soaring stages, whereas emerging markets fail to exhibit such a transformation and instead show constant-correlation behavior with the recurrence of extreme return volatility in the corresponding stages during a crash. We believe this long-memory property found in the recurrence-interval series, especially for developed markets, plays an important role in volatility clustering.
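
    To make the studied quantity concrete, the sketch below computes recurrence intervals between absolute returns exceeding a threshold q and crudely estimates the tail exponent of their distribution; the synthetic Student-t returns and the two-sigma threshold are illustrative choices, not the paper's data.

        import numpy as np

        def return_intervals(returns, q):
            """Waiting times (in steps) between absolute returns exceeding q."""
            idx = np.flatnonzero(np.abs(returns) > q)
            return np.diff(idx)

        rng = np.random.default_rng(1)
        r = rng.standard_t(df=3, size=100_000)     # heavy-tailed toy return series
        q = 2.0 * r.std()
        tau = return_intervals(r, q)

        # Crude tail-exponent estimate from the empirical survival function.
        tau_sorted = np.sort(tau)
        surv = 1.0 - np.arange(1, len(tau_sorted) + 1) / len(tau_sorted)
        mask = (tau_sorted > 10) & (surv > 0)
        slope = np.polyfit(np.log(tau_sorted[mask]), np.log(surv[mask]), 1)[0]
        print(f"mean interval {tau.mean():.1f}, tail exponent ~ {-slope:.2f}")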

  4. Fate of dissolved organic nitrogen in two stage trickling filter process.

    PubMed

    Simsek, Halis; Kasi, Murthy; Wadhawan, Tanush; Bye, Christopher; Blonigen, Mark; Khan, Eakalak

    2012-10-15

    Dissolved organic nitrogen (DON) represents a significant portion of the nitrogen in the final effluent of wastewater treatment plants (WWTPs). The biodegradable portion of DON (BDON) can support algal growth and/or consume dissolved oxygen in receiving waters. The fate of DON and BDON has not previously been studied for trickling filter WWTPs. DON and BDON data were collected along the treatment train of a WWTP with a two-stage trickling filter process. DON concentrations in the influent and effluent were 27% and 14% of total dissolved nitrogen (TDN), respectively. The plant removed about 62% of the influent DON and 72% of the influent BDON, mainly in the trickling filters. The final effluent BDON averaged 1.8 mg/L. BDON was found to be between 51% and 69% of the DON in raw wastewater and after the various treatment units. The fate of DON and BDON through the two-stage trickling filter treatment plant was modeled: the BioWin v3.1 model was successfully applied to simulate ammonia, nitrite, nitrate, TDN, DON and BDON concentrations along the treatment train. The maximum growth rates for ammonia-oxidizing bacteria (AOB) and nitrite-oxidizing bacteria, together with the AOB half-saturation constant, influenced the ammonia and nitrate outputs, while the hydrolysis and ammonification rates influenced all of the nitrogen species in the model output, including BDON. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Extraction and LOD control of colored interval volumes

    NASA Astrophysics Data System (ADS)

    Miyamura, Hiroko N.; Takeshima, Yuriko; Fujishiro, Issei; Saito, Takafumi

    2005-03-01

    An interval volume serves as a generalized isosurface and represents a three-dimensional subvolume for which the associated scalar field values lie within a user-specified closed interval. In general, it is not an easy task for novices to specify the scalar field interval corresponding to their regions of interest (ROIs). In order to extract interval volumes from which desirable geometric features can be mined effectively, we propose a suggestive technique which extracts interval volumes automatically based on a global examination of the field contrast structure. Also proposed here is a simplification scheme for decimating the resultant triangle patches to realize efficient transmission and rendering of large-scale interval volumes. Color distributions as well as geometric features are taken into account to select the best edges to be collapsed. In addition, when a user wants to selectively display and analyze the original dataset, the simplified dataset is restored to the original quality. Several simulated and acquired datasets are used to demonstrate the effectiveness of the present methods.

  6. Combining area-based and individual-level data in the geostatistical mapping of late-stage cancer incidence.

    PubMed

    Goovaerts, Pierre

    2009-01-01

    This paper presents a geostatistical approach to incorporate individual-level data (e.g. patient residences) and area-based data (e.g. rates recorded at census tract level) into the mapping of late-stage cancer incidence, with an application to breast cancer in three Michigan counties. Spatial trends in cancer incidence are first estimated from census data using area-to-point binomial kriging. This prior model is then updated using indicator kriging and individual-level data. Simulation studies demonstrate the benefits of this two-step approach over methods (kernel density estimation and indicator kriging) that process only residence data.

  7. Multifunctional two-stage riser fluid catalytic cracking process.

    PubMed

    Zhang, Jinhong; Shan, Honghong; Chen, Xiaobo; Li, Chunyi; Yang, Chaohe

    This paper describes how some shortcomings of the conventional fluid catalytic cracking (FCC) process were discovered, and presents the two-stage riser (TSR) FCC process proposed to decrease dry gas and coke yields and increase light oil yield, which has been successfully applied in 12 industrial units. Furthermore, the multifunctional two-stage riser (MFT) FCC process, proposed on the basis of the TSR FCC process, is described. It optimizes the reaction conditions for fresh feedstock and cycle oil catalytic cracking separately, couples cycle oil cracking with light FCC naphtha upgrading in the second-stage riser, and uses a specially designed reactor to further reduce the olefin content of the gasoline. Pilot tests showed that it can further improve product quality, increase the diesel yield, and enhance the conversion of heavy oil.

  8. Treatment of corn ethanol distillery wastewater using two-stage anaerobic digestion.

    PubMed

    Ráduly, B; Gyenge, L; Szilveszter, Sz; Kedves, A; Crognale, S

    In this study the mesophilic two-stage anaerobic digestion (AD) of corn bioethanol distillery wastewater is investigated in laboratory-scale reactors. Two-stage AD technology separates the different sub-processes of the AD into two distinct reactors, enabling the use of optimal conditions for the different microbial consortia involved in the different process phases, and thus allowing for higher applicable organic loading rates (OLRs), shorter hydraulic retention times (HRTs) and better conversion rates of the organic matter, as well as higher methane content of the produced biogas. In our experiments the reactors were operated in semi-continuous phase-separated mode. A specific methane production of 1,092 mL/(L·d) was reached at an OLR of 6.5 g TCOD/(L·d) (TCOD: total chemical oxygen demand) and a total HRT of 21 days (5.7 days in the first-stage and 15.3 days in the second-stage reactor). Although the methane concentration in the second-stage reactor was very high (78.9%), the two-stage AD outperformed the reference single-stage AD (conducted at the same reactor loading rate and retention time) by only a small margin in terms of volumetric methane production rate. This makes it questionable whether the higher methane content of the biogas counterbalances the added complexity of two-stage digestion.

  9. Number of pins in two-stage stratified sampling for estimating herbage yield

    Treesearch

    William G. O' Regan; C. Eugene Conrad

    1975-01-01

    In a two-stage stratified procedure for sampling herbage yield, plots are stratified by a pin frame in stage one, and clipped. In stage two, clippings from selected plots are sorted, dried, and weighed. Sample size and distribution of plots between the two stages are determined by equations. A way to compute the effect of number of pins on the variance of estimated...

  10. Two-Stage Parameter Estimation in Confined Costal Aquifers

    NASA Astrophysics Data System (ADS)

    Hsu, N.

    2003-12-01

    Using field observations of the tidal level and of the piezometric head at an observation well, this research develops a two-stage parameter estimation approach for estimating the hydraulic conductivity (T) and storage coefficient (S) of a confined aquifer in a coastal area. The y-axis coincides with the coastline while the x-axis extends from zero to infinity, so the domain of the aquifer is assumed to be a half plane. Other assumptions include homogeneity, isotropy and constant thickness of the aquifer, and a zero initial head distribution. In the first stage, fluctuations of the tidal level and of the piezometric head at the observation well are recorded simultaneously without the influence of pumping. Fourier spectral analysis is used to find the autocorrelation and cross-correlation of the two sets of observations as well as the phase-versus-frequency function; the tidal efficiency and time delay can then be computed, and the analytical solution of Ferris (1951) is used to compute the ratio T/S. In the second stage, the system is stressed by pumping while observations of the tidal level and piezometric head at the observation well are again collected simultaneously. The effect of the tide on the observation well without pumping is computed from the Ferris (1951) solution using the identified ratio T/S and is subtracted from the piezometric head observations to obtain the updated piezometric head. The Theis equation, coupled with the method of images, is then applied to the updated piezometric head to obtain the T and S values. The developed approach is applied to a hypothetical aquifer; the results show convergence of the approach, and its robustness is demonstrated using noise-corrupted observations.
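
    For the first stage, the Ferris (1951) solution ties the tidal efficiency (amplitude damping) and the time lag to the ratio T/S. The sketch below inverts both relations; the well distance, tidal period, and observation values are made up purely to illustrate the algebra.

        import math

        def ts_ratio_from_efficiency(tidal_efficiency, x, t0):
            """T/S from amplitude damping: TE = exp(-x * sqrt(pi*S/(T*t0)))."""
            return math.pi * x**2 / (t0 * math.log(tidal_efficiency)**2)

        def ts_ratio_from_lag(time_lag, x, t0):
            """T/S from phase delay: lag = x * sqrt(t0*S/(4*pi*T))."""
            return t0 * x**2 / (4.0 * math.pi * time_lag**2)

        x = 150.0          # distance from coastline to observation well (m)
        t0 = 12.42 * 3600  # dominant (M2) tidal period (s)
        print(ts_ratio_from_efficiency(0.42, x, t0))   # hypothetical efficiency
        print(ts_ratio_from_lag(3.1 * 3600, x, t0))    # hypothetical 3.1-h lag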

  11. Effects of a commonly used glyphosate-based herbicide formulation on early developmental stages of two anuran species.

    PubMed

    Wagner, Norman; Müller, Hendrik; Viertel, Bruno

    2017-01-01

    Environmental contamination, especially due to the increasing use of pesticides, is suggested to be one of six main reasons for the global amphibian decline. Adverse effects of glyphosate-based herbicides on amphibians have already been discussed in several studies, with differing conclusions, especially regarding sublethal effects at environmentally relevant concentrations. Therefore, we studied the acute toxic effects (mortality, growth, and morphological changes) of the commonly used glyphosate-based herbicide formulation Roundup® UltraMax on early aquatic developmental stages of two anuran species with different larval types (obligate vs. facultative filtering suspension feeders), the African clawed frog (Xenopus laevis) and the Mediterranean painted frog (Discoglossus pictus). While X. laevis is an established anuran model organism in amphibian toxicological studies, we aim to establish D. pictus as another model for species with facultatively filtering larvae. A special focus of the present study lies on malformations in X. laevis embryos, which were investigated using histological preparations. In general, embryos and larvae of X. laevis reacted more sensitively with respect to lethal effects than early developmental stages of D. pictus. It was suggested that the different morphology of their filter apparatus and the higher volume of water pumped through the buccopharynx of X. laevis larvae lead to higher exposure to the formulation. The test substance induced lethal effects in D. pictus larvae similar to those in the teleost standard test organism used in pesticide approval, the rainbow trout (Oncorhynchus mykiss), whereas embryos of both species are apparently more tolerant and X. laevis larvae, conversely, about two times more sensitive. In both species, early larvae always reacted significantly more sensitively than embryos. Exposure to the test substance increased malformation rates in embryos of both species in a concentration

  12. Computer simulation on the cooperation of functional molecules during the early stages of evolution.

    PubMed

    Ma, Wentao; Hu, Jiming

    2012-01-01

    It is very likely that life began with some RNA (or RNA-like) molecules, self-replicating by base-pairing and exhibiting enzyme-like functions that favored the self-replication. Different functional molecules may have emerged by favoring their own self-replication at different aspects. Then, a direct route towards complexity/efficiency may have been through the coexistence/cooperation of these molecules. However, the likelihood of this route remains quite unclear, especially because the molecules would be competing for limited common resources. By computer simulation using a Monte-Carlo model (with "micro-resolution" at the level of nucleotides and membrane components), we show that the coexistence/cooperation of these molecules can occur naturally, both in a naked form and in a protocell form. The results of the computer simulation also lead to quite a few deductions concerning the environment and history in the scenario. First, a naked stage (with functional molecules catalyzing template-replication and metabolism) may have occurred early in evolution but required high concentration and limited dispersal of the system (e.g., on some mineral surface); the emergence of protocells enabled a "habitat-shift" into bulk water. Second, the protocell stage started with a substage of "pseudo-protocells", with functional molecules catalyzing template-replication and metabolism, but still missing the function involved in the synthesis of membrane components, the emergence of which would lead to a subsequent "true-protocell" substage. Third, the initial unstable membrane, composed of prebiotically available fatty acids, should have been superseded quite early by a more stable membrane (e.g., composed of phospholipids, like modern cells). Additionally, the membrane-takeover probably occurred at the transition of the two substages of the protocells. The scenario described in the present study should correspond to an episode in early evolution, after the emergence of single

  13. Stabilizing effect of cannibalism in a two stages population model.

    PubMed

    Rault, Jonathan; Benoît, Eric; Gouzé, Jean-Luc

    2013-03-01

    In this paper we build a prey-predator model with discrete weight structure for the predator. The model conserves the number of individuals and the biomass, and both growth and reproduction of the predator depend on the food ingested. Moreover, the model allows cannibalism: the predator can eat the prey but also other predators. We focus on a simple version with two weight classes or stages (larvae and adults) and present some general mathematical results. In the last part, we assume that the dynamics of the prey is fast compared to that of the predator in order to go further in the analysis, and we conclude that under some conditions cannibalism can stabilize the system: more precisely, an equilibrium that is unstable without cannibalism becomes almost globally stable with some cannibalism. Some numerical simulations are presented to illustrate this result.
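
    A minimal numerical illustration of a two-stage predator with cannibalism is sketched below; the functional forms, rate constants, and the explicit Euler scheme are invented for illustration and are not the model of the paper.

        import numpy as np

        def step(state, dt, c=0.3):
            """One Euler step of a toy prey/larvae/adult model with cannibalism rate c.
            Adults eat prey and, at rate c, larvae; all ingestion fuels reproduction."""
            prey, larvae, adults = state
            intake = 0.8 * prey * adults + c * larvae * adults
            d_prey = 1.0 * prey * (1 - prey) - 0.8 * prey * adults
            d_larvae = 0.5 * intake - 0.2 * larvae - c * larvae * adults  # births - maturation - cannibalism
            d_adults = 0.2 * larvae - 0.1 * adults                        # maturation - mortality
            return state + dt * np.array([d_prey, d_larvae, d_adults])

        state = np.array([0.5, 0.1, 0.1])
        for _ in range(20000):
            state = step(state, dt=0.01)
        print("long-run state:", state)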

  14. Diagnosis of Persistent Infection in Prosthetic Two-Stage Exchange: Evaluation of the Effect of Sonication on Antibiotic Release from Bone Cement Spacers.

    PubMed

    Mariaux, Sandrine; Furustrand Tafin, Ulrika; Borens, Olivier

    2018-01-01

    Introduction: When treating periprosthetic joint infection with a two-stage procedure, antibiotic-impregnated spacers can be used in the interval between prosthesis removal and reimplantation. In our experience, cultures of sonicated spacers are most often negative. The objective of the study was to assess whether sonication causes an elution of antibiotics, leading to antibiotic concentrations in the sonication fluid high enough to inhibit bacterial growth and thus cause false-negative cultures. Methods: A prospective monocentric study was performed from September 2014 to March 2016. Inclusion criteria were a two-stage procedure for prosthetic infection and the patient's agreement to participate in the study. Spacers were made of gentamicin-containing cement to which tobramycin and vancomycin were added. Antibiotic concentrations in the sonication fluid were determined by liquid chromatography-mass spectrometry (LC-MS). Results: 30 patients were identified (15 hip, 14 knee and 1 ankle arthroplasties). No cases of culture-positive sonicated spacer fluid were observed in our series. In the sonication fluid, median concentrations of 13.2 µg/ml, 392 µg/ml and 16.6 µg/ml were detected for vancomycin, tobramycin and gentamicin, respectively. According to the European Committee on Antimicrobial Susceptibility Testing (EUCAST), these concentrations released from the cement spacer during sonication are higher than the minimal inhibitory concentrations (MICs) for most bacteria relevant in prosthetic joint infections. Conclusion: Spacer sonication cultures remained sterile in all of our cases. The elevated concentrations of antibiotics released during sonication could partly explain the negative cultures of sonicated spacers. The absence of an antibiotic-free interval during the two-stage exchange may also contribute to false-negative spacer sonication cultures.

  15. Performance evaluation of an agent-based occupancy simulation model

    DOE PAGES

    Luo, Xuan; Lam, Khee Poh; Chen, Yixing; ...

    2017-01-17

    Occupancy is an important factor driving building performance. Static and homogeneous occupant schedules, commonly used in building performance simulation, contribute to issues such as performance gaps between simulated and measured energy use in buildings. Stochastic occupancy models have been recently developed and applied to better represent spatial and temporal diversity of occupants in buildings. However, there is very limited evaluation of the usability and accuracy of these models. This study used measured occupancy data from a real office building to evaluate the performance of an agent-based occupancy simulation model: the Occupancy Simulator. The occupancy patterns of various occupant types were first derived from the measured occupant schedule data using statistical analysis. Then the performance of the simulation model was evaluated and verified based on (1) whether the distribution of observed occupancy behavior patterns follows the theoretical ones included in the Occupancy Simulator, and (2) whether the simulator can reproduce a variety of occupancy patterns accurately. Results demonstrated the feasibility of applying the Occupancy Simulator to simulate a range of occupancy presence and movement behaviors for regular types of occupants in office buildings, and to generate stochastic occupant schedules at the room and individual occupant levels for building performance simulation. For future work, model validation is recommended, which includes collecting and using detailed interval occupancy data of all spaces in an office building to validate the simulated occupant schedules from the Occupancy Simulator.

  17. Fault diagnosis based on continuous simulation models

    NASA Technical Reports Server (NTRS)

    Feyock, Stefan

    1987-01-01

    The results of an investigation of techniques for using continuous simulation models as a basis for reasoning about physical systems are described, with emphasis on the diagnosis of system faults. It is assumed that a continuous simulation model of the properly operating system is available. Malfunctions are diagnosed by posing the question: how can the model be made to behave like the observed, malfunctioning system? The adjustments that must be made to the model to produce the observed behavior usually provide definitive clues to the nature of the malfunction. A novel application of Dijkstra's weakest-precondition predicate transformer is used to derive the preconditions for producing the required model behavior. To minimize the size of the search space, an envisionment generator based on interval mathematics was developed. In addition to its intended application, the ability to generate qualitative state spaces automatically from quantitative simulations proved to be a fruitful avenue of investigation in its own right. Implementations of the Dijkstra transform and the envisionment generator are reproduced in the Appendix.
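
    The envisionment generator mentioned above rests on interval mathematics, i.e., propagating ranges instead of point values through the model and abstracting the result to a qualitative state. The sketch below is a generic illustration of that idea, not the tool described in the report.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Interval:
            lo: float
            hi: float

            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)

            def __mul__(self, other):
                products = (self.lo * other.lo, self.lo * other.hi,
                            self.hi * other.lo, self.hi * other.hi)
                return Interval(min(products), max(products))

            def sign(self):
                """Qualitative state abstracted from the quantitative range."""
                if self.hi < 0: return "-"
                if self.lo > 0: return "+"
                return "spans zero"

        # Propagate parameter uncertainty through a toy model term q = k * (p1 + p2).
        k, p1, p2 = Interval(0.9, 1.1), Interval(-2.0, -1.5), Interval(0.5, 1.0)
        q = k * (p1 + p2)
        print(q, "->", q.sign())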

  18. [Investigation of reference intervals of blood gas and acid-base analysis assays in China].

    PubMed

    Zhang, Lu; Wang, Wei; Wang, Zhiguo

    2015-10-01

    system groups and between most of two instrument system groups in all assays. The differences in the reference intervals of blood gas and acid-base analysis assays used in Chinese laboratories are moderate, which is better than in other specialties of clinical laboratories.

  19. The Two-Word Stage: Motivated by Linguistic or Cognitive Constraints?

    PubMed Central

    Berk, Stephanie; Lillo-Martin, Diane

    2012-01-01

    Child development researchers often discuss a “two-word” stage during language acquisition. However, there is still debate over whether the existence of this stage reflects primarily cognitive or linguistic constraints. Analyses of longitudinal data from two Deaf children, Mei and Cal, not exposed to an accessible first language (American Sign Language - ASL) until the age of 6 years, suggest that a linguistic constraint is observed when cognition is relatively spared. These older children acquiring a first language after delayed exposure exhibit aspects of a two-word stage of language development. Results from intelligence assessments, achievement tests, drawing tasks, and qualitative cognitive analyses show that Mei and Cal are at least of average intelligence and ability. However, results from language analyses clearly show differences from both age peers and younger native signers in the early two-word stage, providing new insights into the nature of this phase of language development. PMID:22475876

  20. Two-stage reimplantation for treating prosthetic shoulder infections.

    PubMed

    Sabesan, Vani J; Ho, Jason C; Kovacevic, David; Iannotti, Joseph P

    2011-09-01

    Two-stage reimplantation for prosthetic joint infection reportedly has the lowest risk for recurrent infection. Most studies to date have evaluated revision surgery for infection using an anatomic prosthetic. As compared with anatomic prostheses, reverse total shoulder arthroplasty is reported to have a higher rate of infection. We determined reinfection rates, functional improvement, types and rates of complications, and influence of rotator cuff tissue on function for two-stage reimplantation for prosthetic joint infection treated with reverse shoulder arthroplasty. We retrospectively reviewed 27 patients treated with a two-stage reimplantation for prosthetic shoulder infection using a uniform protocol for management of infection; of these, 17 had reverse shoulder arthroplasty at second-stage surgery. Types of organisms cultured, recurrence rates, complications, function, and radiographic followup were reviewed for all patients. One of the 17 patients had recurrence of infection. The mean (± SD) Penn shoulder scores for patients treated with reverse shoulder arthroplasty improved from 24.9 ± 22.3 to 66.4 ± 20.8. The average motion at last followup was 123° ± 33° of forward flexion and 26° ± 8° of external rotation in patients treated with a reverse shoulder arthroplasty. The major complication rate was 35% in reverse shoulder arthroplasty, with five dislocations and one reinfection. There was no difference in final Penn score between patients with and without external rotation weakness. Shoulder function and pain improved in patients treated with a second-stage reimplantation of a reverse prosthesis and the reinfection rate was low. Level IV, case series. See Guidelines for Authors for a complete description of levels of evidence.

  1. A modified varying-stage adaptive phase II/III clinical trial design.

    PubMed

    Dong, Gaohong; Vandemeulebroecke, Marc

    2016-07-01

    Conventionally, adaptive phase II/III clinical trials are carried out with a strict two-stage design. Recently, a varying-stage adaptive phase II/III clinical trial design has been developed, in which, following the first stage, an intermediate stage can be adaptively added to obtain more data so that a more informative decision can be made; the number of further investigational stages is thus determined from the data accumulated up to the interim analysis. That design considers two plausible study endpoints, with one of them initially designated as the primary endpoint; based on interim results, the other endpoint can be switched to primary. However, in many therapeutic areas the primary study endpoint is well established. We therefore modify the design to consider a single study endpoint, so that it may be more readily applicable to real clinical trial designs. Our simulations show that, like the original design, this modified design controls the Type I error rate, and that design parameters such as the threshold probability for the two-stage setting and the alpha allocation ratio between the two-stage and three-stage settings have a great impact on the design characteristics. However, the modified design requires a larger sample size for the initial stage, and the probability of futility becomes much higher as the threshold probability for the two-stage setting gets smaller. Copyright © 2016 John Wiley & Sons, Ltd.

  2. Tunable and switchable all-fiber comb filter using a PBS-based two-stage cascaded Mach-Zehnder interferometer

    NASA Astrophysics Data System (ADS)

    Luo, Zhi-Chao; Luo, Ai-Ping; Xu, Wen-Cheng

    2011-08-01

    We propose and demonstrate a novel tunable and switchable all-fiber comb filter employing a polarization beam splitter (PBS)-based two-stage cascaded Mach-Zehnder (M-Z) interferometer. The proposed comb filter consists of a rotatable polarizer, a fiber PBS, a non-3-dB coupler and a 3-dB coupler. By simply adjusting the polarization state of the input light, both channel-spacing tuning and wavelength switching (interleaving) can be efficiently obtained. The theoretical analysis is verified by the experimental results: a comb filter with the channel spacing tunable from 0.18 nm to 0.36 nm and with wavelength-switching functionality is experimentally demonstrated.
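
    As a rough idealization of how a two-stage cascaded MZI produces a comb, the sketch below multiplies the responses of two stages whose path imbalances differ by a factor of two; it ignores the PBS-based polarization control that provides the tuning and switching in the actual device, and the path imbalance and effective index are assumed values.

        import numpy as np

        C = 3e8  # speed of light (m/s)

        def mzi_comb(wavelength_m, delta_l=4e-3, n_eff=1.45):
            """Idealized two-stage cascaded MZI intensity response; the second
            stage has twice the path imbalance of the first (lattice filter)."""
            nu = C / wavelength_m
            phi = 2 * np.pi * nu * n_eff * delta_l / C   # phase of the first stage
            return np.cos(phi / 2) ** 2 * np.cos(phi) ** 2

        wl = np.linspace(1549e-9, 1551e-9, 20001)
        t = mzi_comb(wl)
        # Comb spacing near 1550 nm is roughly lambda^2 / (n_eff * delta_l).
        print("predicted spacing ~", (1550e-9) ** 2 / (1.45 * 4e-3) * 1e9, "nm")
        peaks = np.sum((t[1:-1] > t[:-2]) & (t[1:-1] > t[2:]) & (t[1:-1] > 0.5))
        print("main passbands in the 2 nm window:", int(peaks))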

  3. Final Report on Two-Stage Fast Spectrum Fuel Cycle Options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Won Sik; Lin, C. S.; Hader, J. S.

    2016-01-30

    This report presents the performance characteristics of two “two-stage” fast spectrum fuel cycle options proposed to enhance uranium resource utilization and to reduce nuclear waste generation. One is a two-stage fast spectrum fuel cycle option of continuous recycle of plutonium (Pu) in a fast reactor (FR) and subsequent burning of minor actinides (MAs) in an accelerator-driven system (ADS). The first stage is a sodium-cooled FR fuel cycle starting with low-enriched uranium (LEU) fuel; at the equilibrium cycle, the FR is operated using the recovered Pu and natural uranium without supporting LEU. Pu and uranium (U) are co-extracted from the discharged fuel and recycled in the first stage, and the recovered MAs are sent to the second stage. The second stage is a sodium-cooled ADS in which MAs are burned in an inert matrix fuel form. The discharged fuel of ADS is reprocessed, and all the recovered heavy metals (HMs) are recycled into the ADS. The other is a two-stage FR/ADS fuel cycle option with MA targets loaded in the FR. The recovered MAs are not directly sent to ADS, but partially incinerated in the FR in order to reduce the amount of MAs to be sent to the ADS. This is a heterogeneous recycling option of transuranic (TRU) elements.

  4. Microvascular anastomosis simulation using a chicken thigh model: Interval versus massed training.

    PubMed

    Schoeff, Stephen; Hernandez, Brian; Robinson, Derek J; Jameson, Mark J; Shonka, David C

    2017-11-01

    To compare the effectiveness of massed versus interval training when teaching otolaryngology residents microvascular suturing on a validated microsurgical model. Otolaryngology residents were placed into interval (n = 7) or massed (n = 7) training groups. The interval group performed three separate 30-minute practice sessions separated by at least 1 week, and the massed group performed a single 90-minute practice session. Both groups viewed a video demonstration and recorded a pretest prior to the first training session. A post-test was administered following the last practice session. At an academic medical center, 14 otolaryngology residents were assigned using stratified randomization to interval or massed training. Blinded evaluators graded performance using a validated microvascular Objective Structured Assessment of Technical Skill tool. The tool is comprised of two major components: task-specific score (TSS) and global rating scale (GRS). Participants also received pre- and poststudy surveys to compare subjective confidence in multiple aspects of microvascular skill acquisition. Overall, all residents showed increased TSS and GRS on post- versus pretest. After completion of training, the interval group had a statistically significant increase in both TSS and GRS, whereas the massed group's increase was not significant. Residents in both groups reported significantly increased levels of confidence after completion of the study. Self-directed learning using a chicken thigh artery model may benefit microsurgical skills, competence, and confidence for resident surgeons. Interval training results in significant improvement in early development of microvascular anastomosis skills, whereas massed training does not. NA. Laryngoscope, 127:2490-2494, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  5. Two-way coupled SPH and particle level set fluid simulation.

    PubMed

    Losasso, Frank; Talton, Jerry; Kwatra, Nipun; Fedkiw, Ronald

    2008-01-01

    Grid-based methods have difficulty resolving features on or below the scale of the underlying grid. Although adaptive methods (e.g. RLE, octrees) can alleviate this to some degree, separate techniques are still required for simulating small-scale phenomena such as spray and foam, especially since these more diffuse materials typically behave quite differently than their denser counterparts. In this paper, we propose a two-way coupled simulation framework that uses the particle level set method to efficiently model dense liquid volumes and a smoothed particle hydrodynamics (SPH) method to simulate diffuse regions such as sprays. Our novel SPH method allows us to simulate both dense and diffuse water volumes, fully incorporates the particles that are automatically generated by the particle level set method in under-resolved regions, and allows for two-way mixing between dense SPH volumes and grid-based liquid representations.
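
    As background on the SPH side of the coupling, the fragment below computes summation densities with the standard cubic spline smoothing kernel; it is a generic SPH building block, not the paper's solver, and the particle configuration is arbitrary.

        import numpy as np

        def cubic_spline_kernel(r, h):
            """Standard 3-D cubic spline smoothing kernel W(r, h)."""
            q = r / h
            sigma = 1.0 / (np.pi * h**3)
            w = np.where(q < 1, 1 - 1.5*q**2 + 0.75*q**3,
                np.where(q < 2, 0.25 * (2 - q)**3, 0.0))
            return sigma * w

        def sph_density(positions, masses, h):
            """Summation density: rho_i = sum_j m_j W(|x_i - x_j|, h)."""
            diff = positions[:, None, :] - positions[None, :, :]
            r = np.linalg.norm(diff, axis=-1)
            return (masses[None, :] * cubic_spline_kernel(r, h)).sum(axis=1)

        pos = np.random.default_rng(2).uniform(0, 0.1, size=(200, 3))   # 10 cm cube
        m = np.full(200, 1000.0 * 0.1**3 / 200)      # water mass split over particles
        # Approaches ~1000 kg/m^3 for interior particles; boundary deficiency lowers the mean.
        print(sph_density(pos, m, h=0.02).mean())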

  6. Signal-on electrochemiluminescence biosensor for microRNA-319a detection based on two-stage isothermal strand-displacement polymerase reaction.

    PubMed

    Wang, Minghui; Zhou, Yunlei; Yin, Huanshun; Jiang, Wenjing; Wang, Haiyan; Ai, Shiyun

    2018-06-01

    MicroRNAs play a crucial role in regulating gene expression in organisms, so it is necessary to develop efficient methods for the sensitive and specific detection of microRNA. Herein, a signal-on electrochemiluminescence (ECL) biosensor was fabricated for microRNA-319a detection based on a two-stage isothermal strand-displacement polymerase reaction (ISDPR). In the presence of the target microRNA, large amounts of trigger DNA are generated by the first ISDPR. The trigger DNA and the primer then hybridize simultaneously with the hairpin probe to open the stem of the probe, upon which the ECL signal is emitted. In the presence of phi29 DNA polymerase and dNTPs, the trigger DNA is displaced to initiate a new cycle, which constitutes the second ISDPR. Owing to this two-stage amplification, the method achieved excellent detection sensitivity, with a low detection limit of 0.14 fM. Moreover, the applicability of the developed method was demonstrated by detecting the change in microRNA-319a content in the leaves of rice seedlings after the rice seeds were incubated with the chemical mutagen ethyl methanesulfonate. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Construction of prediction intervals for Palmer Drought Severity Index using bootstrap

    NASA Astrophysics Data System (ADS)

    Beyaztas, Ufuk; Bickici Arikan, Bugrayhan; Beyaztas, Beste Hamiye; Kahya, Ercan

    2018-04-01

    In this study, we propose an approach based on the residual-based bootstrap method to obtain valid prediction intervals using monthly, short-term (three-month) and mid-term (six-month) drought observations. The effects of the North Atlantic and Arctic Oscillation indexes on the constructed prediction intervals are also examined. The performance of the proposed approach is evaluated for the Palmer Drought Severity Index (PDSI) obtained from the Konya closed basin located in Central Anatolia, Turkey. The finite-sample properties of the proposed method are further illustrated by an extensive simulation study. Our results reveal that the proposed approach is capable of producing valid prediction intervals for future PDSI values.
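
    A minimal residual-based bootstrap for a one-step prediction interval is sketched below; the AR(1) working model, the 90% level, and the synthetic series are assumptions chosen for the example rather than the paper's exact procedure.

        import numpy as np

        def ar1_bootstrap_pi(x, n_boot=2000, level=0.90, rng=None):
            """One-step-ahead prediction interval for an AR(1) fit to x,
            via resampling of centered fit residuals."""
            rng = rng or np.random.default_rng(0)
            x0, x1 = x[:-1], x[1:]
            phi = np.dot(x1 - x.mean(), x0 - x.mean()) / np.dot(x0 - x.mean(), x0 - x.mean())
            c = x.mean() * (1 - phi)
            resid = x1 - (c + phi * x0)
            resid -= resid.mean()
            preds = c + phi * x[-1] + rng.choice(resid, size=n_boot, replace=True)
            lo, hi = np.quantile(preds, [(1 - level) / 2, (1 + level) / 2])
            return (c + phi * x[-1]), (lo, hi)

        rng = np.random.default_rng(3)
        series = np.zeros(300)
        for t in range(1, 300):                       # toy PDSI-like AR(1) series
            series[t] = 0.8 * series[t-1] + rng.normal(scale=0.5)
        point, (lo, hi) = ar1_bootstrap_pi(series)
        print(f"forecast {point:.2f}, 90% PI [{lo:.2f}, {hi:.2f}]")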

  8. Confidence intervals for distinguishing ordinal and disordinal interactions in multiple regression.

    PubMed

    Lee, Sunbok; Lei, Man-Kit; Brody, Gene H

    2015-06-01

    Distinguishing between ordinal and disordinal interactions in multiple regression is useful in testing many interesting theoretical hypotheses. Because the distinction is made based on the location of the crossover point of two simple regression lines, confidence intervals for the crossover point can be used to distinguish ordinal and disordinal interactions. This study examined two factors that need to be considered in constructing confidence intervals for the crossover point: (a) the assumption about the sampling distribution of the crossover point, and (b) the possibility of abnormally wide confidence intervals for the crossover point. A Monte Carlo simulation study was conducted to compare six different methods for constructing confidence intervals for the crossover point in terms of coverage rate, the proportion of true values falling to the left or right of the confidence interval, and average interval width. The methods include the reparameterization, delta, Fieller, basic bootstrap, percentile bootstrap, and bias-corrected accelerated bootstrap methods. The results of our Monte Carlo simulation study suggest that statistical inference using confidence intervals to distinguish ordinal and disordinal interactions requires sample sizes of more than 500 to provide confidence intervals narrow enough to identify the location of the crossover point. (c) 2015 APA, all rights reserved.
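
    For a concrete instance of one of the compared methods, the sketch below computes a percentile-bootstrap confidence interval for the crossover point x* = -b2/b3 of the model y = b0 + b1·x + b2·z + b3·x·z; the simulated data and bootstrap settings are illustrative.

        import numpy as np

        def crossover(X, y):
            """Fit y = b0 + b1*x + b2*z + b3*x*z by OLS; lines cross at x = -b2/b3."""
            b, *_ = np.linalg.lstsq(X, y, rcond=None)
            return -b[2] / b[3]

        rng = np.random.default_rng(4)
        n = 600
        x, z = rng.normal(size=n), rng.integers(0, 2, size=n).astype(float)
        y = 1.0 + 0.5*x + 0.8*z - 0.4*x*z + rng.normal(size=n)   # true crossover at x = 2.0
        X = np.column_stack([np.ones(n), x, z, x*z])

        boots = []
        for _ in range(2000):                       # percentile bootstrap over cases
            idx = rng.integers(0, n, size=n)
            boots.append(crossover(X[idx], y[idx]))
        lo, hi = np.quantile(boots, [0.025, 0.975])
        # If the CI lies outside the observed range of x, the interaction is ordinal there.
        print(f"crossover {crossover(X, y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")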

  9. Self-reconfigurable ship fluid-network modeling for simulation-based design

    NASA Astrophysics Data System (ADS)

    Moon, Kyungjin

    Our world is filled with large-scale engineering systems, which provide various services and conveniences in our daily life. A distinctive trend in the development of today's large-scale engineering systems is the extensive and aggressive adoption of automation and autonomy that enable the significant improvement of systems' robustness, efficiency, and performance, with considerably reduced manning and maintenance costs; the U.S. Navy's DD(X), the next-generation destroyer program, is considered an extreme example of this trend. This thesis pursues a modeling solution for performing simulation-based analysis in the conceptual or preliminary design stage of an intelligent, self-reconfigurable ship fluid system, which is one of the concepts of DD(X) engineering plant development. Through investigation of the Navy's approach to designing a more survivable ship system, it is found that the current naval simulation-based analysis environment is limited by capability gaps in damage modeling, dynamic model reconfiguration, and simulation speed of the domain-specific models, especially fluid network models. As enablers of filling these gaps, two essential elements were identified in the formulation of the modeling method. The first is the graph-based topological modeling method, which is employed for rapid model reconstruction and damage modeling, and the second is the recurrent neural network-based, component-level surrogate modeling method, which is used to improve the affordability and efficiency of the modeling and simulation (M&S) computations. The integration of the two methods can deliver computationally efficient, flexible, and automation-friendly M&S, which will create an environment for more rigorous damage analysis and exploration of design alternatives. As a demonstration of the developed method, a simulation model of a notional ship fluid system was created, and a damage analysis was performed. Next, the models

  10. Conventional 3D staging PET/CT in CT simulation for lung cancer: impact of rigid and deformable target volume alignments for radiotherapy treatment planning.

    PubMed

    Hanna, G G; Van Sörnsen De Koste, J R; Carson, K J; O'Sullivan, J M; Hounsell, A R; Senan, S

    2011-10-01

    Positron emission tomography (PET)/CT scans can improve target definition in radiotherapy for non-small cell lung cancer (NSCLC). As staging PET/CT scans are increasingly available, we evaluated different methods for co-registration of staging PET/CT data to radiotherapy simulation (RTP) scans. 10 patients underwent staging PET/CT followed by RTP PET/CT. On both scans, gross tumour volumes (GTVs) were delineated using CT (GTV(CT)) and PET display settings. Four PET-based contours (manual delineation, two threshold methods and a source-to-background ratio method) were delineated. The CT component of the staging scan was co-registered using both rigid and deformable techniques to the CT component of the RTP PET/CT. Subsequently, rigid registration and deformation warps were used to transfer PET and CT contours from the staging scan to the RTP scan. Dice's similarity coefficient (DSC) was used to assess the registration accuracy of staging-based GTVs following both registration methods against the GTVs delineated on the RTP PET/CT scan. When the GTV(CT) delineated on the staging scan after both rigid registration and deformation was compared with the GTV(CT) on the RTP scan, a significant improvement in overlap (registration) using deformation was observed (mean DSC 0.66 for rigid registration and 0.82 for deformable registration, p = 0.008). A similar comparison for PET contours revealed no significant improvement in overlap with the use of deformable registration. No consistent improvements in similarity measures were observed when deformable registration was used for transferring PET-based contours from a staging PET/CT. This suggests that currently the use of rigid registration remains the most appropriate method for RTP in NSCLC.
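
    Dice's similarity coefficient is simple to compute from two binary contour masks; the sketch below shows the calculation on toy 2-D masks standing in for a staging-scan GTV and an RTP-scan GTV (the masks are illustrative, not clinical data).

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice's similarity coefficient between two binary masks:
    DSC = 2|A intersect B| / (|A| + |B|); 1 = perfect overlap, 0 = disjoint."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy voxel masks standing in for a staging-scan GTV and an RTP-scan GTV
gtv_staging = np.zeros((10, 10), dtype=bool); gtv_staging[2:7, 2:7] = True
gtv_rtp     = np.zeros((10, 10), dtype=bool); gtv_rtp[3:8, 3:8] = True
print(f"DSC = {dice(gtv_staging, gtv_rtp):.2f}")   # 0.64 for these masks
```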

  11. Multi-channel EEG-based sleep stage classification with joint collaborative representation and multiple kernel learning.

    PubMed

    Shi, Jun; Liu, Xiao; Li, Yan; Zhang, Qi; Li, Yingjie; Ying, Shihui

    2015-10-30

    Electroencephalography (EEG) based sleep staging is commonly used in clinical routine. Feature extraction and representation play a crucial role in EEG-based automatic classification of sleep stages. Sparse representation (SR) is a state-of-the-art unsupervised feature learning method suitable for EEG feature representation. Collaborative representation (CR) is an effective data coding method usually used as a classifier; here we use CR as a data representation method to learn features from the EEG signal. A joint collaboration model is established to develop a multi-view learning algorithm and generate joint CR (JCR) codes to fuse and represent multi-channel EEG signals. A two-stage multi-view learning-based sleep staging framework is then constructed, in which the JCR and joint sparse representation (JSR) algorithms first fuse and learn feature representations from multi-channel EEG signals, respectively; multi-view JCR and JSR features are then integrated, and sleep stages are recognized by a multiple kernel extreme learning machine (MK-ELM) algorithm with grid search. The proposed two-stage multi-view learning algorithm achieves superior performance for sleep staging. With a K-means clustering based dictionary, the mean classification accuracy, sensitivity and specificity are 81.10 ± 0.15%, 71.42 ± 0.66% and 94.57 ± 0.07%, respectively; with the dictionary learned using the submodular optimization method, they are 80.29 ± 0.22%, 71.26 ± 0.78% and 94.38 ± 0.10%, respectively. The two-stage multi-view learning based sleep staging framework outperforms all other classification methods compared in this work, and JCR is superior to JSR. The proposed multi-view learning framework has the potential for sleep staging based on multi-channel or multi-modality polysomnography signals. Copyright © 2015 Elsevier B.V. All rights reserved.
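
    The collaborative representation code itself has a closed-form ridge solution, which is the core computation a framework like this builds on. Below is a minimal sketch of that step; the dictionary, signal, and regularization weight are placeholders, and fusing per-channel codes into a joint (JCR) code is the paper's extension, not shown here.

```python
import numpy as np

def cr_code(D, y, lam=0.1):
    """Collaborative representation of y over dictionary D: the ridge solution
    alpha = argmin ||y - D alpha||^2 + lam ||alpha||^2
          = (D^T D + lam I)^(-1) D^T y."""
    k = D.shape[1]
    return np.linalg.solve(D.T @ D + lam * np.eye(k), D.T @ y)

rng = np.random.default_rng(1)
D = rng.normal(size=(64, 32))   # dictionary (e.g. k-means cluster centres)
y = rng.normal(size=64)         # feature vector from one EEG channel/epoch
alpha = cr_code(D, y)           # CR code used as the learned feature
print(alpha.shape)              # (32,)
```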

  12. Maximally efficient two-stage screening: Determining intellectual disability in Taiwanese military conscripts.

    PubMed

    Chien, Chia-Chang; Huang, Shu-Fen; Lung, For-Wey

    2009-01-27

    The purpose of this study was to apply a two-stage screening method for the large-scale intelligence screening of military conscripts. We recruited 99 conscripted soldiers whose educational levels were senior high school or lower as participants. Every participant was required to take the Wisconsin Card Sorting Test (WCST) and the Wechsler Adult Intelligence Scale-Revised (WAIS-R) assessments. Logistic regression analysis showed that the conceptual level responses (CLR) index of the WCST was the most significant index for determining intellectual disability (ID; FIQ ≤ 84). We used the receiver operating characteristic curve to determine the optimum cut-off point of CLR. The optimal single cut-off point for CLR was 66; the two cut-off points were 49 and 66. Comparing the two-stage window screening with the two-stage positive screening, the area under the curve and the positive predictive value increased. Moreover, the cost of the two-stage window screening decreased by 59%. The two-stage window screening is more accurate and economical than the two-stage positive screening. Our results provide an example of the use of two-stage screening and the possibility that the WCST could replace the WAIS-R in large-scale screenings for ID in the future.
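
    A single optimum cut-off of this kind is typically read off the ROC curve, for example by maximizing Youden's J. The sketch below does this on synthetic scores standing in for the CLR index (assuming lower scores indicate ID); the numbers are illustrative, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(2)
# Synthetic screening scores: cases (ID) score lower than non-cases
scores = np.concatenate([rng.normal(55, 10, 200),    # ID group
                         rng.normal(75, 10, 800)])   # non-ID group
y_true = np.concatenate([np.ones(200), np.zeros(800)])

# ROC over all candidate cut-offs; negate scores so "low score" = "positive"
fpr, tpr, thr = roc_curve(y_true, -scores)
j = tpr - fpr                      # Youden's J = sensitivity + specificity - 1
best = np.argmax(j)
print(f"optimum cut-off: score <= {-thr[best]:.1f}  (J = {j[best]:.2f})")
```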

  14. Numerical analysis of flow interaction of turbine system in two-stage turbocharger of internal combustion engine

    NASA Astrophysics Data System (ADS)

    Liu, Y. B.; Zhuge, W. L.; Zhang, Y. J.; Zhang, S. Y.

    2016-05-01

    To reach the goals of energy conservation and emission reduction, high intake pressure is needed to meet the demands of high power density and high EGR rate in internal combustion engines. The power density of present diesel engines has reached 90 kW/L, and the required intake pressure ratio is over 5. A two-stage turbocharging system is an effective way to realize such high compression ratios. Because the compression work of the turbocharging system derives from exhaust gas energy, the efficiency of exhaust gas energy use, which is influenced by the design and matching of the turbine system, is important to the performance of highly supercharged engines. A conventional turbine system is assembled from single-stage turbocharger turbines, and turbine matching is based on turbine MAPs measured on a test rig. The flow between the turbines is assumed to be uniform, and the outlet physical quantities of each turbine are assumed equal to ambient values. However, as several studies have demonstrated, there are three-dimensional flow field distortions and changes in outlet physical quantities that influence the performance of the turbine system. For an engine equipped with a two-stage turbocharging system, optimization of the turbine system design will increase the efficiency of exhaust gas energy use and thereby increase engine power density. However, flow interaction within the turbine system changes the flow in each turbine and influences turbine performance. To characterize the interaction between the high pressure turbine and the low pressure turbine, the flow in the turbine system is modeled and simulated numerically. The calculation results suggest that the static pressure field at the inlet to the low pressure turbine increases the back pressure of the high pressure turbine, although the efficiency of the high pressure turbine changes little; the distorted velocity field at the outlet of the high pressure turbine results in swirl at the inlet to the low pressure turbine. Clockwise swirl results in a large negative angle of attack at the rotor inlet, which causes flow loss in the turbine impeller passages and decreases turbine

  15. Two-stage bulk electron heating in the diffusion region of anti-parallel symmetric reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, Ari Yitzchak; Egedal, Jan; Daughton, William Scott

    2016-10-13

    Electron bulk energization in the diffusion region during anti-parallel symmetric reconnection entails two stages. First, the inflowing electrons are adiabatically trapped and energized by an ambipolar parallel electric field. Next, the electrons gain energy from the reconnection electric field as they undergo meandering motion. These collisionless mechanisms have been described previously, and they lead to highly structured electron velocity distributions. Furthermore, a simplified control-volume analysis gives estimates for how the net effective heating scales with the upstream plasma conditions in agreement with fully kinetic simulations and spacecraft observations.

  16. Modeling Relationships Between Flight Crew Demographics and Perceptions of Interval Management

    NASA Technical Reports Server (NTRS)

    Remy, Benjamin; Wilson, Sara R.

    2016-01-01

    The Interval Management Alternative Clearances (IMAC) human-in-the-loop simulation experiment was conducted to assess interval management system performance and participants' acceptability and workload while performing three interval management clearance types. Twenty-four subject pilots and eight subject controllers flew ten high-density arrival scenarios into Denver International Airport during two weeks of data collection. This analysis examined possible relationships between subject pilot demographics and reported perceptions of interval management in IMAC. Multiple linear regression models were created with a new software tool to predict subject pilot questionnaire item responses from demographic information. General patterns noted across models may indicate that flight crew demographics influence perceptions of interval management.

  17. An Empirical Comparison of Two-Stage and Pyramidal Adaptive Ability Testing.

    ERIC Educational Resources Information Center

    Larkin, Kevin C.; Weiss, David J.

    A 15-stage pyramidal test and a 40-item two-stage test were constructed and administered by computer to 111 college undergraduates. The two-stage test was found to utilize a smaller proportion of its potential score range than the pyramidal test. Score distributions for both tests were positively skewed but not significantly different from the…

  18. The Two-Word Stage: Motivated by Linguistic or Cognitive Constraints?

    ERIC Educational Resources Information Center

    Berk, Stephanie; Lillo-Martin, Diane

    2012-01-01

    Child development researchers often discuss a "two-word" stage during language acquisition. However, there is still debate over whether the existence of this stage reflects primarily cognitive or linguistic constraints. Analyses of longitudinal data from two Deaf children, Mei and Cal, not exposed to an accessible first language (American Sign…

  19. Effect Sizes and their Intervals: The Two-Level Repeated Measures Case

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2005-01-01

    Probability coverage for eight different confidence intervals (CIs) of measures of effect size (ES) in a two-level repeated measures design was investigated. The CIs and measures of ES differed with regard to whether they used least squares or robust estimates of central tendency and variability, whether the end critical points of the interval…

  20. A two-stage heating scheme for heat assisted magnetic recording

    NASA Astrophysics Data System (ADS)

    Xiong, Shaomin; Kim, Jeongmin; Wang, Yuan; Zhang, Xiang; Bogy, David

    2014-05-01

    Heat Assisted Magnetic Recording (HAMR) has been proposed to extend the storage areal density beyond 1 Tb/in² for next-generation magnetic storage. A near field transducer (NFT) is widely used in HAMR systems to locally heat the magnetic disk during the writing process. However, much of the laser power is absorbed around the NFT, which causes overheating of the NFT and reduces its reliability. In this work, a two-stage heating scheme is proposed to reduce the thermal load by separating the heating process into two individual stages, one from an optical waveguide and one from an NFT. In the first stage, the optical waveguide is placed in front of the NFT and delivers part of the laser energy directly onto the disk surface to heat it to a peak temperature somewhat lower than the Curie temperature of the magnetic material. The NFT then works as the second heating stage, further heating a smaller area inside the waveguide-heated area to reach the Curie point. The energy applied to the NFT in the second stage is reduced compared with a typical single-stage NFT heating system. With this reduced thermal load, the lifetime of the NFT can be extended by orders of magnitude under cyclic load conditions.

  1. Mortality sensitivity in life-stage simulation analysis: A case study of southern sea otters

    USGS Publications Warehouse

    Gerber, L.R.; Tinker, M.T.; Doak, D.F.; Estes, J.A.; Jessup, David A.

    2004-01-01

    Currently, there are no generally recognized approaches for linking detailed mortality and pathology data to population-level analyses of extinction risk. We used a combination of analytical and simulation-based analyses to examine 20 years of age- and sex-specific mortality data for southern sea otters (Enhydra lutris), and we applied the results to project the efficacy of alternative conservation strategies. Population recovery of the southern sea otter has been slow (rate of population increase λ = 1.05) compared to other recovering populations (λ = 1.17-1.20), and the population declined (λ = 0.975) between 1995 and 1999. Age-based Leslie matrices were developed to explore explanations for the slow recovery and recent decline in the southern sea otter population. An elasticity analysis was performed to predict effects of proportional changes in stage-specific reproductive or survival rates on the rate of population increase. A life-stage simulation analysis (LSA) was developed to evaluate the impact of changing age- and cause-specific mortality rates on λ. The information used to develop these models was derived from death assemblage, pathology, and live population census data to examine the sensitivity of sea otter population growth to different sources of mortality (e.g., disease and starvation, direct human take [fisheries, gun shot, boat strike, oil pollution], mating trauma and intraspecific aggression, shark bites, and unknown). We used resampling simulations to generate random combinations of vital rates for a large number of matrix replicates and drew on these to estimate potential effects of mortality sources on population growth (λ). Our analyses suggest management actions that are likely and unlikely to promote recovery of the southern sea otter and, more broadly, indicate a methodology to better utilize cause-of-death data in conservation decision-making.
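
    The matrix machinery behind such an analysis is compact: λ is the dominant eigenvalue of the Leslie matrix, and elasticities follow from the left and right eigenvectors. A minimal sketch with an invented three-stage matrix (not the published sea otter rates):

```python
import numpy as np

# Illustrative 3-stage Leslie-type matrix (juvenile, subadult, adult); the
# rates are invented for this sketch, not the estimates from the study
A = np.array([[0.00, 0.30, 0.60],   # stage-specific fecundities
              [0.60, 0.00, 0.00],   # juvenile -> subadult survival
              [0.00, 0.80, 0.90]])  # subadult -> adult and adult survival

# Rate of population increase = dominant eigenvalue lambda
evals, evecs = np.linalg.eig(A)
i = np.argmax(evals.real)
lam = evals.real[i]
w = np.abs(evecs[:, i].real)                        # stable stage distribution

evalsL, evecsL = np.linalg.eig(A.T)
v = np.abs(evecsL[:, np.argmax(evalsL.real)].real)  # reproductive values

# Elasticities: proportional change in lambda per proportional change in a_ij
S = np.outer(v, w) / (v @ w)        # sensitivities d(lambda)/d(a_ij)
E = A * S / lam
print(f"lambda = {lam:.3f}")
print(np.round(E, 3))               # elasticities sum to 1
```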

  2. Experimental strength of restorations with fibre posts at different stages, with and without using a simulated ligament.

    PubMed

    Pérez-González, A; González-Lluch, C; Sancho-Bru, J L; Rodríguez-Cervantes, P J; Barjau-Escribano, A; Forner-Navarro, L

    2012-03-01

    The aim of this study was to analyse the strength and failure mode of teeth restored with fibre posts under retention and flexural-compressive loads at different stages of the restoration and to analyse whether including a simulated ligament in the experimental setup has any effect on the strength or the failure mode. Thirty human maxillary central incisors were distributed in three different groups to be restored with simulation of different restoration stages (1: only post, 2: post and core, 3: post-core and crown), using Rebilda fibre posts. The specimens were inserted in resin blocks and loaded by means of a universal testing machine until failure under tension (stage 1) and 50° flexion (stages 2-3). Half the specimens in each group were restored using a simulated ligament between root dentine and resin block and the other half did not use this element. Failure in stage 1 always occurred at the post-dentine interface, with a mean failure load of 191·2 N. Failure in stage 2 was located mainly in the core or coronal dentine (mean failure load of 505·9 N). Failure in stage 3 was observed in the coronal dentine (mean failure load 397·4 N). Failure loads registered were greater than expected masticatory loads. Fracture modes were mostly reparable, thus indicating that this post is clinically valid at the different stages of restoration studied. The inclusion of the simulated ligament in the experimental system did not show a statistically significant effect on the failure load or the failure mode. © 2011 Blackwell Publishing Ltd.

  3. Comparison of the Efficacy and Efficiency of the Use of Virtual Reality Simulation With High-Fidelity Mannequins for Simulation-Based Training of Fiberoptic Bronchoscope Manipulation.

    PubMed

    Jiang, Bailin; Ju, Hui; Zhao, Ying; Yao, Lan; Feng, Yi

    2018-04-01

    This study compared the efficacy and efficiency of virtual reality simulation (VRS) with high-fidelity mannequin in the simulation-based training of fiberoptic bronchoscope manipulation in novices. Forty-six anesthesia residents with no experience in fiberoptic intubation were divided into two groups: VRS (group VRS) and mannequin (group M). After a standard didactic teaching session, group VRS trained 25 times on VRS, whereas group M performed the same process on a mannequin. After training, participants' performance was assessed on a mannequin five consecutive times. Procedure times during training were recorded as pooled data to construct learning curves. Procedure time and global rating scale scores of manipulation ability were compared between groups, as well as changes in participants' confidence after training. Plateaus in the learning curves were achieved after 19 (95% confidence interval = 15-26) practice sessions in group VRS and 24 (95% confidence interval = 20-32) in group M. There was no significant difference in procedure time [13.7 (6.6) vs. 11.9 (4.1) seconds, t' = 1.101, P = 0.278] or global rating scale [3.9 (0.4) vs. 3.8 (0.4), t = 0.791, P = 0.433] between groups. Participants' confidence increased after training [group VRS: 1.8 (0.7) vs. 3.9 (0.8), t = 8.321, P < 0.001; group M = 2.0 (0.7) vs. 4.0 (0.6), t = 13.948, P < 0.001] but did not differ significantly between groups. Virtual reality simulation is more efficient than mannequin in simulation-based training of flexible fiberoptic manipulation in novices, but similar effects can be achieved in both modalities after adequate training.

  4. Simulating compressible-incompressible two-phase flows

    NASA Astrophysics Data System (ADS)

    Denner, Fabian; van Wachem, Berend

    2017-11-01

    Simulating compressible gas-liquid flows, e.g. air-water flows, presents considerable numerical issues and requires substantial computational resources, particularly because of the stiff equation of state for the liquid and the different Mach number regimes. Treating the liquid phase (low Mach number) as incompressible, yet concurrently considering the gas phase (high Mach number) as compressible, can improve the computational performance of such simulations significantly without sacrificing important physical mechanisms. A pressure-based algorithm for the simulation of two-phase flows is presented, in which a compressible and an incompressible fluid are separated by a sharp interface. The algorithm is based on a coupled finite-volume framework, discretised in conservative form, with a compressive VOF method to represent the interface. The bulk phases are coupled via a novel acoustically-conservative interface discretisation method that retains the acoustic properties of the compressible phase and does not require a Riemann solver. Representative test cases are presented to scrutinize the proposed algorithm, including the reflection of acoustic waves at the compressible-incompressible interface, shock-drop interaction and gas-liquid flows with surface tension. Financial support from the EPSRC (Grant EP/M021556/1) is gratefully acknowledged.

  5. Fault detection for discrete-time LPV systems using interval observers

    NASA Astrophysics Data System (ADS)

    Zhang, Zhi-Hui; Yang, Guang-Hong

    2017-10-01

    This paper is concerned with the fault detection (FD) problem for discrete-time linear parameter-varying systems subject to bounded disturbances. A parameter-dependent FD interval observer is designed based on parameter-dependent Lyapunov and slack matrices. The design method is presented by translating the parameter-dependent linear matrix inequalities (LMIs) into finite ones. In contrast to the existing results based on parameter-independent and diagonal Lyapunov matrices, the derived disturbance attenuation, fault sensitivity and nonnegative conditions lead to less conservative LMI characterisations. Furthermore, without the need to design residual evaluation functions and thresholds, the residual intervals generated by the interval observers are used directly for the FD decision. Finally, simulation results are presented to show the effectiveness and superiority of the proposed method.
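
    The FD decision rule described here reduces to checking whether the measured output stays inside the interval predicted by the observer. The sketch below illustrates the idea on a fixed (non-parameter-varying) toy system with an observer gain chosen so that A - LC is elementwise nonnegative; all numbers are invented, and the paper's LMI-based gain synthesis is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative discrete-time system standing in for one frozen LPV vertex
A = np.array([[0.5, 0.1],
              [0.0, 0.6]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.4],
              [0.0]])           # gain chosen so A - L @ C is elementwise >= 0
M = A - L @ C                   # nonnegativity gives the enclosure property
d_bar = 0.05                    # known elementwise bound on the disturbance

x = np.zeros(2)                 # true state
hi = np.full(2, 0.5)            # upper bound, encloses x(0)
lo = np.full(2, -0.5)           # lower bound

for k in range(60):
    y = (C @ x).item()
    # Interval observer update: lo <= x <= hi is preserved for a healthy plant
    hi, lo = M @ hi + L.ravel() * y + d_bar, M @ lo + L.ravel() * y - d_bar
    d = rng.uniform(-d_bar, d_bar, size=2)
    fault = 0.4 if k >= 40 else 0.0         # additive actuator fault at k = 40
    x = A @ x + d + fault
    # FD decision: alarm when the output leaves the predicted output interval
    y_new = (C @ x).item()
    if not ((C @ lo).item() <= y_new <= (C @ hi).item()):
        print(f"k={k+1}: fault detected, y={y_new:.3f} outside "
              f"[{(C @ lo).item():.3f}, {(C @ hi).item():.3f}]")
        break
```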

  6. Effects of two-stage and total vs. fence-line weaning on the physiology and performance of beef calves

    USDA-ARS?s Scientific Manuscript database

    Calves weaned using a two-stage method where nursing is prevented between cow-calf pairs prior to separation (Stage 1) experience less weaning stress after separation (Stage 2) based on behavior and growth measures. The aim of this study was to document changes in various physiological measures of s...

  7. A comparison of etched-geometry and overgrown silicon permeable base transistors by two-dimensional numerical simulations

    NASA Astrophysics Data System (ADS)

    Vojak, B. A.; Alley, G. D.

    1983-08-01

    Two-dimensional numerical simulations are used to compare etched-geometry and overgrown Si permeable base transistors (PBTs), considering both the etched-collector and etched-emitter biasing conditions made possible by the asymmetry of the etched structure. In PBT devices, the two-dimensional nature of the depletion region near the Schottky-contact base grating results in a smaller electron barrier and, therefore, a larger collector current in the etched than in the overgrown structure. The parasitic feedback effects that result at high base-to-emitter bias levels lead to a deviation from the square-law behavior found in the collector characteristics of the overgrown PBT. These structures also have lower device capacitances and smaller transconductances at high base-to-emitter voltages. As a result, overgrown and etched structures have comparable predicted maximum values of the small-signal unity short-circuit current gain frequency and maximum oscillation frequency.

  8. An unsupervised two-stage clustering approach for forest structure classification based on X-band InSAR data - A case study in complex temperate forest stands

    NASA Astrophysics Data System (ADS)

    Abdullahi, Sahra; Schardt, Mathias; Pretzsch, Hans

    2017-05-01

    Forest structure at the stand level plays a key role in sustainable forest management, since the biodiversity, productivity, growth and stability of the forest can be positively influenced by managing its structural diversity. In contrast to field-based measurements, remote sensing techniques offer a cost-efficient opportunity to collect area-wide information about forest stand structure with high spatial and temporal resolution. Interferometric Synthetic Aperture Radar (InSAR) in particular, which facilitates worldwide acquisition of 3D information independent of weather conditions and illumination, is well suited to capturing forest stand structure. This study proposes an unsupervised two-stage clustering approach for forest structure classification based on height information derived from interferometric X-band SAR data, applied to the complex temperate forest stands of Traunstein forest (South Germany). In particular, a four-dimensional input data set composed of first-order height statistics was non-linearly projected onto a two-dimensional Self-Organizing Map, spatially ordered according to similarity (based on the Euclidean distance), in the first stage and classified using the k-means algorithm in the second stage. The study demonstrated that X-band InSAR data exhibit considerable capabilities for forest structure classification. Moreover, the unsupervised classification approach achieved meaningful and reasonable results, as judged by comparison to aerial imagery and LiDAR data.
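
    The two stages map naturally onto common open-source tools. The sketch below uses the third-party MiniSom package for the SOM projection and scikit-learn's k-means for the second stage; the 4-D height statistics are synthetic stand-ins, and the map size and cluster count are arbitrary choices, not the study's settings.

```python
import numpy as np
from minisom import MiniSom           # third-party package (pip install minisom)
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Synthetic 4-D inputs standing in for first-order height statistics per cell,
# e.g. [mean, std, 10th percentile, 90th percentile] of InSAR heights
X = np.vstack([rng.normal(m, 1.0, size=(200, 4)) for m in (5.0, 15.0, 25.0)])

# Stage 1: non-linear projection onto a 2-D Self-Organizing Map
som = MiniSom(10, 10, 4, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(X, 5000)

# Stage 2: k-means on the SOM prototype vectors; each cell inherits the
# cluster label of its best-matching unit (BMU)
prototypes = som.get_weights().reshape(-1, 4)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(prototypes)
bmu = np.array([np.ravel_multi_index(som.winner(v), (10, 10)) for v in X])
labels = km.labels_[bmu]
print(np.bincount(labels))            # cells per structure class
```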

  9. Computer-based simulation training to improve learning outcomes in mannequin-based simulation exercises.

    PubMed

    Curtin, Lindsay B; Finn, Laura A; Czosnowski, Quinn A; Whitman, Craig B; Cawley, Michael J

    2011-08-10

    To assess the impact of computer-based simulation on the achievement of student learning outcomes during mannequin-based simulation. Participants were randomly assigned to rapid response teams of 5-6 students, and teams were then randomly assigned to complete either the computer-based or the mannequin-based simulation cases first. In both simulations, students used their critical thinking skills and selected interventions independent of facilitator input. A predetermined rubric was used to record and assess students' performance in the mannequin-based simulations. Feedback and student performance scores were generated by the software in the computer-based simulations. More of the teams in the group that completed the computer-based simulation before the mannequin-based simulation achieved the primary outcome for the exercise, which was survival of the simulated patient (41.2% vs. 5.6%). The majority of students (>90%) recommended the continuation of simulation exercises in the course. Students in both groups felt the computer-based simulation should be completed prior to the mannequin-based simulation. The use of computer-based simulation prior to mannequin-based simulation improved the achievement of learning goals and outcomes. In addition to improving participants' skills, completing the computer-based simulation first may improve participants' confidence during the more real-life setting achieved in the mannequin-based simulation.

  10. Breastfeeding and the risk of childhood asthma: A two-stage instrumental variable analysis to address endogeneity.

    PubMed

    Sharma, Nivita D

    2017-09-01

    Several explanations for the inconsistent results on the effects of breastfeeding on childhood asthma have been suggested. The purpose of this study was to investigate one unexplored explanation: the presence of a potential endogenous relationship between breastfeeding and childhood asthma. Endogeneity exists when an explanatory variable is correlated with the error term for reasons such as selection bias, reverse causality, and unmeasured confounders. Unadjusted endogeneity will bias the estimated effect of breastfeeding on childhood asthma. To investigate potential endogeneity, a cross-sectional study of breastfeeding practices and incidence of childhood asthma in 87 pediatric patients in Georgia, USA, was conducted using generalized linear modeling and a two-stage instrumental variable analysis. First, the relationship between breastfeeding and childhood asthma was analyzed without considering endogeneity. Second, tests for the presence of endogeneity were performed and, having detected endogeneity between breastfeeding and childhood asthma, a two-stage instrumental variable analysis was performed. The first stage of this analysis estimated the duration of breastfeeding and the second stage estimated the risk of childhood asthma. When endogeneity was not taken into account, duration of breastfeeding was found to significantly increase the risk of childhood asthma (relative risk ratio [RR] = 2.020, 95% confidence interval [CI]: [1.143-3.570]). After adjusting for endogeneity, duration of breastfeeding significantly reduced the risk of childhood asthma (RR = 0.003, 95% CI: [0.000-0.240]). The findings suggest that researchers should consider evaluating how the presence of endogeneity could affect the relationship between duration of breastfeeding and the risk of childhood asthma. © 2017 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.
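
    The mechanics of a two-stage instrumental variable analysis can be shown with ordinary least squares in a few lines. The study itself used generalized linear models and reports relative risks; the linear version below is only to exhibit the two stages, with a simulated confounder and a hypothetical instrument z.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 87   # same order of magnitude as the study; the data are simulated

u = rng.normal(size=n)                      # unmeasured confounder
z = rng.normal(size=n)                      # hypothetical instrument
duration = 6 + 2 * z + u + rng.normal(size=n)            # breastfeeding months
outcome = 1 - 0.1 * duration + 0.8 * u + rng.normal(scale=0.5, size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

c = np.ones(n)
# Naive OLS is biased because duration and the error share the confounder u
print("OLS slope :", ols(np.column_stack([c, duration]), outcome)[1])

# Stage 1: regress the endogenous regressor on the instrument
d_hat = np.column_stack([c, z]) @ ols(np.column_stack([c, z]), duration)
# Stage 2: regress the outcome on the stage-1 fitted values
print("2SLS slope:", ols(np.column_stack([c, d_hat]), outcome)[1])   # ~ -0.1
```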

  11. Integrated Circuit Design of 3 Electrode Sensing System Using Two-Stage Operational Amplifier

    NASA Astrophysics Data System (ADS)

    Rani, S.; Abdullah, W. F. H.; Zain, Z. M.; N, Aqmar N. Z.

    2018-03-01

    This paper presents the design of a two-stage operational amplifier (op amp) for 3-electrode sensing system readout circuits. The designs have been simulated using 0.13 μm CMOS technology from Silterra (Malaysia) with Mentor Graphics tools. The purpose of this project is to design a miniature interfacing circuit to detect the redox reaction in the form of current using standard analog modules. The potentiostat consists of several op amps combined to analyse the signal coming from the 3-electrode sensing system. This op amp design will be used in the potentiostat circuit and to analyse the functionality of each module of the system.

  12. Accuracy of MHD simulations: Effects of simulation initialization in GUMICS-4

    NASA Astrophysics Data System (ADS)

    Lakka, Antti; Pulkkinen, Tuija; Dimmock, Andrew; Osmane, Adnane; Palmroth, Minna; Honkonen, Ilja

    2016-04-01

    We conducted a study aimed at revealing how different global magnetohydrodynamic (MHD) simulation initialization methods affect the dynamics in different parts of the Earth's magnetosphere-ionosphere system. While such magnetosphere-ionosphere coupling codes have been used for more than two decades, their testing still requires significant work to identify the optimal numerical representation of the physical processes. We used the Grand Unified Magnetosphere-Ionosphere Coupling Simulation (GUMICS-4), the only European global MHD simulation, developed by the Finnish Meteorological Institute. GUMICS-4 was put to a test that included two stages: 1) a 10 day OMNI data interval was simulated and the results were validated by comparing both the bow shock and the magnetopause spatial positions predicted by the simulation to actual measurements, and 2) the validated 10 day simulation run was used as a reference in a comparison of five 3 + 12 hour (3 hour synthetic initialisation + 12 hour actual simulation) simulation runs. The 12 hour input was not only identical in each simulation case but also represented a subset of the 10 day input, thus enabling quantification of the effects of different synthetic initialisations on the magnetosphere-ionosphere system. The synthetic initialisation data sets were created using stepwise, linear and sinusoidal functions. Switching from the synthetic input to the real OMNI data was immediate. The results show that the magnetosphere forms in each case within an hour after the switch to real data. However, local dissimilarities are found in the magnetospheric dynamics after formation, depending on the initialisation method used. This is evident especially in the inner parts of the lobe.

  13. Two-stage microfluidic chip for selective isolation of circulating tumor cells (CTCs).

    PubMed

    Hyun, Kyung-A; Lee, Tae Yoon; Lee, Su Hyun; Jung, Hyo-Il

    2015-05-15

    Over the past few decades, circulating tumor cells (CTCs) have been studied as a means of overcoming cancer. However, the rarity and heterogeneity of CTCs have been the most significant hurdles in CTC research. Many techniques for CTC isolation have been developed and can be classified into positive enrichment (i.e., specifically isolating target cells using cell size, surface protein expression, and so on) and negative enrichment (i.e., specifically eluting non-target cells). Positive enrichment methods lead to high purity but can be biased by their selection criteria, while negative enrichment methods have relatively low purity but can isolate heterogeneous CTCs. To compensate for the known disadvantages of positive and negative enrichment, in this study we introduced a two-stage microfluidic chip. The first stage involves a microfluidic magnetic activated cell sorting (μ-MACS) chip to elute white blood cells (WBCs). The second stage involves a geometrically activated surface interaction (GASI) chip for the selective isolation of CTCs. We observed up to 763-fold enrichment of cancer cells spiked into 5 mL blood samples using the μ-MACS chip at a 400 μL/min flow rate. Cancer cells were successfully separated with separation efficiencies ranging from 10.19% to 22.91%, based on their EpCAM or HER2 surface protein expression, using the GASI chip at a 100 μL/min flow rate. Our two-stage microfluidic chip not only isolated CTCs from blood cells but also classified heterogeneous CTCs based on their characteristics. Therefore, our chip can contribute to research on the heterogeneity of CTCs and, by extension, to personalized cancer treatment. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Study of a two-stage photobase generator for photolithography in microelectronics.

    PubMed

    Turro, Nicholas J; Li, Yongjun; Jockusch, Steffen; Hagiwara, Yuji; Okazaki, Masahiro; Mesch, Ryan A; Schuster, David I; Willson, C Grant

    2013-03-01

    The investigation of the photochemistry of a two-stage photobase generator (PBG) is described. Absorption of a photon by a latent PBG (1) (first step) produces a PBG (2). Irradiation of 2 in the presence of water produces a base (second step). This two-photon sequence (1 + hν → 2 + hν → base) is an important component in the design of photoresists for pitch division technology, a method that doubles the resolution of projection photolithography for the production of microelectronic chips. In the present system, the excitation of 1 results in a Norrish type II intramolecular hydrogen abstraction to generate a 1,4-biradical that undergoes cleavage to form 2 and acetophenone (Φ ∼ 0.04). In the second step, excitation of 2 causes cleavage of the oxime ester (Φ = 0.56) followed by base generation after reaction with water.

  15. Relations Among River Stage, Rainfall, Ground-Water Levels, and Stage at Two Missouri River Flood-Plain Wetlands

    USGS Publications Warehouse

    Kelly, Brian P.

    2001-01-01

    The source of water is important to the ecological function of Missouri River flood-plain wetlands. There are four potential sources of water to flood-plain wetlands: direct flow from the river channel during high river stage, ground-water movement into the wetlands in response to river-stage changes and aquifer recharge, direct precipitation, and runoff from surrounding uplands. Concurrent measurements of river stage, rainfall, ground-water level, and wetland stage were compared for two Missouri River flood-plain wetlands located near Rocheport, Missouri, to characterize the spatial and temporal relations between river stage, rainfall, ground-water levels, and wetland stage; determine the source of water to each wetland; and compare measured and estimated stage and ground-water levels at each site. The two sites chosen for this study were wetland NC-5, a non-connected, 50-foot-deep scour constantly filled with water, formed during the flood of 1993, and wetland TC-1, a shallow, temporary wetland intermittently filled with water. Because these two wetlands bracket a range of wetland types of the Missouri River flood plain, the responses of other Missouri River wetlands to changes in river stage, rainfall, and runoff should be similar to the responses exhibited by wetlands NC-5 and TC-1. For wetlands deep enough to intersect the ground-water table in the alluvial aquifer, such as wetland NC-5, the ground-water response factor can estimate flood-plain wetland stage changes in response to known river-stage changes. Measured maximum stage and ground-water-level changes at NC-5 fall within the range of changes estimated using the ground-water response factor. Measured maximum ground-water-level changes at TC-1 are similar to, but consistently greater than, the estimated values, and are most likely the result of alluvial deposits with higher than average hydraulic conductivity located between wetland TC-1 and the Missouri River. Similarity between ground-water level and

  16. Detection of abnormal item based on time intervals for recommender systems.

    PubMed

    Gao, Min; Yuan, Quan; Ling, Bin; Xiong, Qingyu

    2014-01-01

    With the rapid development of e-business, personalized recommendation has become a core competence for enterprises to gain profits and improve customer satisfaction. Although collaborative filtering is the most successful approach for building a recommender system, it suffers from "shilling" attacks. In recent years, research on shilling attacks has advanced considerably. However, existing approaches suffer from serious problems of attack-model dependency and high computational cost. To address these problems, an approach for the detection of abnormal items is proposed in this paper. First, two features common to all attack models are analyzed. A revised bottom-up discretized approach based on time intervals and these features is then proposed for the detection. The distributions of ratings in different time intervals are compared to detect anomalies, based on the calculation of the chi-square (χ²) statistic. We evaluated our approach on four types of items, defined according to the life cycles of these items. The experimental results show that the proposed approach achieves a high detection rate with low computational cost when the number of attack profiles is more than 15. It improves the efficiency of shilling attack detection by narrowing down the suspicious users.
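
    The interval-comparison step can be sketched with a contingency table of rating counts per time interval and a chi-square test; the rating distributions below are simulated, with a burst of 5-star ratings injected into the third interval (index 2) to mimic a push attack. This is a generic stand-in, not the paper's revised bottom-up discretization.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(6)

# Rows: rating values 1-5; columns: rating counts per time interval for one item
normal = lambda: rng.multinomial(40, [0.05, 0.10, 0.25, 0.35, 0.25])
attack = rng.multinomial(60, [0.02, 0.03, 0.05, 0.10, 0.80])  # 5-star burst
table = np.column_stack([normal(), normal(), attack, normal()])

# Compare rating distributions across intervals; a significant chi-square
# statistic flags the item, and the 5-star share points at the interval
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")
if p < 0.01:
    spike = int(np.argmax(table[-1] / table.sum(axis=0)))
    print(f"suspicious rating burst in interval {spike}")    # index 2 here
```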

  17. Robust Speech Enhancement Using Two-Stage Filtered Minima Controlled Recursive Averaging

    NASA Astrophysics Data System (ADS)

    Ghourchian, Negar; Selouani, Sid-Ahmed; O'Shaughnessy, Douglas

    In this paper we propose an algorithm for estimating noise in highly non-stationary noisy environments, which is a challenging problem in speech enhancement. This method is based on minima-controlled recursive averaging (MCRA) whereby an accurate, robust and efficient noise power spectrum estimation is demonstrated. We propose a two-stage technique to prevent the appearance of musical noise after enhancement. This algorithm filters the noisy speech to achieve a robust signal with minimum distortion in the first stage. Subsequently, it estimates the residual noise using MCRA and removes it with spectral subtraction. The proposed Filtered MCRA (FMCRA) performance is evaluated using objective tests on the Aurora database under various noisy environments. These measures indicate the higher output SNR and lower output residual noise and distortion.

  18. Determining diabetic retinopathy screening interval based on time from no retinopathy to laser therapy.

    PubMed

    Hughes, Daniel; Nair, Sunil; Harvey, John N

    2017-12-01

    Objectives: To determine the necessary screening interval for retinopathy in diabetic patients with no retinopathy based on time to laser therapy and to assess long-term visual outcome following screening. Methods: In a population-based community screening programme in North Wales, 2917 patients were followed until death or for approximately 12 years. At screening, 2493 had no retinopathy; 424 had mostly minor degrees of non-proliferative retinopathy. Data on timing of first laser therapy and visual outcome following screening were obtained from local hospitals and ophthalmology units. Results: Survival analysis showed that very few of the no-retinopathy-at-screening group required laser therapy in the early years compared with the non-proliferative retinopathy group (p < 0.001). After two years, <0.1% of the no-retinopathy-at-screening group required laser therapy, and at three years 0.2% (cumulative), lower rates of treatment than have been suggested by analyses of sight-threatening retinopathy determined photographically. At follow-up (mean 7.8 ± 4.6 years), mild to moderate visual impairment in one or both eyes due to diabetic retinopathy was more common in those with retinopathy at screening (26% vs. 5%, p < 0.001), but blindness due to diabetes occurred in only 1 in 1000. Conclusions: Optimum screening intervals should be determined from time to active treatment. Based on the requirement for laser therapy, the screening interval for diabetic patients with no retinopathy can be extended to two to three years. Patients who attend for retinal screening and treatment who have no or non-proliferative retinopathy now have a very low risk of eventual blindness from diabetes.

  19. RISMA: A Rule-based Interval State Machine Algorithm for Alerts Generation, Performance Analysis and Monitoring Real-Time Data Processing

    NASA Astrophysics Data System (ADS)

    Laban, Shaban; El-Desouky, Aly

    2013-04-01

    The monitoring of real-time systems is a challenging and complicated process, so there is a continuous need to improve the monitoring process through the use of new intelligent techniques and algorithms for detecting exceptions and anomalous behaviours and for generating the necessary alerts during workflow monitoring of such systems. Interval-based or period-based theorems have been discussed, analysed, and used by many researchers in Artificial Intelligence (AI), philosophy, and linguistics. As explained by Allen, there are 13 relations between any two intervals. There have also been many studies of interval-based temporal reasoning and logics over the past decades. Interval-based theorems can be used for monitoring real-time interval-based data processing. However, increasing the number of processed intervals makes the implementation of such theorems a complex and time-consuming process, as the relationships between the intervals increase exponentially. To overcome this problem, this paper presents a Rule-based Interval State Machine Algorithm (RISMA) for processing, monitoring, and analysing the behaviour of interval-based data received from real-time sensors. The proposed intelligent algorithm uses the Interval State Machine (ISM) approach to model any number of interval-based data points as well-defined states and to perform inference over them. An interval-based state transition model and methodology are presented to identify the relationships between the different states of the proposed algorithm. By using such a model, the unlimited number of relationships between similarly large numbers of intervals can be reduced to only 18 direct relationships using the proposed well-defined states. For testing the proposed algorithm, the necessary inference rules and code have been designed and applied to the continuous data received in near real-time from the stations of the International Monitoring System (IMS) by the International Data Centre (IDC) of the Preparatory
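
    Allen's 13 basic relations, mentioned above, are straightforward to enumerate for a pair of intervals; a compact classifier is sketched below (interval endpoints are assumed ordered, start <= end):

```python
def allen_relation(a, b):
    """Classify the Allen relation between intervals a = (a0, a1) and
    b = (b0, b1); returns one of the 13 basic relations."""
    (a0, a1), (b0, b1) = a, b
    if a1 < b0:  return "before"
    if b1 < a0:  return "after"
    if a1 == b0: return "meets"
    if b1 == a0: return "met-by"
    if a0 == b0 and a1 == b1: return "equal"
    if a0 == b0: return "starts" if a1 < b1 else "started-by"
    if a1 == b1: return "finishes" if a0 > b0 else "finished-by"
    if b0 < a0 and a1 < b1: return "during"
    if a0 < b0 and b1 < a1: return "contains"
    # Remaining cases: partial overlap with all four endpoints distinct
    return "overlaps" if a0 < b0 else "overlapped-by"

print(allen_relation((0, 5), (5, 9)))   # meets
print(allen_relation((1, 4), (0, 9)))   # during
print(allen_relation((0, 6), (4, 9)))   # overlaps
```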

  20. A Two-Stage Kalman Filter Approach for Robust and Real-Time Power System State Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jinghe; Welch, Greg; Bishop, Gary

    2014-04-01

    As electricity demand continues to grow and renewable energy increases its penetration in the power grid, real-time state estimation becomes essential for system monitoring and control. Recent developments in phasor technology make it possible with high-speed time-synchronized data provided by Phasor Measurement Units (PMUs). In this paper we present a two-stage Kalman filter approach to estimate the static state of voltage magnitudes and phase angles, as well as the dynamic state of generator rotor angles and speeds. Kalman filters achieve optimal performance only when the system noise characteristics have known statistical properties (zero-mean, Gaussian, and spectrally white). In practice, however, the process and measurement noise models are usually difficult to obtain. Thus we have developed the Adaptive Kalman Filter with Inflatable Noise Variances (AKF with InNoVa), an algorithm that can efficiently identify and reduce the impact of incorrect system modeling and/or erroneous measurements. In stage one, we estimate the static state from raw PMU measurements using the AKF with InNoVa; then in stage two, the estimated static state is fed into an extended Kalman filter to estimate the dynamic state. Simulations demonstrate its robustness to sudden changes of system dynamics and erroneous measurements.
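
    A toy analogue of the inflatable-noise-variance idea is sketched below for a scalar random walk: when the normalized innovation is implausibly large, the measurement variance is inflated so the gross error is down-weighted. This illustrates the general mechanism only; it is not the authors' AKF with InNoVa.

```python
import numpy as np

rng = np.random.default_rng(7)

# Scalar random-walk state observed by a PMU-like measurement; one gross
# measurement error is injected at k = 60
F, H, Q, R0 = 1.0, 1.0, 0.01, 0.04
x_true, x_hat, P = 0.0, 0.0, 1.0

for k in range(100):
    x_true += rng.normal(scale=np.sqrt(Q))
    z = x_true + rng.normal(scale=np.sqrt(R0)) + (5.0 if k == 60 else 0.0)
    # Predict
    x_hat, P = F * x_hat, F * P * F + Q
    # Inflate the measurement variance when the normalized innovation is
    # implausible (beyond a 3-sigma gate), down-weighting the bad measurement
    nu = z - H * x_hat
    S0 = H * P * H + R0
    R = R0 * max(1.0, nu**2 / (9.0 * S0))
    # Update with the (possibly inflated) variance
    K = P * H / (H * P * H + R)
    x_hat, P = x_hat + K * nu, (1 - K * H) * P

print(f"final estimate {x_hat:.3f} vs truth {x_true:.3f}")
```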

  1. The influence of sampling interval on the accuracy of trail impact assessment

    USGS Publications Warehouse

    Leung, Y.-F.; Marion, J.L.

    1999-01-01

    Trail impact assessment and monitoring (IA&M) programs have been growing in importance and application in recreation resource management at protected areas. Census-based and sampling-based approaches have been developed in such programs, with systematic point sampling being the most common survey design. This paper examines the influence of sampling interval on the accuracy of estimates for selected trail impact problems. A complete census of four impact types on 70 trails in Great Smoky Mountains National Park was utilized as the base data set for the analyses. The census data were resampled at increasing intervals to create a series of simulated point data sets. Estimates of frequency of occurrence and lineal extent for the four impact types were compared with the census data set. The accuracy loss in lineal extent estimates with increasing sampling interval varied across impact types, while the loss in frequency-of-occurrence estimates was consistent, approximating an inverse asymptotic curve. These findings suggest that systematic point sampling may be an appropriate method for estimating the lineal extent but not the frequency of trail impacts. Sampling intervals of less than 100 m appear to yield an excellent level of accuracy for the four impact types evaluated. Multiple regression analysis results suggest that appropriate sampling intervals are more likely to be determined by the type of impact in question than by the length of trail. The census-based trail survey and the resampling-simulation method developed in this study can be a valuable first step in establishing long-term trail IA&M programs, in which an optimal sampling interval range with acceptable accuracy is determined before investing effort in data collection.
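
    The resampling-simulation method lends itself to a very short sketch: build a synthetic 1-m-resolution census of one impact type, resample it at increasing intervals, and compare the lineal extent estimates with the census value (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(8)

# Census stand-in: 1-m segments along a 10-km trail, 1 = impact present,
# occurring in a few dozen contiguous stretches
trail = np.zeros(10_000, dtype=int)
for start in rng.integers(0, 9_500, size=30):
    trail[start:start + int(rng.integers(20, 200))] = 1
true_extent = trail.sum()                      # lineal extent in metres

for interval in (20, 50, 100, 200, 500):
    points = trail[::interval]                 # systematic point sample
    est = points.mean() * trail.size           # scale point frequency to length
    print(f"interval {interval:>3} m: extent {est:6.0f} m "
          f"({100 * (est - true_extent) / true_extent:+5.1f}% error)")
```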

  2. EVALUATION OF A TWO-STAGE PASSIVE TREATMENT APPROACH FOR MINING INFLUENCE WATERS

    EPA Science Inventory

    A two-stage passive treatment approach was assessed at bench-scale using two Colorado Mining Influenced Waters (MIWs). The first-stage was a limestone drain with the purpose of removing iron and aluminum and mitigating the potential effects of mineral acidity. The second stage w...

  3. Waste management with recourse: an inexact dynamic programming model containing fuzzy boundary intervals in objectives and constraints.

    PubMed

    Tan, Q; Huang, G H; Cai, Y P

    2010-09-01

    The existing inexact optimization methods based on interval-parameter linear programming can hardly address problems where coefficients in objective functions are subject to dual uncertainties. In this study, a superiority-inferiority-based inexact fuzzy two-stage mixed-integer linear programming (SI-IFTMILP) model was developed for supporting municipal solid waste management under uncertainty. The developed SI-IFTMILP approach is capable of tackling dual uncertainties presented as fuzzy boundary intervals (FuBIs) in not only constraints, but also objective functions. Uncertainties expressed as a combination of intervals and random variables could also be explicitly reflected. An algorithm with high computational efficiency was provided to solve SI-IFTMILP. SI-IFTMILP was then applied to a long-term waste management case to demonstrate its applicability. Useful interval solutions were obtained. SI-IFTMILP could help generate dynamic facility-expansion and waste-allocation plans, as well as provide corrective actions when anticipated waste management plans are violated. It could also greatly reduce system-violation risk and enhance system robustness through examining two sets of penalties resulting from variations in fuzziness and randomness. Moreover, four possible alternative models were formulated to solve the same problem; solutions from them were then compared with those from SI-IFTMILP. The results indicate that SI-IFTMILP could provide more reliable solutions than the alternatives. 2010 Elsevier Ltd. All rights reserved.

  4. Method and apparatus for removing coarse unentrained char particles from the second stage of a two-stage coal gasifier

    DOEpatents

    Donath, Ernest E.

    1976-01-01

    A method and apparatus for removing oversized, unentrained char particles from a two-stage coal gasification process so as to prevent clogging or plugging of the communicating passage between the two gasification stages. In the first stage of the process, recycled process char passes upwardly while reacting with steam and oxygen to yield a first stage synthesis gas containing hydrogen and oxides of carbon. In the second stage, the synthesis gas passes upwardly with coal and steam which react to yield partially gasified char entrained in a second stage product gas containing methane, hydrogen, and oxides of carbon. Agglomerated char particles, which result from caking coal particles in the second stage and are too heavy to be entrained in the second stage product gas, are removed through an outlet in the bottom of the second stage, the particles being separated from smaller char particles by a counter-current of steam injected into the outlet.

  5. Development of Constraint Force Equation Methodology for Application to Multi-Body Dynamics Including Launch Vehicle Stage Separation

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Toniolo, Matthew D.; Tartabini, Paul V.; Roithmayr, Carlos M.; Albertson, Cindy W.; Karlgaard, Christopher D.

    2016-01-01

    The objective of this report is to develop and implement a physics-based method for analysis and simulation of multi-body dynamics including launch vehicle stage separation. The constraint force equation (CFE) methodology discussed in this report provides such a framework for modeling constraint forces and moments acting at joints when the vehicles are still connected. Several stand-alone test cases involving various types of joints were developed to validate the CFE methodology. The results were compared with ADAMS(Registered Trademark) and Autolev, two different industry standard benchmark codes for multi-body dynamic analysis and simulations. However, these two codes are not designed for aerospace flight trajectory simulations. After this validation exercise, the CFE algorithm was implemented in Program to Optimize Simulated Trajectories II (POST2) to provide a capability to simulate end-to-end trajectories of launch vehicles including stage separation. The POST2/CFE methodology was applied to the STS-1 Space Shuttle solid rocket booster (SRB) separation and Hyper-X Research Vehicle (HXRV) separation from the Pegasus booster as a further test and validation for its application to launch vehicle stage separation problems. Finally, to demonstrate end-to-end simulation capability, POST2/CFE was applied to the ascent, orbit insertion, and booster return of a reusable two-stage-to-orbit (TSTO) vehicle concept. With these validation exercises, POST2/CFE software can be used for performing conceptual level end-to-end simulations, including launch vehicle stage separation, for problems similar to those discussed in this report.

  6. Likelihood-Based Confidence Intervals in Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Oort, Frans J.

    2011-01-01

    In exploratory or unrestricted factor analysis, all factor loadings are free to be estimated. In oblique solutions, the correlations between common factors are free to be estimated as well. The purpose of this article is to show how likelihood-based confidence intervals can be obtained for rotated factor loadings and factor correlations, by…

  7. Reliability of confidence intervals calculated by bootstrap and classical methods using the FIA 1-ha plot design

    Treesearch

    H. T. Schreuder; M. S. Williams

    2000-01-01

    In simulation sampling from forest populations using sample sizes of 20, 40, and 60 plots respectively, confidence intervals based on the bootstrap (accelerated, percentile, and t-distribution based) were calculated and compared with classical t confidence intervals for mapped populations and subdomains within those populations. A 68.1 ha mapped...

  8. Feasibility Study of SSTO Base Heating Simulation in Pulsed-Type Facilities

    NASA Technical Reports Server (NTRS)

    Park, Chung Sik; Sharma, Surendra; Edwards, Thomas A. (Technical Monitor)

    1995-01-01

    A laboratory simulation of the base heating environment of the proposed reusable Single-Stage-To-Orbit (SSTO) vehicle during its ascent flight was proposed. The rocket engine produces CO2 and H2, which are the main combustible components of the exhaust effluent. The burning of these species, known as afterburning, raises the base-region gas temperature as well as the base heating. To determine the heat flux on the SSTO vehicle, the current simulation focuses on the thermochemistry of the afterburning, the thermophysical properties of the base-region gas, and the ensuing radiation from the gas. By extrapolating from the Saturn flight data, the Damkohler number for the afterburning of the SSTO vehicle is estimated to be of the order of 10. Limitations on material strengths constrain the laboratory simulation of the flight Damkohler number as well as other flow parameters. A plan is presented for impulse facilities using miniature rocket engines which generate the simulated rocket plume by electrically heating a H2/CO2 mixture.

  9. Bootstrap Prediction Intervals in Non-Parametric Regression with Applications to Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Kumar, Sricharan; Srivistava, Ashok N.

    2012-01-01

    Prediction intervals provide a measure of the probable interval in which the outputs of a regression model can be expected to occur. Subsequently, these prediction intervals can be used to determine if the observed output is anomalous or not, conditioned on the input. In this paper, a procedure for determining prediction intervals for outputs of nonparametric regression models using bootstrap methods is proposed. Bootstrap methods allow for a non-parametric approach to computing prediction intervals with no specific assumptions about the sampling distribution of the noise or the data. The asymptotic fidelity of the proposed prediction intervals is theoretically proved. Subsequently, the validity of the bootstrap based prediction intervals is illustrated via simulations. Finally, the bootstrap prediction intervals are applied to the problem of anomaly detection on aviation data.
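
    A minimal version of the procedure, assuming a k-nearest-neighbour smoother as the non-parametric regressor and a residual bootstrap, is sketched below; the data and the choice of a 95% interval as the anomaly threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def knn_smooth(xtr, ytr, xq, k=15):
    """k-nearest-neighbour mean: a simple non-parametric regressor."""
    idx = np.argsort(np.abs(xq[:, None] - xtr[None, :]), axis=1)[:, :k]
    return ytr[idx].mean(axis=1)

n = 300
x = np.sort(rng.uniform(0, 10, n))
y = np.sin(x) + rng.normal(scale=0.3, size=n)
xq = np.linspace(0.5, 9.5, 50)

# Residual bootstrap: refit on resampled pairs, add resampled residuals,
# and read the prediction interval off the percentiles -- no Gaussian
# assumption on the noise
resid = y - knn_smooth(x, y, x)
preds = np.empty((500, xq.size))
for b in range(500):
    i = rng.integers(0, n, n)
    preds[b] = knn_smooth(x[i], y[i], xq) + rng.choice(resid, size=xq.size)
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)

# Anomaly detection: flag an observed output outside the interval for its input
y_obs = 2.5                        # hypothetical observation at xq[0]
print("anomalous" if not (lo[0] <= y_obs <= hi[0]) else "normal")
```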

  10. Two dimensional simulation of patternable conducting polymer electrode based organic thin film transistor

    NASA Astrophysics Data System (ADS)

    Nair, Shiny; Kathiresan, M.; Mukundan, T.

    2018-02-01

    Device characteristics of an organic thin film transistor (OTFT) fabricated with conducting polyaniline:polystyrene sulphonic acid (PANi-PSS) electrodes, patterned by the Parylene lift-off method, are systematically analyzed by way of two-dimensional numerical simulation. The device simulation was performed taking into account field-dependent mobility, a low-mobility layer at the electrode-semiconductor interface, the trap distribution in the pentacene film, and trapped charge at the organic/insulator interface. The electrical characteristics of the bottom-contact thin film transistor with PANi-PSS electrodes and a pentacene active layer are superior to those with palladium electrodes due to a lower charge injection barrier. Contact resistance was extracted in both cases by the transfer line method (TLM). The charge concentration and potential profile extracted from the two-dimensional numerical simulation were used to explain the observed electrical characteristics. The simulated device characteristics not only matched the experimental electrical characteristics but also gave insight into the charge injection, transport, and trap properties of the OTFTs as a function of electrode material from the perspective of transistor operation.

  11. Temporal downscaling of decadal sediment load estimates to a daily interval for use in hindcast simulations

    USGS Publications Warehouse

    Ganju, N.K.; Knowles, N.; Schoellhamer, D.H.

    2008-01-01

    In this study we used hydrologic proxies to develop a daily sediment load time series that, when integrated, agrees with decadal sediment load estimates. Hindcast simulations of bathymetric change in estuaries require daily sediment loads from major tributary rivers to capture the episodic delivery of sediment during multi-day freshwater flow pulses. Two independent decadal sediment load estimates are available for the Sacramento/San Joaquin River Delta, California prior to 1959, but they must be downscaled to a daily interval for use in hindcast models. Daily flow and sediment load data for the Delta are available after 1930 and 1959, respectively, so bathymetric change simulations for San Francisco Bay prior to this require a method for generating daily sediment load estimates into the Delta. We used two historical proxies, monthly rainfall and unimpaired flow magnitudes, to generate monthly unimpaired flows to the Sacramento/San Joaquin Delta for the 1851-1929 period. This step generated the shape of the monthly hydrograph. These historical monthly flows were compared to unimpaired monthly flows from the modern era (1967-1987), and a least-squares metric selected a modern water year analogue for each historical water year. The daily hydrograph for the modern analogue was then assigned to the historical year and scaled to match the flow volume estimated by dendrochronology methods, providing the correct total flow for the year. We applied a sediment rating curve to this time series of daily flows to generate daily sediment loads for 1851-1958. The rating curve was calibrated with the two independent decadal sediment load estimates over two distinct periods. This novel technique retained the timing and magnitude of freshwater flows and sediment loads, without damping variability or net sediment loads to San Francisco Bay. The time series represents the hydraulic mining period with sustained periods of increased sediment loads, followed by a dramatic decrease after 1910.
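
    The analogue-matching step lends itself to a short sketch. The code below, with hypothetical inputs, picks the modern water year whose monthly hydrograph shape is closest in a least-squares sense and rescales its daily flows to the reconstructed annual volume; function and variable names are assumptions, not the authors' code.

```python
import numpy as np

# Illustrative sketch of the analogue-year downscaling step.
def pick_analogue(hist_monthly, modern_monthly_by_year):
    """hist_monthly: 12 monthly flows; modern_monthly_by_year: {year: 12 flows}."""
    def shape(v):
        v = np.asarray(v, dtype=float)
        return v / v.sum()                   # compare hydrograph shapes, not volumes
    return min(modern_monthly_by_year,
               key=lambda yr: np.sum((shape(modern_monthly_by_year[yr])
                                      - shape(hist_monthly))**2))

def downscale(hist_annual_volume, analogue_daily):
    daily = np.asarray(analogue_daily, dtype=float)
    return daily * hist_annual_volume / daily.sum()   # scale to target volume

# Tiny demo with made-up monthly hydrographs:
modern = {1967: np.r_[1, 1, 2, 4, 8, 6, 3, 2, 1, 1, 1, 1],
          1968: np.r_[2, 3, 5, 9, 9, 4, 2, 1, 1, 1, 1, 1]}
print(pick_analogue(np.r_[1, 2, 3, 6, 9, 5, 2, 1, 1, 1, 1, 1], modern))
# Daily loads would then follow from a rating curve, e.g. load = a * Q**b, with
# a and b calibrated so the integrated loads match the decadal estimates.
```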

  12. Universal thermal corrections to single interval entanglement entropy for two dimensional conformal field theories.

    PubMed

    Cardy, John; Herzog, Christopher P

    2014-05-02

    We consider single interval Rényi and entanglement entropies for a two dimensional conformal field theory on a circle at nonzero temperature. Assuming that the finite size of the system introduces a unique ground state with a nonzero mass gap, we calculate the leading corrections to the Rényi and entanglement entropy in a low temperature expansion. These corrections have a universal form for any two dimensional conformal field theory that depends only on the size of the mass gap and its degeneracy. We analyze the limits where the size of the interval becomes small and where it becomes close to the size of the spatial circle.

  13. PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems.

    PubMed

    Ghaffarizadeh, Ahmadreza; Heiland, Randy; Friedman, Samuel H; Mumenthaler, Shannon M; Macklin, Paul

    2018-02-01

    Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal "virtual laboratory" for such multicellular systems simulates both the biochemical microenvironment (the "stage") and many mechanically and biochemically interacting cells (the "players" upon the stage). PhysiCell, a physics-based multicellular simulator, is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility "out of the box." The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations of 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a "cellular cargo delivery" system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant
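
    As a generic illustration of the agent-based pattern described above (deliberately not PhysiCell's actual C++ API), the sketch below has each cell agent sample a substrate field and update its volume, survival, and position, with large cells dividing; all rates and thresholds are invented for the example.

```python
import numpy as np

# Generic agent-based cell loop (illustrative only, NOT the PhysiCell API).
rng = np.random.default_rng(1)
cells = [{"pos": rng.uniform(0, 100, 2), "vol": 1.0, "alive": True}
         for _ in range(50)]

def oxygen(pos):
    # stand-in for a diffusing substrate solved on a mesh
    return float(np.exp(-np.linalg.norm(pos - 50.0) / 40.0))

for step in range(200):
    daughters = []
    for c in cells:
        if not c["alive"]:
            continue
        o2 = oxygen(c["pos"])
        c["vol"] += 0.02 * o2                    # growth scales with oxygen
        if o2 < 0.2 and rng.random() < 0.05:
            c["alive"] = False                   # necrosis where oxygen is low
        c["pos"] += rng.normal(0.0, 0.5, 2)      # stochastic motility
        if c["vol"] > 2.0:                       # division resets volume
            c["vol"] = 1.0
            daughters.append({"pos": c["pos"].copy(), "vol": 1.0, "alive": True})
    cells += daughters

print("live cells:", sum(c["alive"] for c in cells))
```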

  14. A Novel Finite-Sum Inequality-Based Method for Robust H∞ Control of Uncertain Discrete-Time Takagi-Sugeno Fuzzy Systems With Interval-Like Time-Varying Delays.

    PubMed

    Zhang, Xian-Ming; Han, Qing-Long; Ge, Xiaohua

    2017-09-22

    This paper is concerned with the problem of robust H∞ control of an uncertain discrete-time Takagi-Sugeno fuzzy system with an interval-like time-varying delay. A novel finite-sum inequality-based method is proposed to provide a tighter estimation on the forward difference of certain Lyapunov functional, leading to a less conservative result. First, an auxiliary vector function is used to establish two finite-sum inequalities, which can produce tighter bounds for the finite-sum terms appearing in the forward difference of the Lyapunov functional. Second, a matrix-based quadratic convex approach is employed to equivalently convert the original matrix inequality including a quadratic polynomial on the time-varying delay into two boundary matrix inequalities, which delivers a less conservative bounded real lemma (BRL) for the resultant closed-loop system. Third, based on the BRL, a novel sufficient condition on the existence of suitable robust H∞ fuzzy controllers is derived. Finally, two numerical examples and a computer-simulated truck-trailer system are provided to show the effectiveness of the obtained results.

  15. Two technicians apply insulation to S-II second stage

    NASA Technical Reports Server (NTRS)

    1964-01-01

    Two technicians apply insulation to the outer surface of the S-II second stage booster for the Saturn V moon rocket. The towering 363-foot Saturn V was a multi-stage, multi-engine launch vehicle standing taller than the Statue of Liberty. Altogether, the Saturn V engines produced as much power as 85 Hoover Dams.

  16. Removal of cesium from simulated liquid waste with countercurrent two-stage adsorption followed by microfiltration.

    PubMed

    Han, Fei; Zhang, Guang-Hui; Gu, Ping

    2012-07-30

    Copper ferrocyanide (CuFC) was used as an adsorbent to remove cesium. Jar test results showed that the adsorption capacity of CuFC was better than that of potassium zinc hexacyanoferrate. Lab-scale tests were performed with an adsorption-microfiltration process, and the mean decontamination factor (DF) was 463 when the initial cesium concentration was 101.3 μg/L, the dosage of CuFC was 40 mg/L and the adsorption time was 20 min. The cesium concentration in the effluent continuously decreased with operation time, which indicated that the used adsorbent retained its adsorption capacity. To use this capacity, experiments on a countercurrent two-stage adsorption (CTA)-microfiltration (MF) process were carried out with CuFC adsorption combined with membrane separation. A calculation method for determining the cesium concentration in the effluent was given, and batch tests in a pressure cup were performed to verify the calculation method. The results showed that the experimental values fitted well with the calculated values in the CTA-MF process. The mean DF was 1123 when the dilution factor was 0.4, the initial cesium concentration was 98.75 μg/L and the dosage of CuFC and adsorption time were the same as those used in the lab-scale test. The DF obtained by the CTA-MF process was more than three times higher than that of single-stage adsorption in the jar test. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Two-stage color palettization for error diffusion

    NASA Astrophysics Data System (ADS)

    Mitra, Niloy J.; Gupta, Maya R.

    2002-06-01

    Image-adaptive color palettization chooses a reduced number of colors to represent an image. Palettization is one way to decrease storage and memory requirements for low-end displays. Palettization is generally approached as a clustering problem, where one attempts to find the k palette colors that minimize the average distortion over all the colors in an image. This would be the optimal approach if the image were to be displayed with each pixel quantized to the closest palette color. However, to improve image quality, the palettization may be followed by error diffusion. In this work, we propose a two-stage palettization where the first stage finds some m << k clusters, and the second stage chooses palette points that cover the spread of each of the m clusters. After error diffusion, this method leads to better image quality at lower computational cost and with faster display speed than full k-means palettization.
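
    A rough sketch of the two-stage idea follows, assuming k-means for the coarse stage and a principal-axis covering rule for the second stage; the cluster counts and covering rule are illustrative choices, not the authors' exact algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hedged sketch: stage 1 finds m coarse clusters (m << k); stage 2 spends the
# remaining palette entries covering each cluster's spread, which suits error
# diffusion better than quantizing to cluster centers alone.

def two_stage_palette(pixels, m=4, k=16):
    """pixels: (N, 3) float RGB array in [0, 1]; returns (k, 3) palette."""
    coarse = KMeans(n_clusters=m, n_init=4, random_state=0).fit(pixels)
    palette = []
    per_cluster = k // m
    for i in range(m):
        members = pixels[coarse.labels_ == i]
        center = coarse.cluster_centers_[i]
        # place palette points along the cluster's principal axis to cover spread
        _, _, Vt = np.linalg.svd(members - center, full_matrices=False)
        axis = Vt[0]
        proj = (members - center) @ axis
        for q in np.linspace(proj.min(), proj.max(), per_cluster):
            palette.append(center + q * axis)
    return np.clip(np.array(palette), 0.0, 1.0)

pixels = np.random.default_rng(0).random((5000, 3))
print(two_stage_palette(pixels).shape)   # (16, 3)
```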

  18. A Two-Stage Microfluidic Device for the Isolation and Capture of Circulating Tumor Cells

    NASA Astrophysics Data System (ADS)

    Cook, Andrew; Belsare, Sayali; Giorgio, Todd; Mu, Richard

    2014-11-01

    Analysis of circulating tumor cells (CTCs) can be critical for studying how tumors grow and metastasize, in addition to personalizing treatment for cancer patients. CTCs are rare events in blood, which makes them difficult to isolate from the bloodstream. Two microfluidic devices have been developed to separate CTCs from blood. The first is a double-spiral device that focuses cells into streams, the positions of which are determined by cell diameter. The second device uses ligand-coated magnetic nanoparticles that selectively attach to CTCs; the nanoparticles then pull CTCs out of solution using a magnetic field. These two devices will be combined into a single two-stage microfluidic device that will capture CTCs more efficiently than either device on its own. The first stage depletes the number of blood cells in the sample by size-based separation. The second stage will magnetically remove CTCs from solution for study and culturing. Thus far, size-based separation has been achieved. Research will also focus on understanding the equations that govern fluid dynamics and magnetic fields in order to determine how the manipulation of microfluidic parameters, such as dimensions and flow rate, will affect integration and optimization of the two-stage device. NSF-CREST: Center for Physics and Chemistry of Materials. HRD-0420516; Department of Defense, Peer Reviewed Medical Research Program Award W81XWH-13-1-0397.

  19. Nutrients removal from undiluted cattle farm wastewater by the two-stage process of microalgae-based wastewater treatment.

    PubMed

    Lv, Junping; Liu, Yang; Feng, Jia; Liu, Qi; Nan, Fangru; Xie, Shulian

    2018-05-24

    Chlorella vulgaris was selected from five freshwater microalgal strains of Chlorophyta and showed good potential for nutrient removal from undiluted cattle farm wastewater. By the end of treatment, 62.30%, 81.16% and 85.29% of chemical oxygen demand (COD), ammonium (NH4+-N) and total phosphorus (TP), respectively, had been removed. Two two-stage processes were then established to enhance nutrient removal efficiency to meet the discharge standards of China. Process A was biological treatment via C. vulgaris followed by a second biological treatment via C. vulgaris, and process B was biological treatment via C. vulgaris followed by activated carbon adsorption. After 3-5 d of treatment of the wastewater via the two processes, the removal efficiencies of COD, NH4+-N and TP were 91.24%-92.17%, 83.16%-94.27% and 90.98%-94.41%, respectively. The integrated two-stage process could strengthen nutrient removal from undiluted cattle farm wastewater with high organic substance and nitrogen concentrations. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    PubMed

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is a growing interest in conducting randomized clinical trials when one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single stage personalized treatment strategy. The proposed method is based on inverting a plugin projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. The proposed method requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

  1. Simulating wildfire spread behavior between two NASA Active Fire data timeframes

    NASA Astrophysics Data System (ADS)

    Adhikari, B.; Hodza, P.; Xu, C.; Minckley, T. A.

    2017-12-01

    Although NASA's Active Fire dataset is considered valuable for mapping the spatial distribution and extent of wildfires across the world, the data are only available at approximately 12-hour intervals, creating uncertainties and risks associated with fire spread and behavior between two Visible Infrared Imaging Radiometer Suite (VIIRS) data collection timeframes. Our study seeks to close this information gap for the United States by using the latest Active Fire data, collected for instance around 0130 hours, as an ignition source and critical input to a wildfire model that uniquely incorporates forecasted and real-time weather conditions to predict the fire perimeter at the next 12-hour reporting time (i.e., around 1330 hours). The model ingests highly dynamic variables such as fuel moisture, temperature, relative humidity, and wind, among others, and prompts a Monte Carlo simulation exercise that uses a varying range of possible values to evaluate all possible wildfire behaviors. The Monte Carlo simulation implemented in this model provides a measure of the relative wildfire risk level at various locations based on the number of times those sites are intersected by simulated fire perimeters. Model calibration is achieved using data at the next reporting time (i.e., after 12 hours) to enhance the predictive quality at further time steps. While initial results indicate that the calibrated model can predict the overall geometry and direction of wildland fire spread, the model seems to over-predict the sizes of most fire perimeters, possibly due to unaccounted-for fire suppression activities. Nonetheless, the results of this study show great promise in aiding wildland fire tracking, fighting and risk management.
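
    The Monte Carlo scoring idea can be sketched with a toy cellular spread model: sample weather inputs per run, spread fire for 12 hours, and estimate relative risk from how often each cell burns. The grid size, probabilities, and wind bias below are all assumed, not the authors' model.

```python
import numpy as np

# Toy Monte Carlo burn-probability sketch (illustrative parameters throughout).
rng = np.random.default_rng(0)
N, runs, hours = 100, 200, 12
burn_count = np.zeros((N, N))

for _ in range(runs):
    wind = rng.uniform(0.0, 1.0, 2)          # sampled wind vector for this run
    spread_p = rng.uniform(0.2, 0.5)         # sampled fuel/moisture spread prob.
    burning = np.zeros((N, N), dtype=bool)
    burning[N//2, N//2] = True               # Active Fire detection as ignition
    for _ in range(hours):
        idx = np.argwhere(burning)           # snapshot of currently burning cells
        for i, j in idx:
            for di, dj in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
                # bias spread toward the sampled wind direction
                p = spread_p * (1.0 + 0.5*np.sign(di*wind[0] + dj*wind[1]))
                ni, nj = i + di, j + dj
                if 0 <= ni < N and 0 <= nj < N and rng.random() < p:
                    burning[ni, nj] = True
    burn_count += burning

risk = burn_count / runs   # relative risk level per cell, as in the abstract
print("max relative risk:", risk.max())
```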

  2. Simulation data for an estimation of the maximum theoretical value and confidence interval for the correlation coefficient.

    PubMed

    Rocco, Paolo; Cilurzo, Francesco; Minghetti, Paola; Vistoli, Giulio; Pedretti, Alessandro

    2017-10-01

    The data presented in this article are related to the article titled "Molecular Dynamics as a tool for in silico screening of skin permeability" (Rocco et al., 2017) [1]. Knowledge of the confidence interval and maximum theoretical value of the correlation coefficient r can prove useful to estimate the reliability of developed predictive models, in particular when there is great variability in compiled experimental datasets. In this Data in Brief article, data from purposely designed numerical simulations are presented to show how much the maximum r value is worsened by increasing the data uncertainty. The corresponding confidence interval of r is determined by using the Fisher r → Z transform.
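
    For reference, the construction the data article builds on is the textbook Fisher transform: map r to the z scale, form a normal interval there, and back-transform.

```latex
% Standard Fisher r -> Z interval for a sample correlation r from n pairs:
\[
  z = \tfrac{1}{2}\ln\frac{1+r}{1-r}, \qquad
  z_{\pm} = z \pm \frac{z_{1-\alpha/2}}{\sqrt{n-3}}, \qquad
  r_{\pm} = \tanh\left(z_{\pm}\right)
\]
```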

  3. Advanced Interval Type-2 Fuzzy Sliding Mode Control for Robot Manipulator.

    PubMed

    Hwang, Ji-Hwan; Kang, Young-Chang; Park, Jong-Wook; Kim, Dong W

    2017-01-01

    In this paper, advanced interval type-2 fuzzy sliding mode control (AIT2FSMC) for a robot manipulator is proposed. The proposed AIT2FSMC is a combination of an interval type-2 fuzzy system and sliding mode control. The interval type-2 fuzzy system is designed to resemble a feedback linearization (FL) control law, and a sliding mode controller is designed to compensate for the approximation error between the FL control law and the interval type-2 fuzzy system. The tuning algorithms are derived in the sense of the Lyapunov stability theorem. A two-link rigid robot manipulator with nonlinearity is used as the test system, and the simulation results are presented to show the effectiveness of the proposed method, which can control an unknown system well.

  4. Three-Dimensional Unsteady Simulation of Aerodynamics and Heat Transfer in a Modern High Pressure Turbine Stage

    NASA Technical Reports Server (NTRS)

    Shyam, Vikram; Ameri, Ali

    2009-01-01

    Unsteady 3-D RANS simulations have been performed on a highly loaded transonic turbine stage and results are compared to steady calculations as well as to experiment. A low Reynolds number k-epsilon turbulence model is employed to provide closure for the RANS system. A phase-lag boundary condition is used in the tangential direction. This allows the unsteady simulation to be performed by using only one blade from each of the two rows. The objective of this work is to study the effect of unsteadiness on rotor heat transfer and to glean any insight into unsteady flow physics. The role of the stator wake passing on the pressure distribution at the leading edge is also studied. The simulated heat transfer and pressure results agreed favorably with experiment. The time-averaged heat transfer predicted by the unsteady simulation is higher than the heat transfer predicted by the steady simulation everywhere except at the leading edge. The shock structure formed due to stator-rotor interaction was analyzed. Heat transfer and pressure at the hub and casing were also studied. Thermal segregation was observed that leads to the heat transfer patterns predicted by steady and unsteady simulations to be different.

  5. A positive bacterial culture during re-implantation is associated with a poor outcome in two-stage exchange arthroplasty for deep infection.

    PubMed

    Akgün, D; Müller, M; Perka, C; Winkler, T

    2017-11-01

    The aim of this study was to identify the incidence of positive cultures during the second stage of a two-stage revision arthroplasty and to analyse the association between positive cultures and an infection-free outcome. This single-centre retrospective review of prospectively collected data included patients with a periprosthetic joint infection (PJI) of either the hip or the knee between 2013 and 2015, who were treated using a standardised diagnostic and therapeutic algorithm with two-stage exchange. Failure of treatment was assessed according to a definition determined by a Delphi-based consensus. Logistic regression analysis was performed to assess the predictors of positive culture and risk factors for failure. The mean follow-up was 33 months (24 to 48). A total of 163 two-stage revision arthroplasties involving 84 total hip arthroplasties (THAs) and 79 total knee arthroplasties (TKAs) were reviewed. In 27 patients (16.6%), ≥ 1 positive culture was identified at re-implantation, and eight (29.6%) of these subsequently failed, compared with 20 (14.7%) of the culture-negative patients. The same initially infecting organism was isolated at re-implantation in nine of 27 patients (33.3%); in no patient was the organism causing re-infection the same as that isolated at re-implantation. The risk of failure of treatment was significantly higher in patients with a positive culture (odds ratio (OR) 1.7; 95% confidence interval (CI) 1.0 to 3.0; p = 0.049) and in patients with a higher Charlson Comorbidity Index (OR 1.5; 95% CI 1.6 to 1.8; p = 0.001). A positive culture at re-implantation was independently associated with subsequent failure. Surgeons need to be aware of this association and should consider the medical optimisation of patients with severe comorbidities both before and during treatment. Cite this article: Bone Joint J 2017;99-B:1490-5. ©2017 The British Editorial Society of Bone & Joint Surgery.

  6. Notes on testing equality and interval estimation in Poisson frequency data under a three-treatment three-period crossover trial.

    PubMed

    Lui, Kung-Jong; Chang, Kuang-Chao

    2016-10-01

    When the frequency of event occurrences follows a Poisson distribution, we develop procedures for testing the equality of treatments and interval estimators for the ratio of mean frequencies between treatments under a three-treatment three-period crossover design. Using Monte Carlo simulations, we evaluate the performance of these test procedures and interval estimators in various situations. We note that all test procedures developed here can perform well with respect to Type I error even when the number of patients per group is moderate. We further note that the two weighted-least-squares (WLS) test procedures derived here are generally preferable to the other two commonly used test procedures in the contingency table analysis. We also demonstrate that both the interval estimators based on the WLS method and those based on the Mantel-Haenszel (MH) approach can perform well and are essentially of equal precision with respect to average length. We use a double-blind randomized three-treatment three-period crossover trial comparing salbutamol and salmeterol with a placebo with respect to the number of exacerbations of asthma to illustrate the use of these test procedures and estimators. © The Author(s) 2014.

  7. A Two-Stage Framework for 3D Face Reconstruction from RGBD Images.

    PubMed

    Wang, Kangkan; Wang, Xianwang; Pan, Zhigeng; Liu, Kai

    2014-08-01

    This paper proposes a new approach for 3D face reconstruction with RGBD images from an inexpensive commodity sensor. The challenges we face are: 1) substantial random noise and corruption are present in low-resolution depth maps; and 2) there is a high degree of variability in pose and face expression. We develop a novel two-stage algorithm that effectively maps low-quality depth maps to realistic face models. Each stage is targeted toward a certain type of noise. The first stage extracts sparse errors from depth patches through data-driven local sparse coding, while the second stage smooths noise on the boundaries between patches and reconstructs the global shape by combining local shapes using our template-based surface refinement. Our approach does not require any markers or user interaction. We perform quantitative and qualitative evaluations on both synthetic and real test sets. Experimental results show that the proposed approach is able to produce high-resolution 3D face models with high accuracy, even if the inputs are of low quality and have large variations in viewpoint and face expression.

  8. Effect of action potential duration on Tpeak-Tend interval, T-wave area and T-wave amplitude as indices of dispersion of repolarization: Theoretical and simulation study in the rabbit heart.

    PubMed

    Arteyeva, Natalia V; Azarov, Jan E

    The aim of the study was to differentiate the effects of dispersion of repolarization (DOR) and action potential duration (APD) on T-wave parameters considered as indices of DOR, namely the Tpeak-Tend interval, T-wave amplitude and T-wave area. The T-wave was simulated over a wide physiological range of DOR and APD using a realistic rabbit model based on experimental data, and a simplified mathematical formulation of T-wave formation was derived. Both the simulations and the mathematical formulation showed that the Tpeak-Tend interval and T-wave area are linearly proportional to DOR irrespective of the APD range, while T-wave amplitude is non-linearly proportional to DOR and inversely proportional to the minimal repolarization time, or minimal APD value. The Tpeak-Tend interval and T-wave area are the most accurate DOR indices independent of APD; T-wave amplitude can be considered an index of DOR when the level of APD is taken into account. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. A Two-Step Approach to Uncertainty Quantification of Core Simulators

    DOE PAGES

    Yankov, Artem; Collins, Benjamin; Klein, Markus; ...

    2012-01-01

    For the multiple sources of error introduced into the standard computational regime for simulating reactor cores, rigorous uncertainty analysis methods are available primarily to quantify the effects of cross section uncertainties. Two methods for propagating cross section uncertainties through core simulators are the XSUSA statistical approach and the "two-step" method. The XSUSA approach, which is based on the SUSA code package, is fundamentally a stochastic sampling method. Alternatively, the two-step method utilizes generalized perturbation theory in the first step and stochastic sampling in the second step. The consistency of these two methods in quantifying uncertainties in the multiplication factor and in the core power distribution was examined in the framework of phase I-3 of the OECD Uncertainty Analysis in Modeling benchmark. With the Three Mile Island Unit 1 core as a base model for analysis, the XSUSA and two-step methods were applied with certain limitations, and the results were compared to those produced by other stochastic sampling-based codes. Based on the uncertainty analysis results, conclusions were drawn as to the method that is currently more viable for computing uncertainties in burnup and transient calculations.
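
    The stochastic-sampling step can be illustrated in miniature: draw correlated perturbations of the uncertain inputs and read output uncertainty from the spread of model evaluations. The two-parameter cross-section "model" and its covariance below are mock values, not benchmark data.

```python
import numpy as np

# Minimal illustration of XSUSA-style stochastic sampling (in spirit only).
rng = np.random.default_rng(0)
mean = np.array([1.50, 0.40])                 # mock nu*Sigma_f and Sigma_a
cov = np.array([[1e-4, 2e-5],
                [2e-5, 4e-5]])                # assumed input covariance

def k_inf(xs):
    # toy infinite-medium model: k_inf = nu*Sigma_f / Sigma_a
    return xs[0] / xs[1]

samples = rng.multivariate_normal(mean, cov, size=1000)
k = np.array([k_inf(s) for s in samples])
print(f"k_inf = {k.mean():.4f} +/- {k.std(ddof=1):.4f}")
```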

  10. Uncertainty-based simulation-optimization using Gaussian process emulation: Application to coastal groundwater management

    NASA Astrophysics Data System (ADS)

    Rajabi, Mohammad Mahdi; Ketabchi, Hamed

    2017-12-01

    Combined simulation-optimization (S/O) schemes have long been recognized as a valuable tool in coastal groundwater management (CGM). However, previous applications have mostly relied on deterministic seawater intrusion (SWI) simulations. This is a questionable simplification, knowing that SWI models are inevitably prone to epistemic and aleatory uncertainty, and hence a management strategy obtained through S/O without consideration of uncertainty may result in significantly different real-world outcomes than expected. However, two key issues have hindered the use of uncertainty-based S/O schemes in CGM, which are addressed in this paper. The first issue is how to solve the computational challenges resulting from the need to perform massive numbers of simulations. The second issue is how the management problem is formulated in presence of uncertainty. We propose the use of Gaussian process (GP) emulation as a valuable tool in solving the computational challenges of uncertainty-based S/O in CGM. We apply GP emulation to the case study of Kish Island (located in the Persian Gulf) using an uncertainty-based S/O algorithm which relies on continuous ant colony optimization and Monte Carlo simulation. In doing so, we show that GP emulation can provide an acceptable level of accuracy, with no bias and low statistical dispersion, while tremendously reducing the computational time. Moreover, five new formulations for uncertainty-based S/O are presented based on concepts such as energy distances, prediction intervals and probabilities of SWI occurrence. We analyze the proposed formulations with respect to their resulting optimized solutions, the sensitivity of the solutions to the intended reliability levels, and the variations resulting from repeated optimization runs.
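
    A compact sketch of the emulation pattern follows, using scikit-learn's GP regressor and a stand-in one-dimensional "simulator"; in the real application the training set would come from expensive seawater-intrusion runs, and the emulator would be queried inside the Monte Carlo and optimization loops.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Train a cheap GP emulator on a few expensive runs, then query it many times.
def expensive_simulator(q):
    # stand-in response (e.g., salinity vs. pumping rate q), NOT a real SWI model
    return np.sin(3*q) + 0.5*q**2

rng = np.random.default_rng(0)
q_train = rng.uniform(0, 2, 25).reshape(-1, 1)    # 25 design points
y_train = expensive_simulator(q_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(q_train, y_train)

q_mc = rng.uniform(0, 2, 10000).reshape(-1, 1)    # cheap Monte Carlo on the GP
mu, sd = gp.predict(q_mc, return_std=True)
print("MC mean response:", mu.mean(), " mean emulator std:", sd.mean())
```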

  11. Thiol-vinyl systems as shape memory polymers and novel two-stage reactive polymer systems

    NASA Astrophysics Data System (ADS)

    Nair, Devatha P.

    2011-12-01

    The focus of this research was to formulate, characterize and tailor the reaction methodologies and material properties of thiol-vinyl systems to develop novel polymer platforms for a range of engineering applications. Thiol-ene photopolymers were demonstrated to exhibit several characteristics advantageous for shape memory polymer systems in a range of biomedical applications. The thiol-ene shape memory polymer systems were tough and flexible compared to the acrylic control systems, with glass transition temperatures between 30 and 40 °C, ideal for actuation at body temperature. The thiol-ene polymers also exhibited excellent shape fixity and a rapid and distinct shape memory actuation response, along with free strain recoveries of greater than 96% and constrained stress recoveries of 100%. Additionally, two-stage reactive thiol-acrylate systems were engineered as a polymer platform technology enabling two independent sets of polymer processing and material properties. There are distinct advantages to designing polymer systems that afford two distinct sets of material properties: an intermediate polymer that enables optimal handling and processing of the material (stage 1), while maintaining the ability to tune in different final properties that enable the optimal functioning of the polymeric material (stage 2). To demonstrate the range of applicability of the two-stage reactive systems, three specific applications were demonstrated: shape memory polymers, lithographic impression materials, and optical materials. The thiol-acrylate reactions exhibit a wide range of application versatility due to the range of available thiol and acrylate monomers as well as reaction mechanisms such as Michael addition reactions and free radical polymerizations. By designing a series of non-stoichiometric thiol-acrylate systems, a polymer network is initially formed via a base-catalyzed 'click' Michael addition reaction. This self-limiting reaction results in a Stage 1

  12. Gas pollutants removal in a single- and two-stage ejector-venturi scrubber.

    PubMed

    Gamisans, Xavier; Sarrà, Montserrat; Lafuente, F Javier

    2002-03-29

    The absorption of SO2 and NH3 from flue gas into NaOH and H2SO4 solutions, respectively, has been studied using an industrial-scale ejector-venturi scrubber. A statistical methodology is presented to characterise the performance of the scrubber by varying several factors such as gas pollutant concentration, air flowrate and absorbing solution flowrate. Several venturi tube constructions were assessed, including the use of a two-stage venturi tube. The results showed a strong influence of the liquid scrubbing flowrate on pollutant removal efficiency, whereas the initial pollutant concentration and the gas flowrate had only a slight influence. The use of a two-stage venturi tube considerably improved the absorption efficiency, although it increased energy consumption. The results of this study will be applicable to the optimal design of venturi-based absorbers for gaseous pollution control or chemical reactors.

  13. [Modified two-stage surgery for total auriculoplasty with autogenous rib cartilage].

    PubMed

    Zhang, Zheng-wen; Kang, Shen-song; Xie, Feng; Ma, Teng-xiao; Li, Lei; Zhai, Hong-feng; Chou, Hai-yan; Li, Hao; Zhong, Ai-mei; Zhang, Dong-yi

    2011-09-01

    To introduce a modified surgery for total auriculoplasty and the experience with it in one hundred and forty-six cases (155 ears). The procedure was a two-stage operation. The first stage involved fabrication and grafting of a costal cartilage framework. A U-shaped skin incision was made on the posterior edge of the lobule and the remnant ear cartilage was removed completely. The area for the insertion of the cartilage framework was undermined, and the skin flaps were sutured after insertion of the cartilage framework. The second-stage surgery was usually performed six months after the first-stage operation. The reconstructed auricle was elevated, and a costal cartilage block was fixed to the posterior part of the auricle. A temporoparietal fascia flap was then used to cover the costal cartilage block. Finally, the posterior aspect of the projected auricle was covered with a split-thickness skin graft. The incisions healed in one hundred and forty-one patients (150 ears) after the first-stage operation. Partial necrosis of the postauricular flap was observed in five cases (5 ears) after the first-stage operation, but no exposure or absorption of the cartilage took place. The skin grafts survived in one hundred and thirty-nine cases (147 ears) after the second-stage surgery. Partial necrosis of the skin graft was observed in seven cases (8 ears) but healed after one week of dressing changes. Ninety-four cases (97 ears) were followed up, while fifty-two cases (58 ears) were lost to follow-up. The follow-up at six months to two years showed satisfactory contour and projection of the constructed ears. This two-stage surgery is simple and ideal for auriculoplasty, with few complications.

  14. Evaluation of Two Unique Side Stick Controllers in a Fixed-Base Flight Simulator

    NASA Technical Reports Server (NTRS)

    Mayer, Jann; Cox, Timothy H.

    2003-01-01

    A handling qualities analysis has been performed on two unique side stick controllers in a fixed-base F-18 flight simulator. Each stick, which uses a larger range of motion than is common for similar controllers, has a moving elbow cup that accommodates movement of the entire arm for control. The sticks are compared to the standard center stick in several typical fighter aircraft tasks. Several trends are visible in the time histories, pilot ratings, and pilot comments. The aggressive pilots preferred the center stick, because the side sticks are underdamped, causing overshoots and oscillations when large motions are executed. The less aggressive pilots preferred the side sticks, because of the smooth motion and low breakout forces. The aggressive pilots collectively gave the worst ratings, probably because of increased sensitivity of the simulator (compared to the actual F-18 aircraft), which can cause pilot-induced oscillations when aggressive inputs are made. Overall, the elbow cup is not a positive feature, because using the entire arm for control inhibits precision. Pilots had difficulty measuring their performance, particularly during the offset landing task, and tended to overestimate.

  15. Evaluation of Disaster Preparedness Based on Simulation Exercises: A Comparison of Two Models.

    PubMed

    Rüter, Andres; Kurland, Lisa; Gryth, Dan; Murphy, Jason; Rådestad, Monica; Djalali, Ahmadreza

    2016-08-01

    The objective of this study was to highlight 2 models, the Hospital Incident Command System (HICS) and the Disaster Management Indicator model (DiMI), for evaluating the in-hospital management of a disaster situation through simulation exercises. Two disaster exercises, A and B, with similar scenarios were performed. Both exercises were evaluated with regard to actions, processes, and structures. After the exercises, the results were calculated and compared. In exercise A the HICS model indicated that 32% of the required positions for the immediate phase were taken under consideration with an average performance of 70%. For exercise B, the corresponding scores were 42% and 68%, respectively. According to the DiMI model, the results for exercise A were a score of 68% for management processes and 63% for management structure (staff skills). In B the results were 77% and 86%, respectively. Both models demonstrated acceptable results in relation to previous studies. More research in this area is needed to validate which of these methods best evaluates disaster preparedness based on simulation exercises or whether the methods are complementary and should therefore be used together. (Disaster Med Public Health Preparedness. 2016;10:544-548).

  16. CFD Approaches for Simulation of Wing-Body Stage Separation

    NASA Technical Reports Server (NTRS)

    Buning, Pieter G.; Gomez, Reynaldo J.; Scallion, William I.

    2004-01-01

    A collection of computational fluid dynamics tools and techniques are being developed and tested for application to stage separation and abort simulation for next-generation launch vehicles. In this work, an overset grid Navier-Stokes flow solver has been enhanced and demonstrated on a matrix of proximity cases and on a dynamic separation simulation of a belly-to-belly wing-body configuration. Steady cases show excellent agreement between Navier-Stokes results, Cartesian grid Euler solutions, and wind tunnel data at Mach 3. Good agreement has been obtained between Navier-Stokes, Euler, and wind tunnel results at Mach 6. An analysis of a dynamic separation at Mach 3 demonstrates that unsteady aerodynamic effects are not important for this scenario. Results provide an illustration of the relative applicability of Euler and Navier-Stokes methods to these types of problems.

  17. Evaluation of metal ions and surfactants effect on cell growth and exopolysaccharide production in two-stage submerged culture of Cordyceps militaris.

    PubMed

    Cui, Jian-Dong; Zhang, Ya-Nan

    2012-11-01

    During two-stage submerged fermentation of the medicinal mushroom Cordyceps militaris, it was found that K(+), Ca(2+), Mg(2+), and Mn(2+) were favorable to mycelial growth. Exopolysaccharide (EPS) production reached its highest levels in media containing Mg(2+) and Mn(2+), whereas Ca(2+) and K(+) failed to significantly increase EPS production. Sodium dodecyl sulfate (SDS) significantly enhanced EPS production, compared with cultures without added SDS, when it was added at the static culture stage of the two-stage cultivation process. The presence of Tween 80 in the medium not only stimulated mycelial growth but also increased EPS production. By response surface methodology (RSM), EPS production reached a peak value of 3.28 g/L under the optimal combination of 27.6 mM Mg(2+), 11.1 mM Mn(2+), and 0.05 mM SDS, 3.76-fold that obtained without metal ions and surfactant. These results are useful for better understanding the regulation of efficient EPS production by C. militaris in two-stage submerged culture.

  18. A minimally sufficient model for rib proximal-distal patterning based on genetic analysis and agent-based simulations

    PubMed Central

    Mah, In Kyoung

    2017-01-01

    For decades, the mechanism of skeletal patterning along a proximal-distal axis has been an area of intense inquiry. Here, we examine the development of the ribs, simple structures that in most terrestrial vertebrates consist of two skeletal elements—a proximal bone and a distal cartilage portion. While the ribs have been shown to arise from the somites, little is known about how the two segments are specified. During our examination of genetically modified mice, we discovered a series of progressively worsening phenotypes that could not be easily explained. Here, we combine genetic analysis of rib development with agent-based simulations to conclude that proximal-distal patterning and outgrowth could occur based on simple rules. In our model, specification occurs during somite stages due to varying Hedgehog protein levels, while later expansion refines the pattern. This framework is broadly applicable for understanding the mechanisms of skeletal patterning along a proximal-distal axis. PMID:29068314

  19. An adaptive two-stage sequential design for sampling rare and clustered populations

    USGS Publications Warehouse

    Brown, J.A.; Salehi, M.M.; Moradi, M.; Bell, G.; Smith, D.R.

    2008-01-01

    How to design an efficient large-area survey continues to be an interesting question for ecologists. In sampling large areas, as is common in environmental studies, adaptive sampling can be efficient because it ensures survey effort is targeted to subareas of high interest. In two-stage sampling, higher density primary sample units are usually of more interest than lower density primary units when populations are rare and clustered. Two-stage sequential sampling has been suggested as a method for allocating second stage sample effort among primary units. Here, we suggest a modification: adaptive two-stage sequential sampling. In this method, the adaptive part of the allocation process means the design is more flexible in how much extra effort can be directed to higher-abundance primary units. We discuss how best to design an adaptive two-stage sequential sample. © 2008 The Society of Population Ecology and Springer.
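
    One plain reading of the adaptive allocation is sketched below with synthetic counts: survey a first wave in every primary unit, then assign the remaining second-stage effort in proportion to observed abundance. The allocation rule is illustrative, not the authors' exact design.

```python
import numpy as np

# Adaptive two-stage allocation sketch: clustered units receive more plots.
rng = np.random.default_rng(0)
# five primary units, each holding 200 secondary units; counts are Poisson,
# with two units deliberately "clustered" (high lambda)
counts_by_unit = [rng.poisson(lam, 200) for lam in (0.05, 0.05, 0.1, 2.0, 5.0)]

first_wave, extra_total = 10, 50
wave1 = [rng.choice(u, first_wave, replace=False) for u in counts_by_unit]
abundance = np.array([w.sum() for w in wave1], dtype=float)

if abundance.sum() > 0:
    weights = abundance / abundance.sum()
else:
    weights = np.full(len(wave1), 1.0/len(wave1))   # fall back to equal effort
extra = np.floor(weights * extra_total).astype(int) # adaptive second-wave effort
print("first-wave totals:", abundance, "-> extra plots per unit:", extra)
```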

  20. The prelaying interval of emperor geese on the Yukon-Kuskokwim Delta, Alaska

    USGS Publications Warehouse

    Hupp, Jerry W.; Schmutz, J.A.; Ely, Craig R.

    2006-01-01

    We marked 136 female Emperor Geese (Chen canagica) in western Alaska with VHF or satellite (PTT) transmitters from 1999 to 2003 to monitor their spring arrival and nest initiation dates on the Yukon Delta, and to estimate prelaying interval lengths once at the nesting area. Ninety-two females with functional transmitters returned to the Yukon Delta in the spring after they were marked, and we located the nests of 35 of these individuals. Prelaying intervals were influenced by when snow melted in the spring and individual arrival dates on the Yukon Delta. The median prelaying interval was 15 days (range = 12-19 days) in a year when snow melted relatively late, and 11 days (range = 4-16 days) in two warmer years when snow melted earlier. In years when snow melted earlier, prelaying intervals of <12 days for 11 of 15 females suggested they initiated rapid follicle development on spring staging areas. The prelaying interval declined by approximately 0.4 days and nest initiation date increased approximately 0.5 days for each day a female delayed her arrival. Thus, females that arrived first on the Yukon Delta had prelaying intervals up to four days longer, yet they nested up to five days earlier, than females that arrived last. The proximity of spring staging areas on the Alaska Peninsula to nesting areas on the Yukon Delta may enable Emperor Geese to alter timing of follicle development depending on annual conditions, and to invest nutrients acquired from both areas in eggs during their formation. Plasticity in timing of follicle development is likely advantageous in a variable environment where melting of snow cover in the spring can vary by 2-3 weeks annually. © The Cooper Ornithological Society 2006.

  1. Probing the mechanism of fusion in a two-dimensional computer simulation.

    PubMed Central

    Chanturiya, Alexandr; Scaria, Puthurapamil; Kuksenok, Oleksandr; Woodle, Martin C

    2002-01-01

    A two-dimensional (2D) model of lipid bilayers was developed and used to investigate a possible role of membrane lateral tension in membrane fusion. We found that an increase of lateral tension in contacting monolayers of 2D analogs of liposomes and planar membranes could cause not only hemifusion, but also complete fusion when internal pressure is introduced in the model. With a certain set of model parameters it was possible to induce hemifusion-like structural changes by a tension increase in only one of the two contacting bilayers. The effect of lysolipids was modeled as an insertion of a small number of extra molecules into the cis or trans side of the interacting bilayers at different stages of simulation. It was found that cis insertion arrests fusion and trans insertion has no inhibitory effect on fusion. The possibility of protein participation in tension-driven fusion was tested in simulation, with one of two model liposomes containing a number of structures capable of reducing the area occupied by them in the outer monolayer. It was found that condensation of these structures was sufficient to produce membrane reorganization similar to that observed in simulations with "protein-free" bilayers. These data support the hypothesis that changes in membrane lateral tension may be responsible for fusion in both model phospholipid membranes and in biological protein-mediated fusion. PMID:12023230

  2. Defining the Ideal Time Interval Between Planned Induction Therapy and Surgery for Stage IIIA Non-Small Cell Lung Cancer.

    PubMed

    Samson, Pamela; Crabtree, Traves D; Robinson, Cliff G; Morgensztern, Daniel; Broderick, Stephen; Krupnick, A Sasha; Kreisel, Daniel; Patterson, G Alexander; Meyers, Bryan; Puri, Varun

    2017-04-01

    Induction therapy leads to significant improvement in survival for selected patients with stage IIIA non-small cell lung cancer. The ideal time interval between induction therapy and surgery remains unknown. Clinical stage IIIA non-small cell lung cancer patients receiving induction therapy and surgery were identified in the National Cancer Database. Delayed surgery was defined as greater than or equal to 3 months after starting induction therapy. A logistic regression model identified variables associated with delayed surgery. Cox proportional hazards modeling and Kaplan-Meier analysis were performed to evaluate variables independently associated with overall survival. From 2006 to 2010, 1,529 of 2,380 (64.2%) received delayed surgery. Delayed surgery patients were older (61.2 ± 10.0 years versus 60.3 ± 9.2; p = 0.03), more likely to be non-white (12.4% versus 9.7%; p = 0.046), and less likely to have private insurance (50% versus 58.2%; p = 0.002). Delayed surgery patients were also more likely to have a sublobar resection (6.3% versus 2.9%). On multivariate analysis, age greater than 68 years (odds ratio [OR], 1.37; 95% confidence interval [CI], 1.1 to 1.7) was associated with delayed surgery, whereas white race (OR, 0.75; 95% CI, 0.57 to 0.99) and private insurance status (OR, 0.82; 95% CI, 0.68 to 0.99) were associated with early surgery. Delayed surgery was associated with higher risk of long-term mortality (hazard ratio, 1.25; 95% CI, 1.07 to 1.47). Delayed surgery after induction therapy for stage IIIA lung cancer is associated with shorter survival, and is influenced by both social and physiologic factors. Prospective work is needed to further characterize the relationship between patient comorbidities and functional status with receipt of timely surgery. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  3. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
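
    A plain-reading sketch of the LDA/QR idea: a thin QR decomposition of the class-centroid matrix gives the stage-1 projection (cheap, and it sidesteps the singular scatter matrices), and a conventional small LDA runs in the reduced space. Implementation details here are assumptions drawn from the abstract, not the authors' exact algorithm.

```python
import numpy as np

# Sketch of two-stage LDA via QR decomposition (LDA/QR, as read from the abstract).
def lda_qr(X, y):
    """X: (n, d) data, y: (n,) labels; returns a (d, k-1) projection matrix."""
    classes = np.unique(y)
    C = np.stack([X[y == c].mean(axis=0) for c in classes], axis=1)  # (d, k)
    Q, _ = np.linalg.qr(C)                 # stage 1: orthonormal basis, (d, k)
    Z = X @ Q                              # reduced data, (n, k)
    mu = Z.mean(axis=0)
    Sw = sum((Z[y == c] - Z[y == c].mean(0)).T @ (Z[y == c] - Z[y == c].mean(0))
             for c in classes)             # within-class scatter, now k x k
    Sb = sum((y == c).sum() * np.outer(Z[y == c].mean(0) - mu,
                                       Z[y == c].mean(0) - mu) for c in classes)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)   # stage 2: small LDA
    order = np.argsort(evals.real)[::-1][:len(classes) - 1]
    return Q @ evecs[:, order].real        # compose the two stages

X = np.random.default_rng(0).normal(size=(90, 1000))   # d >> n: classical LDA fails
y = np.repeat([0, 1, 2], 30)
print(lda_qr(X, y).shape)                  # (1000, 2)
```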

  4. The effect of two different interval-training programmes on physiological and performance indices.

    PubMed

    Sindiani, Mahmood; Eliakim, Alon; Segev, Daria; Meckel, Yoav

    2017-08-01

    The aim of the present study was to compare the effect of an increasing-distance, interval-training programme and a decreasing-distance, interval-training programme, matched for total distance, on aerobic and anaerobic physiological indices. Forty physical education students were randomly assigned to either the increasing- or decreasing-distance, interval-training group (ITG and DTG), and completed two similar relevant sets of tests before and after six weeks of training. One training programme consisted of increasing-distance interval-training (100-200-300-400-500 m) and the other decreasing-distance interval training (500-400-300-200-100 m). While both training programmes led to a significant improvement in VO 2 max (ES = 0.83-1.25), the improvement in the DTG was significantly greater than in the ITG (14.5 ± 3.6 vs. 7.8 ± 3.2%, p < .05). In addition, while both training programmes led to a significant improvement in all anaerobic indices (ES = 0.83-1.63), the improvements in peak power (15.7 ± 7.8 vs. 8.9 ± 4.7), mean power (10.6 ± 5.4 vs. 6.8 ± 4.4), and fatigue index (18.2 ± 10.9 vs. 7.0 ± 14.2) were significantly greater in the DTG compared to the ITG (p < .05). The main finding of the present study was that beyond the significant positive effects of both training programmes on aerobic and anaerobic fitness, the DTG showed significant superiority over the ITG in improving aerobic and anaerobic performance capabilities. Coaches and athletes should therefore be aware that, in spite of identical total work, an interval-training programme might induce different physiological impacts if the order of intervals is not identical.

  5. Enhancing learning through optimal sequencing of web-based and manikin simulators to teach shock physiology in the medical curriculum.

    PubMed

    Cendan, Juan C; Johnson, Teresa R

    2011-12-01

    The Association of American Medical Colleges has encouraged educators to investigate proper linkage of simulation experiences with medical curricula. The authors aimed to determine if student knowledge and satisfaction differ between participation in web-based and manikin simulations for learning shock physiology and treatment and to determine if a specific training sequencing had a differential effect on learning. All 40 second-year medical students participated in a randomized, counterbalanced study with two interventions: group 1 (n = 20) participated in a web-based simulation followed by a manikin simulation and group 2 (n = 20) participated in reverse order. Knowledge and attitudes were documented. Mixed-model ANOVA indicated a significant main effect of time (F(1,38) = 18.6, P < 0.001, η_p^2 = 0.33). Group 1 scored significantly higher on quiz 2 (81.5%) than on quiz 1 (74.3%, t(19) = 3.9, P = 0.001), for an observed difference of 7.2% (95% confidence interval: 3.3, 11.0). Mean quiz scores of group 2 did not differ significantly (quiz 1: 77.0% and quiz 2: 79.7%). There was no significant main effect of group or a group by time interaction effect. Students rated the simulations as equally effective in teaching shock physiology (P = 0.88); however, the manikin simulation was regarded as more effective in teaching shock treatment (P < 0.001). Most students (73.7%) preferred the manikin simulation. The two simulations may be of similar efficacy for educating students on the physiology of shock; however, the data suggest improved learning when web-based simulation precedes manikin use. This finding warrants further study.

  6. Two-stage electrostatic precipitator using induction charging

    NASA Astrophysics Data System (ADS)

    Takashima, Kazunori; Kohno, Hiromu; Katatani, Atsushi; Kurita, Hirofumi; Mizuno, Akira

    2018-05-01

    An electrostatic precipitator (ESP) without using corona discharge was investigated herein. The ESP employed a two-stage configuration, consisting of an induction charging-based particle charger and a parallel plate type particle collector. By applying a high voltage of several kV, under which no corona discharge was generated in the charger, particles were charged by induction due to contact with charger electrodes. The amount of charge on the charged particles increased with the applied voltage and turbulent air flow in the charger. Performance of the ESP equipped with the induction charger was investigated using ambient air. The removal efficiency for particles ranging 0.3 µm to 5 µm in diameter increased with applied voltage and turbulence intensity of gas flow in the charger when the applied voltage was sufficiently low not to generate corona discharge. This suggests that induction charging can be used for electrostatic precipitation, which can reduce ozone generation and power consumption significantly.

  7. Two stage treatment of dairy effluent using immobilized Chlorella pyrenoidosa

    PubMed Central

    2013-01-01

    Background Dairy effluents contain a high organic load, and unscrupulous discharge of these effluents into aquatic bodies is a matter of serious concern besides deteriorating their water quality. Whilst physico-chemical treatment is the common mode of treatment, immobilized microalgae can be potentially employed to treat the high organic content, which offers numerous benefits along with waste water treatment. Methods A novel low-cost two-stage treatment was employed for the complete treatment of dairy effluent. The first stage consists of treating the dairy effluent in a photobioreactor (1 L) using immobilized Chlorella pyrenoidosa, while the second stage involves a two-column sand bed filtration technique. Results Whilst NH4+-N was completely removed, a 98% removal of PO4(3-)-P was achieved within 96 h of the two-stage purification process. The filtrate was tested for toxicity, and no mortality was observed at the end of the 96 h bioassay in zebrafish, which was used as a model. Moreover, a significant decrease in biological oxygen demand and chemical oxygen demand was achieved by this novel method. The separated biomass was also tested as a biofertilizer for rice seeds, and a 30% increase in root and shoot length was observed after its addition to the rice plants. Conclusions We conclude that the two-stage treatment of dairy effluent is highly effective in the removal of BOD and COD besides nutrients like nitrates and phosphates. The treatment also allows the treated waste water to be discharged safely into receiving water bodies, since it is non-toxic to aquatic life. Further, the algal biomass separated after the first stage of treatment was highly capable of increasing the growth of rice plants because of the nitrogen-fixing ability of the green alga, and offers great potential as a biofertilizer. PMID:24355316

  8. Development of repair mechanism of FSX-414 based 1st stage nozzle of gas turbine

    NASA Astrophysics Data System (ADS)

    Rahman, Md. Tawfiqur

    2017-06-01

    This paper describes the failure mechanisms and repair technology for the 1st stage nozzles, or vanes, of an industrial gas turbine, which are made of the cobalt-based superalloy FSX-414. 1st stage nozzles are important stationary components of gas turbine based power plants; they belong to the hot gas path components of the gas turbine, and their manufacturing process is casting. At present, it is widely accepted that the gas turbine based combined cycle power plant is the most efficient and cost effective solution for generating electricity. One of the factors behind the high efficiency of this type of gas turbine is the increase of its turbine inlet temperature. As an effect of this factor, in conjunction with some others, the 1st stage nozzle of the gas turbine operates under extremely high temperatures and thermal stresses. As a result, the design lifetime of these components becomes limited, and attention to the nozzles is required in order to achieve their design lifetime. Furthermore, due to unfriendly operating conditions and environmental effects, failure can occur at any time in these heat-resistant-alloy components, which may lead to severe damage of the gas turbine. To mitigate these adverse effects, scheduled maintenance is performed at predetermined intervals on the hot gas path components of gas turbine based power plants. This paper addresses common failures in a gas turbine's 1st stage nozzles. Usually these are repaired using the ADH process, but for several reasons the ADH process was not used here; hence the challenging task was performed using gas tungsten arc welding, which is presented in this article systematically.

  9. Two-dimensional Lagrangian simulation of suspended sediment

    USGS Publications Warehouse

    Schoellhamer, David H.

    1988-01-01

    A two-dimensional, laterally averaged model for suspended sediment transport in steady, gradually varied flow, based on the Lagrangian reference frame, is presented. The layered Lagrangian transport model (LLTM) for suspended sediment simulates laterally averaged concentrations. The elevations of nearly horizontal streamlines and the simulation time step are selected to optimize model stability and efficiency. The computational elements are parcels of water that are moved along the streamlines in the Lagrangian sense and are mixed with neighboring parcels. Three applications show that the LLTM can accurately simulate theoretical and empirical nonequilibrium suspended sediment distributions and slug injections of suspended sediment in a laboratory flume.
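
    The parcel-based idea can be sketched in a few lines: parcels ride along layer streamlines and exchange concentration with vertical neighbours. This is an illustrative toy with made-up velocities and mixing rates, not the LLTM's actual numerics:

      import numpy as np

      nlayers, nparcels = 4, 200
      u = np.linspace(0.2, 1.0, nlayers)       # layer velocities, m/s (hypothetical)
      x = np.zeros((nlayers, nparcels))        # parcel positions along the flume
      c = np.zeros((nlayers, nparcels))
      c[0, 0] = 1.0                            # slug injection into the bottom layer
      dt, k_mix = 1.0, 0.05                    # time step (s) and inter-layer mixing fraction

      for _ in range(300):
          x += u[:, None] * dt                 # Lagrangian advection along streamlines
          for k in range(nlayers - 1):         # exchange with the layer above
              # (the real LLTM mixes neighbouring parcels; index-aligned
              #  exchange is a deliberate simplification here)
              flux = k_mix * (c[k] - c[k + 1])
              c[k] -= flux
              c[k + 1] += flux

      print("mass conserved:", np.isclose(c.sum(), 1.0))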

  10. Two stages of directed forgetting: Electrophysiological evidence from a short-term memory task.

    PubMed

    Gao, Heming; Cao, Bihua; Qi, Mingming; Wang, Jing; Zhang, Qi; Li, Fuhong

    2016-06-01

    In this study, a short-term memory test was used to investigate the temporal course and neural mechanism of directed forgetting under different memory loads. Within each trial, two memory items with high or low load were presented sequentially, followed by a cue indicating whether the presented items should be remembered. After an interval, subjects were asked to respond to the probe stimuli. The ERPs locked to the cues showed that (a) the effect of cue type was initially observed during the P2 (160-240 ms) time window, with more positive ERPs for remembering relative to forgetting cues; (b) load effects were observed during the N2-P3 (250-500 ms) time window, with more positive ERPs for the high-load than low-load condition; (c) the cue effect was also observed during the N2-P3 time window, with more negative ERPs for forgetting versus remembering cues. These results demonstrated that directed forgetting involves two stages: task-relevance identification and information discarding. The cue effects during the N2 epoch supported the view that directed forgetting is an active process. © 2016 Society for Psychophysiological Research.

  11. Modeling and simulating two cut-to-length harvesting systems in central Appalachian hardwoods

    Treesearch

    Jingxin Wang; Chris B. LeDoux; Yaoxiang Li

    2003-01-01

    The production rates and costs of two cut-to-length harvesting systems were simulated using a modular ground-based simulation model and stand yield data from fully stocked, second-growth, even-aged central Appalachian hardwood forests. The two harvesters simulated were a modified John Deere 988 tracked excavator with a model RP 1600 single-grip sawhead and an excavator...

  12. Uranium isotopes distinguish two geochemically distinct stages during the later Cambrian SPICE event

    PubMed Central

    Dahl, Tais W.; Boyle, Richard A.; Canfield, Donald E.; Connelly, James N.; Gill, Benjamin C.; Lenton, Timothy M.; Bizzarro, Martin

    2015-01-01

    Anoxic marine zones were common in early Paleozoic oceans (542–400 Ma), and present a potential link to atmospheric pO2 via feedbacks linking global marine phosphorus recycling, primary production and organic carbon burial. Uranium (U) isotopes in carbonate rocks track the extent of ocean anoxia, whereas carbon (C) and sulfur (S) isotopes track the burial of organic carbon and pyrite sulfur (primary long-term sources of atmospheric oxygen). In combination, these proxies therefore reveal the comparative dynamics of ocean anoxia and oxygen liberation to the atmosphere over million-year time scales. Here we report high-precision uranium isotopic data in marine carbonates deposited during the Late Cambrian ‘SPICE’ event, at ca. 499 Ma, documenting a well-defined −0.18‰ negative δ238U excursion that occurs at the onset of the SPICE event’s positive δ13C and δ34S excursions, but peaks (and tails off) before them. Dynamic modelling shows that the different response of the U reservoir cannot be attributed solely to differences in residence times or reservoir sizes, suggesting that two chemically distinct ocean states occurred within the SPICE event. The first ocean stage involved a global expansion of euxinic waters, triggering the spike in U burial, and peaking in conjunction with a well-known trilobite extinction event. During the second stage widespread euxinia waned, causing U removal to tail off, but enhanced organic carbon and pyrite burial continued, coinciding with evidence for severe sulfate depletion in the oceans (Gill et al., 2011). We discuss scenarios for how an interval of elevated pyrite and organic carbon burial could have been sustained without widespread euxinia in the water column (both non-sulfidic anoxia and/or a more oxygenated ocean state are possibilities). Either way, the SPICE event encompasses two different stages of elevated organic carbon and pyrite burial maintained by high nutrient fluxes to the ocean, and potentially
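
    The contrast in response times can be illustrated with a generic one-box seawater δ238U mass balance, N dδ/dt = J(δ_riv − δ) − J[f_anox Δ_anox + (1 − f_anox) Δ_other], Euler-stepped through an imposed euxinic pulse. All parameter values below are illustrative placeholders, not those of the paper's model:

      import numpy as np

      N = 1.9e13          # ocean U inventory, mol (illustrative)
      J = 4.0e7           # input = total burial flux, mol/yr (illustrative)
      d_riv = -0.30       # riverine d238U, per mil (illustrative)
      D_anox, D_other = 0.6, 0.0   # burial fractionations, per mil (illustrative)

      d_sw, dt = -0.35, 1.0e3      # initial seawater value and time step (yr)
      for t in np.arange(0.0, 2.0e6, dt):
          f_anox = 0.30 if 2.0e5 < t < 6.0e5 else 0.01   # imposed euxinia pulse
          sink = f_anox * D_anox + (1.0 - f_anox) * D_other
          d_sw += dt * (J * (d_riv - d_sw) - J * sink) / N
      print(f"final d238U of seawater: {d_sw:.2f} per mil")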

  13. Two-stage perceptual learning to break visual crowding.

    PubMed

    Zhu, Ziyun; Fan, Zhenzhi; Fang, Fang

    2016-01-01

    When a target is presented with nearby flankers in the peripheral visual field, it becomes harder to identify, which is referred to as crowding. Crowding sets a fundamental limit of object recognition in peripheral vision, preventing us from fully appreciating cluttered visual scenes. We trained adult human subjects on a crowded orientation discrimination task and investigated whether crowding could be completely eliminated by training. We discovered a two-stage learning process with this training task. In the early stage, when the target and flankers were separated beyond a certain distance, subjects acquired a relatively general ability to break crowding, as evidenced by the fact that the breaking of crowding could transfer to another crowded orientation, even a crowded motion stimulus, although the transfer to the opposite visual hemi-field was weak. In the late stage, like many classical perceptual learning effects, subjects' performance gradually improved and showed specificity to the trained orientation. We also found that, when the target and flankers were spaced too finely, training could only reduce, rather than completely eliminate, the crowding effect. This two-stage learning process illustrates a learning strategy for our brain to deal with the notoriously difficult problem of identifying peripheral objects in clutter. The brain first learned to solve the "easy and general" part of the problem (i.e., improving the processing resolution and segmenting the target and flankers) and then tackle the "difficult and specific" part (i.e., refining the representation of the target).

  14. Catalytic two-stage coal hydrogenation and hydroconversion process

    DOEpatents

    MacArthur, James B.; McLean, Joseph B.; Comolli, Alfred G.

    1989-01-01

    A process for two-stage catalytic hydrogenation and liquefaction of coal to produce increased yields of low-boiling hydrocarbon liquid and gas products. In the process, the particulate coal is slurried with a process-derived liquid solvent and fed at temperature below about 650 °F into a first stage catalytic reaction zone operated at conditions which promote controlled rate liquefaction of the coal, while simultaneously hydrogenating the hydrocarbon recycle oils at conditions favoring hydrogenation reactions. The first stage reactor is maintained at 650-800 °F temperature, 1000-4000 psig hydrogen partial pressure, and 10-60 lb coal/hr/ft³ reactor space velocity. The partially hydrogenated material from the first stage reaction zone is passed directly to the close-coupled second stage catalytic reaction zone maintained at a temperature at least about 25 °F higher than for the first stage reactor and within a range of 750-875 °F temperature for further hydrogenation and thermal hydroconversion reactions. By this process, the coal feed is successively catalytically hydrogenated and hydroconverted at selected conditions, which results in significantly increased yields of desirable low-boiling hydrocarbon liquid products and minimal production of undesirable residuum and unconverted coal and hydrocarbon gases, with use of less energy to obtain the low molecular weight products, while catalyst life is substantially increased.
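
    The staged operating windows are easy to mis-read in running text; collected below as a configuration structure, with units and values exactly as stated in the abstract:

      # Operating windows transcribed from the patent abstract
      TWO_STAGE_CONDITIONS = {
          "stage_1": {
              "temperature_F": (650, 800),
              "h2_partial_pressure_psig": (1000, 4000),
              "space_velocity_lb_coal_per_hr_per_ft3": (10, 60),
              "role": "controlled-rate coal liquefaction + recycle-oil hydrogenation",
          },
          "stage_2": {
              "temperature_F": (750, 875),
              "min_offset_above_stage_1_F": 25,
              "role": "further hydrogenation and thermal hydroconversion",
          },
      }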

  15. Phase equilibrium in argon films stabilized by homogeneous surfaces and thermodynamics of two-stage melting transition.

    PubMed

    Ustinov, E A

    2014-02-21

    Freezing of gases adsorbed on open surfaces (e.g., graphite) and in narrow pores is a widespread phenomenon and the subject of a large number of publications. Modeling of the gas/liquid-solid transition is usually accomplished with a molecular simulation technique. However, quantitative analysis of the gas/liquid-solid coexistence and of the thermodynamic properties of the solid layer still encounters serious difficulties. This is mainly due to the effect of the simulation box size on the lattice constant. Since the lattice constant is a function of loading and temperature, once the ordering transition has occurred the simulation box size must be corrected in the course of the simulation according to the Gibbs-Duhem equation. A significant problem is also associated with accurate prediction of the two-dimensional liquid-solid coexistence because of the small difference in the densities of the coexisting phases. The aim of this study is a thermodynamic analysis of two-dimensional phase coexistence in systems involving defect-free crystal-like layers in narrow slit pores. Special attention was paid to the determination of triple point temperatures. It is shown that the intrinsic properties of an argon monolayer adsorbed on the graphite surface are similar to those of an isolated monolayer accommodated in a slit pore with a width of two argon collision diameters. Analysis of the latter system is shown to be clearer and less time-consuming than that of the former, which has allowed an explanation of the experimentally observed two-stage melting transition of the argon monolayer on graphite without invoking periodic surface potential modulation or an orientational transition.
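
    The box-size correction invoked above rests on the Gibbs-Duhem relation; for a two-dimensional adsorbed layer with spreading pressure π, area A and particle number N, a standard form (our notation, not necessarily the paper's) is:

      S\,\mathrm{d}T - A\,\mathrm{d}\pi + N\,\mathrm{d}\mu = 0
      \qquad\Longrightarrow\qquad
      \left(\frac{\partial \mu}{\partial \pi}\right)_{T} = \frac{A}{N},

    so at constant temperature the chemical potential tracks the area per particle, and the box dimensions must be adjusted as loading changes to keep the crystalline layer unstrained.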

  16. Phase equilibrium in argon films stabilized by homogeneous surfaces and thermodynamics of two-stage melting transition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ustinov, E. A., E-mail: eustinov@mail.wplus.net

    Freezing of gases adsorbed on open surfaces (e.g., graphite) and in narrow pores is a widespread phenomenon and the subject of a large number of publications. Modeling of the gas/liquid-solid transition is usually accomplished with a molecular simulation technique. However, quantitative analysis of the gas/liquid-solid coexistence and of the thermodynamic properties of the solid layer still encounters serious difficulties. This is mainly due to the effect of the simulation box size on the lattice constant. Since the lattice constant is a function of loading and temperature, once the ordering transition has occurred the simulation box size must be corrected in the course of the simulation according to the Gibbs-Duhem equation. A significant problem is also associated with accurate prediction of the two-dimensional liquid-solid coexistence because of the small difference in the densities of the coexisting phases. The aim of this study is a thermodynamic analysis of two-dimensional phase coexistence in systems involving defect-free crystal-like layers in narrow slit pores. Special attention was paid to the determination of triple point temperatures. It is shown that the intrinsic properties of an argon monolayer adsorbed on the graphite surface are similar to those of an isolated monolayer accommodated in a slit pore with a width of two argon collision diameters. Analysis of the latter system is shown to be clearer and less time-consuming than that of the former, which has allowed an explanation of the experimentally observed two-stage melting transition of the argon monolayer on graphite without invoking periodic surface potential modulation or an orientational transition.

  17. One-stage or two-stage revision surgery for prosthetic hip joint infection--the INFORM trial: a study protocol for a randomised controlled trial.

    PubMed

    Strange, Simon; Whitehouse, Michael R; Beswick, Andrew D; Board, Tim; Burston, Amanda; Burston, Ben; Carroll, Fran E; Dieppe, Paul; Garfield, Kirsty; Gooberman-Hill, Rachael; Jones, Stephen; Kunutsor, Setor; Lane, Athene; Lenguerrand, Erik; MacGowan, Alasdair; Moore, Andrew; Noble, Sian; Simon, Joanne; Stockley, Ian; Taylor, Adrian H; Toms, Andrew; Webb, Jason; Whittaker, John-Paul; Wilson, Matthew; Wylde, Vikki; Blom, Ashley W

    2016-02-17

    Periprosthetic joint infection (PJI) affects approximately 1% of patients following total hip replacement (THR) and often results in severe physical and emotional suffering. Current surgical treatment options are debridement, antibiotics and implant retention; revision THR; excision of the joint and amputation. Revision surgery can be done as either a one-stage or two-stage operation. Both types of surgery are well-established practice in the NHS and result in similar rates of re-infection, but little is known about the impact of these treatments from the patient's perspective. The main aim of this randomised controlled trial is to determine whether there is a difference in patient-reported outcome measures 18 months after randomisation for one-stage or two-stage revision surgery. INFORM (INFection ORthopaedic Management) is an open, two-arm, multi-centre, randomised, superiority trial. We aim to randomise 148 patients with eligible PJI of the hip from approximately seven secondary care NHS orthopaedic units from across England and Wales. Patients will be randomised via a web-based system to receive either a one-stage revision or a two-stage revision THR. Blinding is not possible due to the nature of the intervention. All patients will be followed up for 18 months. The primary outcome is the WOMAC Index, which assesses hip pain, function and stiffness, collected by questionnaire at 18 months. Secondary outcomes include the following: cost-effectiveness, complications, re-infection rates, objective hip function assessment and quality of life. A nested qualitative study will explore patients' and surgeons' experiences, including their views about trial participation and randomisation. INFORM is the first ever randomised trial to compare two widely accepted surgical interventions for the treatment of PJI: one-stage and two-stage revision THR. The results of the trial will benefit patients in the future as the main focus is on patient-reported outcomes: pain, function

  18. One-dimensional GIS-based model compared with a two-dimensional model in urban floods simulation.

    PubMed

    Lhomme, J; Bouvier, C; Mignot, E; Paquier, A

    2006-01-01

    A GIS-based one-dimensional flood simulation model is presented and applied to the centre of the city of Nîmes (Gard, France) for mapping flow depths and velocities in the street network. The geometry of the one-dimensional elements is derived from the Digital Elevation Model (DEM). The flow is routed from one element to the next using the kinematic wave approximation. At crossroads, the flows in the downstream branches are computed using a conceptual scheme. This scheme was previously designed to fit Y-shaped pipe junctions and has been modified here to fit X-shaped crossroads. The results were compared with those of a two-dimensional hydrodynamic model based on the full shallow water equations. The comparison shows that good agreement can be found in the steepest streets of the study zone, but differences may be important in the other streets. Some reasons that can explain the differences between the two models are given, and some research possibilities are proposed.
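
    The kinematic wave approximation replaces the full momentum equation with a rating Q = αA^m; a minimal explicit-upwind routing sketch for a single street reach, with hypothetical geometry rather than the Nîmes configuration:

      import numpy as np

      # dA/dt + dQ/dx = q_lat with Q = alpha * A**m (Manning, wide-channel rating)
      L, nx = 200.0, 40
      dx = L / nx
      slope, n_manning, width = 0.01, 0.015, 5.0          # hypothetical street reach
      alpha = np.sqrt(slope) / n_manning * width ** (-2.0 / 3.0)
      m = 5.0 / 3.0

      A = np.full(nx, 1e-3)        # wetted area, m^2
      dt, q_lat = 0.5, 0.0         # time step (s), lateral inflow (m^2/s)
      for _ in range(2000):
          Q = alpha * A ** m
          A[1:] -= dt / dx * (Q[1:] - Q[:-1])   # upwind difference, flow in +x
          A[0] = 0.05                            # upstream inflow boundary (hypothetical)
          A += dt * q_lat
      print(f"outlet flow depth ~ {A[-1] / width:.3f} m")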

  19. A two-stage extraction procedure for insensitive munition (IM) explosive compounds in soils.

    PubMed

    Felt, Deborah; Gurtowski, Luke; Nestler, Catherine C; Johnson, Jared; Larson, Steven

    2016-12-01

    The Department of Defense (DoD) is developing a new category of insensitive munitions (IMs) that are more resistant to detonation or propagation from external stimuli than traditional munition formulations. The new explosive constituent compounds are 2,4-dinitroanisole (DNAN), nitroguanidine (NQ), and nitrotriazolone (NTO). The production and use of IM formulations may result in interaction of IM component compounds with soil. The chemical properties of these IM compounds present unique challenges for extraction from environmental matrices such as soil. A two-stage extraction procedure was developed and tested using several soil types amended with known concentrations of IM compounds. This procedure incorporates both an acidified phase and an organic phase to account for the chemical properties of the IM compounds. The method detection limits (MDLs) for all IM compounds in all soil types were <5 mg/kg and met the non-regulatory risk-based Regional Screening Level (RSL) criteria for soil proposed by the U.S. Army Public Health Center. At defined environmentally relevant concentrations, the average recovery of each IM compound in each soil type was consistent and greater than 85%. The two-stage extraction method decreased the influence of soil composition on IM compound recovery. UV analysis of NTO across varied pH established an isosbestic point at a detection wavelength of 341 nm. The two-stage soil extraction method is equally effective for traditional munition compounds, a potentially important point when examining soils exposed to both traditional and insensitive munitions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. All-optical switch consisting of two-stage interferometers controlled by using saturable absorption of monolayer graphene.

    PubMed

    Oya, Masayuki; Kishikawa, Hiroki; Goto, Nobuo; Yanagiya, Shin-ichiro

    2012-11-19

    At routing nodes in future photonic networks, picosecond switching will be a key function. We propose an all-optical switch consisting of two-stage Mach-Zehnder interferometers whose arms contain graphene saturable-absorption films. Optical amplitudes along the interferometers are controlled to perform switching between two output ports, instead of the phase control used in conventional switches. Since only absorption is used to realize complete switching, the switching is accompanied by an insertion loss of 10.2 dB. A picosecond response can be expected because of the fast response of saturable absorption in graphene. The switching characteristics are theoretically analyzed and numerically simulated by the finite-difference beam propagation method (FD-BPM).
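
    The amplitude control exploits saturable absorption, commonly modelled as α(I) = α₀/(1 + I/I_sat) plus a non-saturable term; the sketch below evaluates the resulting single-pass film transmission with graphene-like but hypothetical parameters:

      import numpy as np

      def film_absorption(intensity, a_sat=0.020, a_ns=0.003, i_sat=1e6):
          """Saturable absorption: the saturable part bleaches as intensity rises.
          Parameter values are illustrative (an unsaturated graphene monolayer
          absorbs roughly 2.3% of normally incident light)."""
          return a_sat / (1.0 + intensity / i_sat) + a_ns

      for intensity in np.logspace(3, 9, 7):        # W/cm^2
          T = 1.0 - film_absorption(intensity)      # single-pass transmission
          print(f"I = {intensity:9.1e} W/cm^2 -> T = {T:.4f}")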

  1. Constraint Force Equation Methodology for Modeling Multi-Body Stage Separation Dynamics

    NASA Technical Reports Server (NTRS)

    Toniolo, Matthew D.; Tartabini, Paul V.; Pamadi, Bandu N.; Hotchko, Nathaniel

    2008-01-01

    This paper discusses a generalized approach to the multi-body separation problems in a launch vehicle staging environment based on constraint force methodology and its implementation into the Program to Optimize Simulated Trajectories II (POST2), a widely used trajectory design and optimization tool. This development facilitates the inclusion of stage separation analysis into POST2 for seamless end-to-end simulations of launch vehicle trajectories, thus simplifying the overall implementation and providing a range of modeling and optimization capabilities that are standard features in POST2. Analysis and results are presented for two test cases that validate the constraint force equation methodology in a stand-alone mode and its implementation in POST2.
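
    A constraint-force formulation of this type augments the bodies' equations of motion with joint-reaction unknowns (Lagrange multipliers) and solves the coupled linear system at each step. A generic one-constraint sketch, not the actual CFE/POST2 implementation:

      import numpy as np

      # Two bodies rigidly joined along x: phi(q) = x1 - x2 = 0, so J = [1, -1].
      M = np.diag([10.0, 5.0])            # body masses, kg (hypothetical)
      F = np.array([100.0, 0.0])          # applied forces: thrust on body 1 only
      J = np.array([[1.0, -1.0]])         # constraint Jacobian
      gamma = np.array([0.0])             # -(dJ/dt) qdot term; zero for constant J

      # KKT system:  [M  J^T] [qddot]   [F    ]
      #              [J   0 ] [lam  ] = [gamma]
      n, m = M.shape[0], J.shape[0]
      K = np.block([[M, J.T], [J, np.zeros((m, m))]])
      sol = np.linalg.solve(K, np.concatenate([F, gamma]))
      qddot, lam = sol[:n], sol[n:]
      print("accelerations:", qddot)       # both 6.67 m/s^2 while still joined
      print("joint reaction force:", lam)  # force transmitted through the connection

    Separation is then modeled by dropping the constraint rows, after which each body integrates freely under its own applied forces.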

  2. Moderate irrigation intervals facilitate establishment of two desert shrubs in the Taklimakan Desert Highway Shelterbelt in China.

    PubMed

    Li, Congjuan; Shi, Xiang; Mohamad, Osama Abdalla; Gao, Jie; Xu, Xinwen; Xie, Yijun

    2017-01-01

    Water influences various physiological and ecological processes of plants in different ecosystems, especially in desert ecosystems. The purpose of this study was to investigate the physiological and morphological acclimation of two desert shrubs, Haloxylon ammodendron and Calligonum mongolicum, to variations in irrigation intervals. Irrigation was applied at 1-, 2-, 4-, 8- and 12-week intervals from March to October during 2012-2014. The irrigation interval significantly affected the individual-scale carbon acquisition and biomass allocation pattern of both species. Under good water conditions (1- and 2-week intervals), carbon assimilation was significantly higher than under the other treatments, while under water shortage conditions (8- and 12-week intervals) there was much defoliation; under the moderate irrigation interval (4 weeks), the assimilative organs grew steadily with almost no defoliation. Both species maintained similar ecophysiologically adaptive strategies, although C. mongolicum was more sensitive to drought stress because of its shallow root system and preferential belowground allocation of resources. A moderate irrigation interval of 4 weeks was a suitable pattern for both plants, since it not only saved water but also met the water demands of the plants.

  3. Experimental Results of the First Two Stages of an Advanced Transonic Core Compressor Under Isolated and Multi-Stage Conditions.

    NASA Technical Reports Server (NTRS)

    Prahst, Patricia S.; Kulkarni, Sameer; Sohn, Ki H.

    2015-01-01

    NASA's Environmentally Responsible Aviation (ERA) Program calls for investigation of the technology barriers associated with improved fuel efficiency for large gas turbine engines. Under ERA, the highly loaded core compressor technology program attempts to realize the fuel burn reduction goal by increasing the overall pressure ratio of the compressor to increase the thermal efficiency of the engine. Study engines with overall pressure ratios of 60 to 70 are now being investigated. This means that the high pressure compressor would have to almost double in pressure ratio while keeping a high level of efficiency. NASA and GE teamed to address this challenge by testing the first two stages of an advanced GE compressor designed to meet the requirements of a very high pressure ratio core compressor. Previous test experience with a compressor that included these front two stages indicated a performance deficit relative to design intent. Therefore, the current rig was designed to run in 1-stage and 2-stage configurations in two separate tests, to assess whether the bow shock of the second rotor interacting with the upstream stage contributed to the unpredicted performance deficit, or whether the interaction of rotor 1 and stator 1 was the culprit. Thus, the goal was to fully understand the stage 1 performance under isolated and multi-stage conditions, and additionally to provide a detailed aerodynamic data set for CFD validation. Full use was made of steady and unsteady measurement methods to understand the fluid-dynamic loss source mechanisms due to rotor shock interaction and endwall losses. This paper presents a description of the compressor test article and its measured performance and operability, for both the single-stage and two-stage configurations. We focus on measurements at 97% corrected speed with design-intent vane setting angles.

  4. Cognitive task analysis-based design and authoring software for simulation training.

    PubMed

    Munro, Allen; Clark, Richard E

    2013-10-01

    The development of more effective medical simulators requires a collaborative team effort where three kinds of expertise are carefully coordinated: (1) exceptional medical expertise focused on providing complete and accurate information about the medical challenges (i.e., critical skills and knowledge) to be simulated; (2) instructional expertise focused on the design of simulation-based training and assessment methods that produce maximum learning and transfer to patient care; and (3) software development expertise that permits the efficient design and development of the software required to capture expertise, present it in an engaging way, and assess student interactions with the simulator. In this discussion, we describe a method of capturing more complete and accurate medical information for simulators and combine it with new instructional design strategies that emphasize the learning of complex knowledge. Finally, we describe three different types of software support (Development/Authoring, Run Time, and Post Run Time) required at different stages in the development of medical simulations and the instructional design elements of the software required at each stage. We describe the contributions expected of each kind of software and the different instructional control authoring support required. Reprint & Copyright © 2013 Association of Military Surgeons of the U.S.

  5. PPI-IRO: a two-stage method for protein-protein interaction extraction based on interaction relation ontology.

    PubMed

    Li, Chuan-Xi; Chen, Peng; Wang, Ru-Jing; Wang, Xiu-Jie; Su, Ya-Ru; Li, Jinyan

    2014-01-01

    Mining Protein-Protein Interactions (PPIs) from the fast-growing biomedical literature has been proven an effective approach for the identification of biological regulatory networks. This paper presents a novel method based on an Interaction Relation Ontology (IRO), which specifies and organises the words expressing various protein interaction relationships. Our method is a two-stage PPI extraction method. First, the IRO is applied in a binary classifier to determine whether a sentence contains a relation or not. Then, the IRO is used to guide PPI extraction by building the sentence dependency parse tree. Comprehensive and quantitative evaluations and detailed analyses demonstrate the significant performance of the IRO on relation sentence classification and PPI extraction. Our PPI extraction method yielded recalls of around 80% and 90% and F1 scores of around 54% and 66% on the AIMed and BioInfer corpora, respectively, which are superior to most existing extraction methods.
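
    The two-stage structure (sentence-level filter, then pair-level extraction) can be sketched with off-the-shelf components; the lexical classifier below is only an illustrative stand-in for the ontology-guided stage 1, and stage 2 is reduced to a filter (the real method builds dependency parse trees):

      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # Toy training data: 1 = sentence expresses an interaction relation
      sentences = [
          "ProtA binds ProtB in vitro.",
          "ProtC phosphorylates ProtD upon stimulation.",
          "ProtE was purified by affinity chromatography.",
          "Expression of ProtF was measured in liver tissue.",
      ]
      labels = [1, 1, 0, 0]

      stage1 = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      stage1.fit(sentences, labels)

      # Stage 2 would run only on sentences accepted by stage 1
      accepted = [s for s in sentences if stage1.predict([s])[0] == 1]
      print(accepted)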

  6. Outcome after associating liver partition and portal vein ligation for staged hepatectomy and conventional two-stage hepatectomy for colorectal liver metastases.

    PubMed

    Adam, R; Imai, K; Castro Benitez, C; Allard, M-A; Vibert, E; Sa Cunha, A; Cherqui, D; Baba, H; Castaing, D

    2016-10-01

    Although associating liver partition and portal vein ligation for staged hepatectomy (ALPPS) has been increasingly adopted by many centres, the oncological outcome for colorectal liver metastases compared with that after two-stage hepatectomy is still unknown. Between January 2010 and June 2014, all consecutive patients who underwent either ALPPS or two-stage hepatectomy for colorectal liver metastases in a single institution were included in the study. Morbidity, mortality, disease recurrence and survival were compared. The two groups were comparable in terms of clinicopathological characteristics. ALPPS was completed in all 17 patients, whereas the second-stage hepatectomy could not be completed in 15 of 41 patients. Ninety-day mortality rates for ALPPS and two-stage resection were 0 per cent (0 of 17) versus 5 per cent (2 of 41) (P = 0·891). Major complication rates (Clavien grade at least III) were 41 per cent (7 of 17) and 39 per cent (16 of 41) respectively (P = 0·999). Overall survival was significantly lower after ALPPS than after two-stage hepatectomy: 2-year survival 42 versus 77 per cent respectively (P = 0·006). Recurrent disease was more often seen in the liver in the ALPPS group. Salvage surgery was less often performed after ALPPS (2 of 8 patients) than after two-stage hepatectomy (10 of 17). Although major complication and 90-day mortality rates of ALPPS were similar to those of two-stage hepatectomy, overall survival was significantly lower following ALPPS. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.

  7. Electric and hybrid electric vehicles: A technology assessment based on a two-stage Delphi study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vyas, A.D.; Ng, H.K.; Santini, D.J.

    1997-12-01

    To address the uncertainty regarding future costs and operating attributes of electric and hybrid electric vehicles, a two stage, worldwide Delphi study was conducted. Expert opinions on vehicle attributes, current state of the technology, possible advancements, costs, and market penetration potential were sought for the years 2000, 2010, and 2020. Opinions related to such critical components as batteries, electric drive systems, and hybrid vehicle engines, as well as their respective technical and economic viabilities, were also obtained. This report contains descriptions of the survey methodology, analytical approach, and results of the analysis of survey data, together with a summary of other factors that will influence the degree of market success of electric and hybrid electric vehicle technologies. Responses by industry participants, the largest fraction among all the participating groups, are compared with the overall responses. An evaluation of changes between the two Delphi stages is also summarized. An analysis of battery replacement costs for various types is summarized, and variable operating costs for electric and hybrid vehicles are compared with those of conventional vehicles. A market penetration analysis is summarized, in which projected market shares from the survey are compared with predictions of shares on the basis of two market share projection models that use the cost and physical attributes provided by the survey. Finally, projections of market shares beyond the year 2020 are developed by use of constrained logit models of market shares, statistically fitted to the survey data.
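
    The logit market-share projection mentioned at the end has the standard multinomial-logit form s_i = exp(V_i) / Σ_j exp(V_j); a toy evaluation with made-up utility coefficients and vehicle attributes, not the report's fitted model:

      import numpy as np

      beta_cost, beta_range = -0.08, 0.006        # utility coefficients (hypothetical)
      vehicles = {                                # (purchase cost k$, range km), made up
          "conventional": (22.0, 600.0),
          "hybrid":       (26.0, 700.0),
          "electric":     (30.0, 250.0),
      }
      V = {k: beta_cost * cost + beta_range * rng for k, (cost, rng) in vehicles.items()}
      Z = sum(np.exp(v) for v in V.values())
      shares = {k: float(np.exp(v) / Z) for k, v in V.items()}
      print(shares)   # shares sum to 1.0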

  8. Ares I-X First Stage Separation Loads and Dynamics Reconstruction

    NASA Technical Reports Server (NTRS)

    Demory, Lee; Rooker, Bill; Jarmulowicz, Marc; Glaese, John

    2011-01-01

    The Ares I-X flight test provided NASA with the opportunity to test hardware and gather critical data to ensure the success of future Ares I flights. One of the primary test flight objectives was to evaluate the environment during First Stage separation to better understand the conditions that the J-2X second stage engine will experience at ignition [1]. A secondary objective was to evaluate the effectiveness of the stage separation motors. The Ares I-X flight test vehicle was successfully launched on October 29, 2009, achieving most of its primary and secondary test objectives. Ground based video camera recordings of the separation event appeared to show recontact of the First Stage and the Upper Stage Simulator followed by an unconventional tumbling of the Upper Stage Simulator. Closer inspection of the videos and flight test data showed that recontact did not occur. Also, the motion during staging was as predicted through CFD analysis performed during the Ares I-X development. This paper describes the efforts to reconstruct the vehicle dynamics and loads through the staging event by means of a time integrated simulation developed in TREETOPS, a multi-body dynamics software tool developed at NASA [2]. The simulation was built around vehicle mass and geometry properties at the time of staging and thrust profiles for the first stage solid rocket motor as well as for the booster deceleration motors and booster tumble motors. Aerodynamic forces were determined by models created from a combination of wind tunnel testing and CFD. The initial conditions such as position, velocity, and attitude were obtained from the Best Estimated Trajectory (BET), which is compiled from multiple ground based and vehicle mounted instruments. Dynamic loads were calculated by subtracting the inertial forces from the applied forces. The simulation results were compared to the Best Estimated Trajectory, accelerometer flight data, and to ground based video.
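
    The load-recovery step described above (inertial forces subtracted from applied forces) is, at each instant, a simple vector balance once mass properties, applied-force models, and measured accelerations are in hand; schematically, with entirely hypothetical numbers:

      import numpy as np

      m = 85_000.0                                   # stage mass at staging, kg (hypothetical)
      F_applied = np.array([1.2e6, 0.0, 3.0e4])      # motor thrust + aero, N (hypothetical)
      a_measured = np.array([13.9, 0.0, 0.25])       # accelerometer data, m/s^2 (hypothetical)

      F_dynamic = F_applied - m * a_measured          # residual carried as dynamic load
      print(F_dynamic)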

  9. Lunar lander stage requirements based on the Civil Needs Data Base

    NASA Technical Reports Server (NTRS)

    Mulqueen, John A.

    1992-01-01

    This paper examines the lunar lander stages that will be necessary for the future exploration and development of the Moon. Lunar lander stage sizing is discussed based on the projected lunar payloads listed in the Civil Needs Data Base. Factors that will influence the lander stage design are identified and discussed, including (1) lunar-orbiting and lunar-surface lander bases; (2) the implications of direct landing trajectories versus landing from a parking orbit; (3) the implications of landing site and parking orbit selection; (4) the use of expendable versus reusable lander stages; and (5) the descent/ascent trajectories. Data relating the lunar lander stage design requirements to each of the above factors are presented in parametric form, providing design data applicable to future mission model modifications and design studies.

  10. Two-stage preparation of magnetic sorbent based on exfoliated graphite with ferrite phases for sorption of oil and liquid hydrocarbons from the water surface

    NASA Astrophysics Data System (ADS)

    Pavlova, Julia A.; Ivanov, Andrei V.; Maksimova, Natalia V.; Pokholok, Konstantin V.; Vasiliev, Alexander V.; Malakho, Artem P.; Avdeev, Victor V.

    2018-05-01

    Due to its macropore structure and hydrophobic properties, exfoliated graphite (EG) is considered a promising sorbent for oil and liquid hydrocarbons on the water surface. However, there is a problem of collecting EG from the water surface. One solution is to modify EG with a magnetic compound and collect the EG with sorbed oil using a magnetic field. In this work, a method for the two-stage preparation of exfoliated graphite with ferrite phases is proposed. The method comprises impregnating expandable graphite with a mixed solution of iron (III) chloride and cobalt (II) or nickel (II) nitrate in the first stage, and thermally exfoliating the impregnated expandable graphite in the second stage to form exfoliated graphite containing cobalt or nickel ferrites. This two-stage method makes it possible to obtain an EG-based sorbent modified by ferrimagnetic phases with a high sorption capacity toward oil (up to 45-51 g/g) and a high saturation magnetization (up to 42 emu/g). Moreover, the method allows the magnetic sorbent to be produced in a short time (up to 10 s), with the thermal exfoliation carried out in air.

  11. Two-stage high temperature sludge gasification using the waste heat from hot blast furnace slags.

    PubMed

    Sun, Yongqi; Zhang, Zuotai; Liu, Lili; Wang, Xidong

    2015-12-01

    Nowadays, the disposal of sewage sludge from wastewater treatment plants and the recovery of waste heat from the steel industry have become two important environmental issues; to integrate these two problems, a two-stage high temperature sludge gasification approach using the waste heat in hot slags was investigated herein. The whole process was divided into two stages, i.e., low temperature sludge pyrolysis at ⩽900 °C in an argon agent and high temperature char gasification at ⩾900 °C in a CO2 agent, during which the heat required was supplied by hot slags in the corresponding temperature ranges. Both the thermodynamic and kinetic mechanisms were identified, and it was indicated that an Avrami-Erofeev model could best interpret the stage of char gasification. Furthermore, a schematic concept of this strategy was portrayed, based on which the potential CO yield and CO2 emission reduction achievable in China could be ∼1.92×10⁹ m³ and ∼1.93×10⁶ t, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.
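
    The Avrami-Erofeev description of char conversion, x(t) = 1 − exp(−(kt)^n), can be fitted directly to conversion-time data; a sketch on synthetic data, not the paper's measurements:

      import numpy as np
      from scipy.optimize import curve_fit

      def avrami(t, k, n):
          """Avrami-Erofeev conversion model: x(t) = 1 - exp(-(k*t)**n)."""
          return 1.0 - np.exp(-(k * t) ** n)

      rng = np.random.default_rng(0)
      t = np.linspace(0.1, 60.0, 30)                                  # minutes
      x_obs = avrami(t, 0.05, 1.8) + rng.normal(0.0, 0.01, t.size)    # synthetic data

      (k_fit, n_fit), _ = curve_fit(avrami, t, x_obs, p0=(0.1, 1.0))
      print(f"k = {k_fit:.3f} 1/min, n = {n_fit:.2f}")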

  12. Evaluation of Flight Deck-Based Interval Management Crew Procedure Feasibility

    NASA Technical Reports Server (NTRS)

    Wilson, Sara R.; Murdoch, Jennifer L.; Hubbs, Clay E.; Swieringa, Kurt A.

    2013-01-01

    Air traffic demand is predicted to increase over the next 20 years, creating a need for new technologies and procedures to support this growth in a safe and efficient manner. The National Aeronautics and Space Administration's (NASA) Air Traffic Management Technology Demonstration - 1 (ATD-1) will operationally demonstrate the feasibility of efficient arrival operations combining ground-based and airborne NASA technologies. The integration of these technologies will increase throughput, reduce delay, conserve fuel, and minimize environmental impacts. The ground-based tools include Traffic Management Advisor with Terminal Metering for precise time-based scheduling and Controller Managed Spacing decision support tools for better managing aircraft delay with speed control. The core airborne technology in ATD-1 is Flight deck-based Interval Management (FIM). FIM tools provide pilots with speed commands calculated using information from Automatic Dependent Surveillance - Broadcast. The precise merging and spacing enabled by FIM avionics and flight crew procedures will reduce excess spacing buffers and result in higher terminal throughput. This paper describes a human-in-the-loop experiment designed to assess the acceptability and feasibility of the ATD-1 procedures used in a voice communications environment. This experiment utilized the ATD-1 integrated system of ground-based and airborne technologies. Pilot participants flew a high-fidelity fixed base simulator equipped with an airborne spacing algorithm and a FIM crew interface. Experiment scenarios involved multiple air traffic flows into the Dallas-Fort Worth Terminal Radar Control airspace. Results indicate that the proposed procedures were feasible for use by flight crews in a voice communications environment. The delivery accuracy at the achieve-by point was within +/- five seconds and the delivery precision was less than five seconds. Furthermore, FIM speed commands occurred at a rate of less than one per minute

  13. Magnetic Gauge Instrumentation on the LANL Gas-Driven Two-Stage Gun

    NASA Astrophysics Data System (ADS)

    Alcon, R. R.; Sheffield, S. A.; Martinez, A. R.; Gustavsen, R. L.

    1997-07-01

    Our gas-driven two-stage gun was designed and built to do initiation studies on insensitive high explosives as well as other equation of state experiments on inert materials. Our preferred method of measuring initiation phenomena involves the use of in-situ magnetic particle velocity gauges. In order to provide the 1-D experimental area to accommodate this type of gauging in our two-stage gun, it has a 50-mm-diameter launch tube. We have used magnetic gauging on our 72-mm bore diameter single-stage gun for over 15 years and it has proven a very effective technique for all types of shock wave experiments, including those on high explosives. This technique has now been installed on our two-stage gun. We describe the experimental method, as well as some of the difficulties that arose during the installation. Several magnetic gauge experiments have been completed on plastic and high explosive materials. Waveforms obtained in some of the experiments will be discussed. Up to 10 in-situ particle velocity measurements can be made in a single experiment. This new technique is now working quite well, as is evidenced by the data. To our knowledge, this is the first time magnetic gauging has been used on a two-stage gun.

  14. Magnetic gauge instrumentation on the LANL gas-driven two-stage gun

    NASA Astrophysics Data System (ADS)

    Alcon, R. R.; Sheffield, S. A.; Martinez, A. R.; Gustavsen, R. L.

    1998-07-01

    The LANL gas-driven two-stage gun was designed and built to do initiation studies on insensitive high explosives as well as equation of state and reaction experiments on other materials. The preferred method of measuring reaction phenomena involves the use of in-situ magnetic particle velocity gauges. In order to accommodate this type of gauging in our two-stage gun, it has a 50-mm-diameter launch tube. We have used magnetic gauging on our 72-mm bore diameter single-stage gun for over 15 years and it has proven a very effective technique for all types of shock wave experiments, including those on high explosives. This technique has now been installed on our gas-driven two-stage gun. We describe the method used, as well as some of the difficulties that arose during the installation. Several magnetic gauge experiments have been completed on plastic materials. Waveforms obtained in some of the experiments will be discussed. Up to 10 in-situ particle velocity measurements can be made in a single experiment. This new technique is now working quite well, as is evidenced by the data. To our knowledge, this is the first time magnetic gauging has been used on a two-stage gun.

  15. A monitoring system based on electric vehicle three-stage wireless charging

    NASA Astrophysics Data System (ADS)

    Hei, T.; Liu, Z. Z.; Yang, Y.; Hongxing, CHEN; Zhou, B.; Zeng, H.

    2016-08-01

    A monitoring system for three-stage wireless charging was designed. The vehicle terminal contains a core board used for battery information collection and charging control, while a power-measurement and charging-control core board is provided at the transmitting terminal, which communicates with the receiver by Bluetooth. A touch-screen display unit was designed based on MCGS (Monitor and Control Generated System) to simulate charging behavior and to debug the system conveniently. Practical application showed that the system is stable and reliable and has favorable application prospects.

  16. Hypospadias repair: Byar's two stage operation revisited.

    PubMed

    Arshad, A R

    2005-06-01

    Hypospadias is a congenital deformity characterised by an abnormally located urethral opening, which may occur anywhere proximal to its normal location, from the ventral surface of the glans penis to the perineum. Many operations have been described for the management of this deformity. One hundred and fifteen patients with hypospadias were treated at the Department of Plastic Surgery, Hospital Kuala Lumpur, Malaysia between September 1987 and December 2002, of whom 100 underwent Byar's procedure. The age of the patients ranged from neonates to 26 years. Sixty-seven patients had penoscrotal (58%), 20 had proximal penile (18%), 13 had distal penile (11%) and 15 had subcoronal hypospadias (13%). The operations performed were Byar's two-stage procedure (100), Bracka's two-stage procedure (11), flip-flap (2) and MAGPI (2). The most common complication encountered following hypospadias surgery was urethral fistula, at a rate of 18%. There is a higher incidence of proximal hypospadias in the Malaysian community. Byar's procedure is a very versatile technique and can be used for all types of hypospadias; the fistula rate was 18% in this series.

  17. Energy compression of nanosecond high-voltage pulses based on two-stage hybrid scheme

    NASA Astrophysics Data System (ADS)

    Ulmaskulov, M. R.; Mesyats, G. A.; Sadykova, A. G.; Sharypov, K. A.; Shpak, V. G.; Shunailov, S. A.; Yalandin, M. I.

    2017-04-01

    Test results of a high-voltage subnanosecond pulse generator with a hybrid, two-stage energy compression scheme are presented. After the first compression section, based on a gas spark gap, a ferrite-filled gyromagnetic nonlinear transmission line is used. The proposed technical solution makes it possible to increase the voltage pulse amplitude from -185 kV to -325 kV, with the 2-ns pulse rise time shortened to ∼180 ps. For a smaller output voltage amplitude of -240 kV, the shortest pulse front of ∼85 ps was obtained. The generator at maximum amplitude was used to form an ultra-short flow of runaway electrons in an air-filled discharge gap, with particle energies approaching 700 keV.

  18. Application of CFE/POST2 for Simulation of Launch Vehicle Stage Separation

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Tartabini, Paul V.; Toniolo, Matthew D.; Roithmayr, Carlos M.; Karlgaard, Christopher D.; Samareh, Jamshid A.

    2009-01-01

    The constraint force equation (CFE) methodology provides a framework for modeling constraint forces and moments acting at joints that connect multiple vehicles. With implementation in Program to Optimize Simulated Trajectories II (POST 2), the CFE provides a capability to simulate end-to-end trajectories of launch vehicles, including stage separation. In this paper, the CFE/POST2 methodology is applied to the Shuttle-SRB separation problem as a test and validation case. The CFE/POST2 results are compared with STS-1 flight test data.

  19. Free-Flight Test Results of Scale Models Simulating Viking Parachute/Lander Staging

    NASA Technical Reports Server (NTRS)

    Polutchko, Robert J.

    1973-01-01

    This report presents the results of Viking Aerothermodynamics Test D4-34.0. Motion picture coverage of a number of scale-model drop tests provides the data from which time-position characteristics, canopy shape, and model system attitudes are measured. These data are processed to obtain the instantaneous drag of a model simulating the Viking decelerator system during parachute staging at Mars. Through scaling laws derived prior to the test (Appendices A and B), these results are used to predict the performance of the Viking decelerator parachute during staging at Mars. The tests were performed at the NASA/Kennedy Space Center (KSC) Vertical Assembly Building (VAB); model assemblies were dropped 300 feet to a platform in High Bay No. 3. The data consist of an edited master film (negative), which is on permanent file in the NASA/LRC Library. The principal results of this investigation indicate that, for Viking parachute staging at Mars: 1. The parachute staging separation distance is always positive and continuously increasing, generally along the descent path. 2. At staging, the parachute drag coefficient is at least 55% of its pre-stage equilibrium value; a quarter of a minute later, it has recovered to its pre-stage value.

  20. Dense electro-optic frequency comb generated by two-stage modulation for dual-comb spectroscopy.

    PubMed

    Wang, Shuai; Fan, Xinyu; Xu, Bingxin; He, Zuyuan

    2017-10-01

    An electro-optic frequency comb enables frequency-agile comb-based spectroscopy without using sophisticated phase-locking electronics. Nevertheless, dense electro-optic frequency combs over broad spans have yet to be developed. In this Letter, we propose a straightforward and efficient method for electro-optic frequency comb generation with a small line spacing and a large span. This method is based on two-stage modulation: generating an 18 GHz line-spacing comb at the first stage and a 250 MHz line-spacing comb at the second stage. After generating an electro-optic frequency comb covering 1500 lines, we set up an easily established mutually coherent hybrid dual-comb interferometer, which combines the generated electro-optic frequency comb and a free-running mode-locked laser. As a proof of concept, this hybrid dual-comb interferometer is used to measure the absorption and dispersion profiles of the molecular transition of H¹³CN with a spectral resolution of 250 MHz.
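
    The two line spacings quoted above fix the comb's geometry; a quick bookkeeping check of the subdivision and span implied by the abstract's numbers:

      # Two-stage electro-optic comb arithmetic (values from the abstract)
      coarse_spacing_hz = 18e9      # first-stage line spacing
      fine_spacing_hz = 250e6       # second-stage line spacing
      n_lines = 1500

      per_coarse = coarse_spacing_hz / fine_spacing_hz      # 72 fine lines per coarse line
      span_ghz = (n_lines - 1) * fine_spacing_hz / 1e9      # ~375 GHz covered
      print(f"{per_coarse:.0f} fine lines per coarse line, span ~ {span_ghz:.0f} GHz")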